IGLib 1.7.2
The IGLib base library EXTENDED - with other libraries and applications.
Meta.Numerics.Statistics.Sample Class Reference

Represents a set of data points, where each data point consists of a single real number. More...

Inheritance diagram for Meta.Numerics.Statistics.Sample:
Collaboration diagram for Meta.Numerics.Statistics.Sample:

Public Member Functions

 Sample ()
 Initializes a new, empty sample. More...
 
 Sample (string name)
 Initializes a new, empty sample with the given name. More...
 
 Sample (IEnumerable< double > values)
 Initializes a new sample from a list of values. More...
 
 Sample (params double[] values)
 Initializes a new sample from a list of values. More...
 
void Add (double value)
 Adds a value to the sample. More...
 
void Add (IEnumerable< double > values)
 Adds multiple values to the sample. More...
 
void Add (params double[] values)
 Adds multiple values to the sample. More...
 
bool Remove (double value)
 Removes a given value from the sample. More...
 
void Clear ()
 Remove all values from the sample. More...
 
void Transform (Func< double, double > transformFunction)
 Transforms all values using a user-supplied function. More...
 
bool Contains (double value)
 Determines whether the sample contains the given value. More...
 
Sample Copy ()
 Copies the sample. More...
 
double Moment (int n)
 Computes the given sample moment. More...
 
double MomentAboutMean (int n)
 Computes the given sample moment about its mean. More...
 
double LeftProbability (double value)
 Gets the fraction of values equal to or less than the given value. More...
 
double InverseLeftProbability (double P)
 Gets the sample value corresponding to a given percentile score. More...
 
UncertainValue PopulationMoment (int n)
 Estimates the given population moment using the sample. More...
 
UncertainValue PopulationMomentAboutMean (int n)
 Estimates the given population moment about the mean using the sample. More...
 
TestResult ZTest (double referenceMean, double referenceStandardDeviation)
 Performs a z-test. More...
 
TestResult StudentTTest (double referenceMean)
 Tests whether the sample mean is compatible with the reference mean. More...
 
TestResult SignTest (double referenceMedian)
 Tests whether the sample median is compatible with the given reference value. More...
 
TestResult KolmogorovSmirnovTest (Distribution distribution)
 Tests whether the sample is compatible with the given distribution. More...
 
TestResult KuiperTest (Distribution distribution)
 Tests whether the sample is compatible with the given distribution. More...
 
IEnumerator< double > GetEnumerator ()
 Gets an enumerator of sample values. More...
 
FitResult MaximumLikelihoodFit (IParameterizedDistribution distribution)
 Performs a maximum likelihood fit. More...
 
void Load (IDataReader reader, int dbIndex)
 Loads values from a data reader. More...
 

Static Public Member Functions

static TestResult StudentTTest (Sample a, Sample b)
 Tests whether one sample mean is compatible with another sample mean. More...
 
static TestResult MannWhitneyTest (Sample a, Sample b)
 Tests whether the sample median is compatible with the median of another sample. More...
 
static OneWayAnovaResult OneWayAnovaTest (params Sample[] samples)
 Performs a one-way ANOVA. More...
 
static OneWayAnovaResult OneWayAnovaTest (IList< Sample > samples)
 Performs a one-way ANOVA. More...
 
static TestResult KruskalWallisTest (IList< Sample > samples)
 Performs a Kruskal-Wallis test on the given samples. More...
 
static TestResult KruskalWallisTest (params Sample[] samples)
 Performs a Kruskal-Wallis test on the given samples. More...
 
static TestResult KolmogorovSmirnovTest (Sample a, Sample b)
 Tests whether the sample is compatible with another sample. More...
 
static TestResult FisherFTest (Sample a, Sample b)
 Tests whether the variances of two samples are compatible. More...
 

Properties

string Name [get, set]
 Gets or sets the name of the sample. More...
 
bool IsReadOnly [get]
 Gets a value indicating whether the sample is read-only. More...
 
int Count [get]
 Gets the number of values in the sample. More...
 
double Mean [get]
 Gets the sample mean. More...
 
double Variance [get]
 Gets the sample variance. More...
 
double StandardDeviation [get]
 Gets the sample standard deviation. More...
 
double Skewness [get]
 Gets the sample skewness. More...
 
double Median [get]
 Gets the sample median. More...
 
Interval InterquartileRange [get]
 Gets the interquartile range of sample measurements. More...
 
double Minimum [get]
 Gets the smallest value in the sample. More...
 
double Maximum [get]
 Gets the largest value in the sample. More...
 
UncertainValue PopulationMean [get]
 Gets an estimate of the population mean from the sample. More...
 
UncertainValue PopulationVariance [get]
 Gets an estimate of the population variance from the sample. More...
 
UncertainValue PopulationStandardDeviation [get]
 Gets an estimate of the population standard deviation from the sample. More...
 

Private Member Functions

UncertainValue EstimateFirstCumulant ()
 
UncertainValue EstimateSecondCumulant ()
 
UncertainValue EstimateThirdCumulant ()
 
UncertainValue EstimateCentralMoment (int r)
 
TestResult KolmogorovSmirnovTest (Distribution distribution, int count)
 
void ComputeDStatistics (Distribution distribution, out double D1, out double D2)
 
IEnumerator IEnumerable. GetEnumerator ()
 
void ICollection< double >. CopyTo (double[] array, int start)
 

Private Attributes

SampleStorage data
 
bool isReadOnly
 

Detailed Description

Represents a set of data points, where each data point consists of a single real number.

A univariate sample is a data set which records one number for each independent observation. For example, data from a study which measured the weight of each subject could be stored in the Sample class. The class offers descriptive statistics for the sample, estimates of descriptive statistics of the underlying population distribution, and statistical tests to compare the sample distribution to other sample distributions or theoretical models.
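As a brief orientation, the following usage sketch (added here for illustration; it is not part of the generated reference, and the data values are invented) shows how the members documented below combine. Later sketches on this page assume the same using directives.

using System;
using Meta.Numerics.Statistics;

public static class SampleDemo {

    public static void Main () {
        // create a named sample and add some (hypothetical) measurements
        Sample weights = new Sample("weights");
        weights.Add(72.5, 68.1, 80.3, 75.9, 77.2);

        // descriptive statistics of the sample itself
        Console.WriteLine("n = {0}", weights.Count);
        Console.WriteLine("mean = {0}", weights.Mean);
        Console.WriteLine("median = {0}", weights.Median);
        Console.WriteLine("sd = {0}", weights.StandardDeviation);

        // estimates of the underlying population statistics, with uncertainties
        Console.WriteLine("population mean = {0}", weights.PopulationMean);
        Console.WriteLine("population sd = {0}", weights.PopulationStandardDeviation);
    }
}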

Constructor & Destructor Documentation

Meta.Numerics.Statistics.Sample.Sample ( )
inline

Initializes a new, empty sample.

Meta.Numerics.Statistics.Sample.Sample ( string  name)
inline

Initializes a new, empty sample with the given name.

Parameters
name: The name of the sample.
Meta.Numerics.Statistics.Sample.Sample ( IEnumerable< double >  values)
inline

Initializes a new sample from a list of values.

Parameters
values: Values to add to the sample.
Exceptions
ArgumentNullException: values is null.
Meta.Numerics.Statistics.Sample.Sample ( params double[]  values)
inline

Initializes a new sample from a list of values.

Parameters
values: Values to add to the sample.

Member Function Documentation

void Meta.Numerics.Statistics.Sample.Add ( double  value)
inline

Adds a value to the sample.

Parameters
value: The value to add.

Referenced by Test.SampleTest.AnovaDistribution(), Test.SampleTest.AnovaTest(), Test.BivariateSampleTest.BivariateLinearRegression(), Test.BivariateSampleTest.BivariateLinearRegressionGoodnessOfFitDistribution(), Test.MultivariateSampleTest.BivariateNullAssociation(), Test.BivariateSampleTest.BivariatePolynomialRegression(), Test.BugTests.Bug7213(), FutureTest.FutureTest.ChiSquareDistribution(), Test.SampleTest.CreateSample(), Test.SampleTest.ExponentialFitUncertainty(), Test.DistributionTest.FisherTest(), Test.DataSetTest.FitDataToLineChiSquaredTest(), Test.DataSetTest.FitDataToPolynomialChiSquaredTest(), Test.DataSetTest.FitDataToPolynomialUncertaintiesTest(), Test.DistributionTest.GammaFromExponential(), Test.DistributionTest.InverseGaussianSummation(), Test.NullDistributionTests.KendallNullDistributionTest(), Test.NullDistributionTests.KolmogorovNullDistributionTest(), Test.NullDistributionTests.KuiperNullDistributionTest(), Test.SampleTest.LowSampleMoments(), Test.MultivariateSampleTest.MultivariateLinearRegressionNullDistribution(), Test.BivariateSampleTest.PearsonRDistribution(), Test.SampleTest.SampleCopy(), Test.SampleTest.SampleKuiperTest(), Test.SampleTest.SampleManipulations(), Test.SampleTest.SampleMedian(), Test.SampleTest.SignTestDistribution(), Test.NullDistributionTests.SpearmanNullDistributionTest(), Test.DistributionTest.StudentTest(), Test.DistributionTest.StudentTest2(), Test.RandomTest.TimeGammaGenerators(), Test.SampleTest.TTestDistribution(), Test.NullDistributionTests.TwoSampleKolmogorovNullDistributionTest(), FutureTest.FutureTest.TwoSampleKS2(), Test.DistributionTest.UniformOrderStatistics(), Test.SampleTest.WaldFitUncertainties(), and Test.SampleTest.ZTestDistribution().

void Meta.Numerics.Statistics.Sample.Add ( IEnumerable< double >  values)
inline

Adds multiple values to the sample.

Parameters
values: An enumerable set of the values to add.
void Meta.Numerics.Statistics.Sample.Add ( params double[]  values)
inline

Adds multiple values to the sample.

Parameters
values: An arbitrary number of values.
bool Meta.Numerics.Statistics.Sample.Remove ( double  value)
inline

Removes a given value from the sample.

Parameters
value: The value to remove.
Returns
True if the value was found and removed, otherwise false.

Referenced by Test.SampleTest.LowSampleMoments(), and Test.SampleTest.SampleManipulations().

void Meta.Numerics.Statistics.Sample.Clear ( )
inline

Remove all values from the sample.

Referenced by Test.SampleTest.SampleManipulations().

void Meta.Numerics.Statistics.Sample.Transform ( Func< double, double >  transformFunction)
inline

Transforms all values using a user-supplied function.

Parameters
transformFunction: The function used to transform the values, which must not be null.

For example, to replace all values with their logarithms, apply a transform using Math.Log(double).
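A sketch of that call (the sample values are invented for illustration):

// a small sample of (hypothetical) positive measurements
Sample lifetimes = new Sample(2.3, 0.8, 5.1, 1.7, 3.4);

// replace every value with its natural logarithm
lifetimes.Transform(Math.Log);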

If the supplied transform function throws an exception, or returns infinite or NaN values, the transformation may be incomplete or the data corrupted.

Referenced by Test.SampleTest.SampleTransform().

bool Meta.Numerics.Statistics.Sample.Contains ( double  value)
inline

Determines whether the sample contains the given value.

Parameters
value: The value to check for.
Returns
True if the sample contains value, otherwise false.

Referenced by Test.SampleTest.SampleManipulations().

Sample Meta.Numerics.Statistics.Sample.Copy ( )
inline

Copies the sample.

Returns
An independent copy of the sample.

Referenced by Test.SampleTest.SampleCopy().

double Meta.Numerics.Statistics.Sample.Moment ( int  n)
inline

Computes the given sample moment.

Parameters
n: The order of the moment to compute.
Returns
The n-th moment of the sample.

References Meta.Numerics.MoreMath.Pow(), and Meta.Numerics.MoreMath.Sqr().

double Meta.Numerics.Statistics.Sample.MomentAboutMean ( int  n)
inline

Computes the given sample moment about its mean.

Parameters
n: The order of the moment to compute.
Returns
The n-th moment of the sample about its mean.

References Meta.Numerics.MoreMath.Pow().
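A short sketch (invented values) relating these raw and central sample moments to the descriptive properties documented below:

Sample s = new Sample(1.0, 2.0, 4.0, 8.0);

double m1 = s.Moment(1);           // first raw moment; equals s.Mean
double c2 = s.MomentAboutMean(2);  // second central moment; equals s.Variance
double c3 = s.MomentAboutMean(3);  // third central moment; enters the sample skewness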

double Meta.Numerics.Statistics.Sample.LeftProbability ( double  value)
inline

Gets the fraction of values equal to or less than the given value.

Parameters
value: The reference value.
Returns
The fraction of values in the sample that are less than or equal to the given reference value.

Referenced by Test.BugTests.Bug6988().

double Meta.Numerics.Statistics.Sample.InverseLeftProbability ( double  P)
inline

Gets the sample value corresponding to a given percentile score.

Parameters
P: The percentile, which must lie between zero and one.
Returns
The corresponding value.
Exceptions
ArgumentOutOfRangeException: P lies outside [0,1].
InsufficientDataException: Sample.Count is less than two.

Referenced by Test.BugTests.Bug6988(), Meta.Numerics.Statistics.Distributions.WeibullDistribution.FitToSample(), and Test.SampleTest.SampleInterquartileRange().

UncertainValue Meta.Numerics.Statistics.Sample.PopulationMoment ( int  n)
inline

Estimates the given population moment using the sample.

Parameters
n: The order of the moment.
Returns
An estimate of the n-th moment of the population.

Referenced by Test.SampleTest.SampleMoments().

UncertainValue Meta.Numerics.Statistics.Sample.PopulationMomentAboutMean ( int  n)
inline

Estimates the given population moment about the mean using the sample.

Parameters
n: The order of the moment.
Returns
An estimate, with uncertainty, of the n-th moment about the mean of the underlying population.
Exceptions
ArgumentOutOfRangeException: n is negative.

Referenced by Test.SampleTest.SampleMoments(), Test.SampleTest.SamplePopulationMomentEstimateVariances(), Test.SampleTest.SignTestDistribution(), and Test.NullDistributionTests.TwoSampleKolmogorovNullDistributionTest().

UncertainValue Meta.Numerics.Statistics.Sample.EstimateFirstCumulant ( )
inlineprivate
UncertainValue Meta.Numerics.Statistics.Sample.EstimateSecondCumulant ( )
inlineprivate
UncertainValue Meta.Numerics.Statistics.Sample.EstimateThirdCumulant ( )
inlineprivate
UncertainValue Meta.Numerics.Statistics.Sample.EstimateCentralMoment ( int  r)
inlineprivate
TestResult Meta.Numerics.Statistics.Sample.ZTest ( double  referenceMean,
double  referenceStandardDeviation 
)
inline

Performs a z-test.

Parameters
referenceMean: The mean of the comparison population.
referenceStandardDeviation: The standard deviation of the comparison population.
Returns
A test result indicating whether the sample mean is significantly different from that of the comparison population.

A z-test determines whether the sample is compatible with a normal population with known mean and standard deviation. In most cases, Student's t-test (StudentTTest(double)), which does not assume a known population standard deviation, is more appropriate.

Suppose a standardized test exists, for which it is known that the mean score is 100 and the standard deviation is 15 across the entire population. The test is administered to a small sample of a subpopulation, who obtain a mean sample score of 95. You can use the z-test to determine how likely it is that the subpopulation mean really is lower than the population mean, that is, that their slightly lower mean score in your sample is not merely a fluke.
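The scenario above could be coded along the following lines (a sketch; the score values are invented for illustration):

// scores obtained by the small subpopulation sample (hypothetical values, mean 95)
Sample scores = new Sample(92.0, 99.0, 87.0, 101.0, 96.0);

// compare against the known population mean 100 and standard deviation 15
TestResult z = scores.ZTest(100.0, 15.0);

// the left probability is the chance of a sample mean this low
// if the subpopulation mean really were 100
double p = z.LeftProbability;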

Exceptions
InsufficientDataException: Sample.Count is zero.
See also
StudentTTest(double)

Referenced by Test.SampleTest.ZTestDistribution().

TestResult Meta.Numerics.Statistics.Sample.StudentTTest ( double  referenceMean)
inline

Tests whether the sample mean is compatible with the reference mean.

Parameters
referenceMean: The reference mean.
Returns
The result of the test. The test statistic is a t-value. If t > 0, the one-sided likelihood to obtain a greater value under the null hypothesis is the (right) probability of that value. If t < 0, the corresponding one-sided likelihood is the (left) probability of that value. The two-sided likelihood to obtain a t-value as far or farther from zero as the value obtained is just twice the one-sided likelihood.

The test statistic of Student's t-test is the difference between the sample mean and the reference mean, measured in units of the sample mean uncertainty. Under the null hypothesis that the sample was drawn from a normally distributed population with the given reference mean, this statistic can be shown to follow a Student distribution (StudentDistribution). If t is far from zero, with correspondingly small left or right tail probability, then the sample is unlikely to have been drawn from a population with the given reference mean.

Because the distribution of a t-statistic assumes a normally distributed population, this test should be used only on sample data compatible with a normal distribution. The sign test (SignTest) is a non-parametric alternative that can be used to test the compatibility of the sample median with an assumed population median.

In some country, the legal blood alcohol limit for drivers is 80 on some scale. Because they have noticed that the results given by their measuring device fluctuate, the police perform three separate measurements on a suspected drunk driver. They obtain the results 81, 84, and 93. They argue that, because all three results exceed the limit, the court should be very confident that the driver's blood alcohol level did, in fact, exceed the legal limit. You are the driver's lawyer. Can you make an argument that the court shouldn't be so sure?

Here is some code that computes the probability of obtaining such high measured values, assuming that the true level is exactly 80.

Sample values = new Sample();
values.Add(81, 84, 93);
TestResult result = values.StudentTTest(80);
return result.RightProbability;

What level of statistical confidence do you think a court should require in order to pronounce a defendant guilty?

Exceptions
InsufficientDataException: There are fewer than two data points in the sample.
See also
StudentDistribution

Referenced by Test.SampleTest.AnovaStudentAgreement(), and Test.SampleTest.TTestDistribution().

TestResult Meta.Numerics.Statistics.Sample.SignTest ( double  referenceMedian)
inline

Tests whether the sample median is compatible with the given reference value.

Parameters
referenceMedian: The reference median.
Returns
The result of the test.

The sign test is a non-parametric alternative to the Student t-test (StudentTTest(double)). It tests whether the sample is consistent with the given reference median.

The null hypothesis for the test is that the median of the underlying population from which the sample is drawn is the reference median. The test statistic is simply the number of sample values that lie above the median. Since each sample value is equally likely to be below or above the population median, each draw is an independent Bernoulli trial, and the total number of values above the population median is distributed according to a binomial distribution (BinomialDistribution).

The left probability of the test result is the chance of the sample median being so low, assuming the sample to have been drawn from a population with the reference median. The right probability of the test result is the chance of the sample median being so high, assuming the sample to have been drawn from a population with the reference median.
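A minimal usage sketch (invented data):

// hypothetical measurements
Sample response = new Sample(4.2, 5.1, 3.8, 6.0, 4.9, 5.5, 4.4);

// test compatibility with a population median of 5.0
TestResult sign = response.SignTest(5.0);

// one-sided tail probabilities, as described above
double pLow = sign.LeftProbability;
double pHigh = sign.RightProbability;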

See also
StudentTTest(double)

Referenced by Test.SampleTest.SignTestDistribution().

static TestResult Meta.Numerics.Statistics.Sample.StudentTTest ( Sample  a,
Sample  b 
)
inlinestatic

Tests whether one sample mean is compatible with another sample mean.

Parameters
a: The first sample, which must contain at least two entries.
b: The second sample, which must contain at least two entries.
Returns
The result of an (equal-variance) Student t-test.
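A usage sketch with invented data:

Sample control = new Sample(12.1, 11.8, 12.5, 12.0, 11.6);
Sample treated = new Sample(12.9, 13.2, 12.7, 13.5, 13.0);

// equal-variance two-sample t-test of the difference between the sample means
TestResult t = Sample.StudentTTest(control, treated);
double pLeft = t.LeftProbability;
double pRight = t.RightProbability;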

References Meta.Numerics.Statistics.Sample.Count, and Meta.Numerics.Statistics.Sample.Mean.

static TestResult Meta.Numerics.Statistics.Sample.MannWhitneyTest ( Sample  a,
Sample  b 
)
inlinestatic

Tests whether the sample median is compatible with the median of another sample.

Parameters
a: The first sample.
b: The second sample.
Returns
The result of the test.

The Mann-Whitney test is a non-parametric alternative to Student's t-test. Essentially, it supposes that the medians of the two samples are equal and tests the likelihood of this null hypothesis. Unlike the t-test, it does not assume that the sample distributions are normal.
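A usage sketch with invented data:

Sample a = new Sample(3.1, 2.7, 3.8, 3.3, 2.9, 3.5);
Sample b = new Sample(3.9, 4.4, 3.6, 4.1, 4.8, 4.2);

// non-parametric comparison of the two samples
TestResult u = Sample.MannWhitneyTest(a, b);
double pLeft = u.LeftProbability;
double pRight = u.RightProbability;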

References Meta.Numerics.Statistics.Sample.Count, Meta.Numerics.Statistics.Sample.data, and Meta.Numerics.Functions.AdvancedIntegerMath.LogFactorial().

Referenced by Test.SampleTest.SampleMannWhitneyTest().

static OneWayAnovaResult Meta.Numerics.Statistics.Sample.OneWayAnovaTest ( params Sample[]  samples)
inlinestatic

Performs a one-way ANOVA.

Parameters
samples: The samples to compare.
Returns
The result of an F-test comparing the between-group variance to the within-group variance.

The one-way ANOVA is an extension of the Student t-test (StudentTTest(Sample,Sample)) to more than two groups. The test's null hypothesis is that all the groups' data are drawn from the same distribution. If the null hypothesis is rejected, it indicates that at least one of the groups differs significantly from the others.

Given more than two groups, you should use an ANOVA to test for differences in the means of the groups rather than perform multiple t-tests. The reason is that each t-test incurs a small risk of a false positive, so multiple t-tests increase the total risk of a false positive. For example, given a 95% confidence requirement, there is only a 5% chance that a t-test will incorrectly diagnose a significant difference. But given 5 samples, there are 5 * 4 / 2 = 10 t-tests to be performed, giving about a 40% chance that at least one of them will incorrectly diagnose a significant difference! The ANOVA avoids the accumulation of risk by performing a single test at the required confidence level to test for any significant differences between the groups.

A one-way ANOVA performed on just two samples is equivalent to a t-test (Sample.StudentTTest(Sample,Sample)).

ANOVA is an acronym for "Analysis of Variance". Do not be confused by the name and by the use of a ratio-of-variances test statistic: an ANOVA is primarily (although not exclusively) sensitive to changes in the mean between samples. The variances being compared by the test are not the variances of the individual samples; instead the test is comparing the variance of all samples considered together as one single sample to the variances of the samples considered individually. If the means of some groups differ significantly, then the variance of the unified sample will be much larger than the variances of the individual samples, and the test will signal a significant difference. Thus the test uses variance as a tool to detect shifts in mean, not because it is interested in the individual sample variances per se.

ANOVA is most appropriate when the sample data are approximately normal and the samples are distinguished by a nominal variable. For example, given a random sampling of the ages of members of five different political parties, a one-way ANOVA would be an appropriate test of whether the different parties tend to attract different-aged memberships.

On the other hand, given data on the incomes and vacation lengths of a large number of people, dividing the people into five income quintiles and performing a one-way ANOVA to compare the vacation day distribution of each quintile would not be an appropriate way to test the hypothesis that richer people take longer vacations. Since income is a cardinal variable, it would be better in this case to put the data into a BivariateSample and perform a test of association, such as a BivariateSample.PearsonRTest, BivariateSample.SpearmanRhoTest, or BivariateSample.KendallTauTest between the two variables. If you have measurements of additional variables for each individual, a MultivariateSample.LinearRegression(int) analysis would allow you to adjust for confounding effects of the other variables.
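A usage sketch with three invented groups; note that the property used below to read the F-test outcome from the OneWayAnovaResult is an assumption and should be checked against the OneWayAnovaResult documentation:

Sample groupA = new Sample(5.2, 4.8, 5.5, 5.0, 5.1);
Sample groupB = new Sample(5.9, 6.1, 5.7, 6.3, 6.0);
Sample groupC = new Sample(5.1, 5.4, 4.9, 5.3, 5.2);

// single F-test for any significant difference among the group means
OneWayAnovaResult anova = Sample.OneWayAnovaTest(groupA, groupB, groupC);

// reading the F-test result; the Result property is assumed here, see OneWayAnovaResult
TestResult f = anova.Result;
double pValue = f.RightProbability;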

Referenced by Test.SampleTest.AnovaDistribution(), Test.SampleTest.AnovaStudentAgreement(), and Test.SampleTest.AnovaTest().

static OneWayAnovaResult Meta.Numerics.Statistics.Sample.OneWayAnovaTest ( IList< Sample samples)
inlinestatic

Performs a one-way ANOVA.

Parameters
samples: The samples to compare.
Returns
The result of the test.

For detailed information, see the variable argument overload.

References Meta.Numerics.MoreMath.Sqr().

static TestResult Meta.Numerics.Statistics.Sample.KruskalWallisTest ( IList< Sample samples)
inlinestatic

Performs a Kruskal-Wallis test on the given samples.

Parameters
samples: The set of samples to compare.
Returns
The result of the test.

Kruskal-Wallis tests for differences between the samples. It is a non-parametric alternative to the one-way ANOVA (OneWayAnovaTest(Sample[])).

The test is essentially a one-way ANOVA performed on the ranks of sample values instead of the sample values themselves.

A Kruskal-Wallis test on two samples is equivalent to a Mann-Whitney test (see MannWhitneyTest).
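A corresponding non-parametric sketch on the same pattern as the ANOVA example above (invented data):

Sample x = new Sample(5.2, 4.8, 5.5, 5.0);
Sample y = new Sample(5.9, 6.1, 5.7, 6.3);
Sample z = new Sample(5.1, 5.4, 4.9, 5.3);

// rank-based test for any difference among the groups
TestResult kw = Sample.KruskalWallisTest(x, y, z);
double pRight = kw.RightProbability;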

Referenced by Test.SampleTest.KruskalWallis().

static TestResult Meta.Numerics.Statistics.Sample.KruskalWallisTest ( params Sample[]  samples)
inlinestatic

Performs a Kruskal-Wallis test on the given samples.

Parameters
samples: The set of samples to compare.
Returns
The result of the test.
TestResult Meta.Numerics.Statistics.Sample.KolmogorovSmirnovTest ( Distribution  distribution)
inline

Tests whether the sample is compatible with the given distribution.

Parameters
distribution: The test distribution.
Returns
The test result. The test statistic is the D statistic and the likelihood is the right probability to obtain a value of D as large or larger than the one obtained.

The null hypothesis of the KS test is that the sample is drawn from the given continuous distribution. The test statistic D is the maximum deviation of the sample's empirical distribution function (EDF) from the distribution's cumulative distribution function (CDF). A high value of the test statistic, corresponding to a low right tail probability, indicates that the sample distribution disagrees with the given distribution to a degree unlikely to arise from statistical fluctuations.

For small sample sizes, we compute the null distribution of D exactly. For large sample sizes, we use an accurate asymptotic approximation. Therefore it is safe to use this method for all sample sizes.
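A usage sketch with invented data, testing compatibility with a normal model (this assumes the additional directive using Meta.Numerics.Statistics.Distributions):

// hypothetical measurements
Sample measurements = new Sample(0.83, 1.21, 0.98, 0.91, 1.34, 1.10, 1.05);

// candidate model: normal with mean 1.0 and standard deviation 0.2
Distribution model = new NormalDistribution(1.0, 0.2);

TestResult ks = measurements.KolmogorovSmirnovTest(model);
double d = ks.Statistic;         // the D statistic described above
double p = ks.RightProbability;  // chance of a D at least this large under the null hypothesis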

Exceptions
ArgumentNullException: distribution is null.
See also
KolmogorovDistribution

Referenced by Test.SampleTest.AnovaDistribution(), Test.BivariateSampleTest.BivariateLinearRegression(), Test.BivariateSampleTest.BivariateLinearRegressionGoodnessOfFitDistribution(), Test.MultivariateSampleTest.BivariateNullAssociation(), Test.DistributionTest.DistributionRandomDeviates(), Test.DataSetTest.FitDataToLineChiSquaredTest(), Test.DataSetTest.FitDataToPolynomialChiSquaredTest(), Meta.Numerics.Statistics.Distributions.NormalDistribution.FitToSample(), Test.DistributionTest.GammaFromExponential(), Test.DistributionTest.InverseGaussianSummation(), Test.NullDistributionTests.KolmogorovNullDistributionTest(), Test.NullDistributionTests.KuiperNullDistributionTest(), Test.MultivariateSampleTest.MultivariateLinearRegressionNullDistribution(), Test.BivariateSampleTest.PearsonRDistribution(), Test.SampleTest.SampleComparisonTest(), Test.SampleTest.SampleKolmogorovSmirnovTest(), Test.SampleTest.SampleKuiperTest(), Test.DistributionTest.StudentTest(), Test.DistributionTest.StudentTest2(), Test.RandomTest.TimeGammaGenerators(), Test.SampleTest.TTestDistribution(), Test.NullDistributionTests.TwoSampleKolmogorovNullDistributionTest(), FutureTest.FutureTest.TwoSampleKS2(), Test.DistributionTest.UniformOrderStatistics(), and Test.SampleTest.ZTestDistribution().

TestResult Meta.Numerics.Statistics.Sample.KolmogorovSmirnovTest ( Distribution  distribution,
int  count 
)
inlineprivate
TestResult Meta.Numerics.Statistics.Sample.KuiperTest ( Distribution  distribution)
inline

Tests whether the sample is compatible with the given distribution.

Parameters
distribution: The test distribution.
Returns
The test result. The test statistic is the V statistic and the likelihood is the right probability to obtain a value of V as large or larger than the one obtained.

Referenced by FutureTest.FutureTest.ChiSquareDistribution(), Test.DistributionTest.FisherTest(), Test.NullDistributionTests.KolmogorovNullDistributionTest(), Test.NullDistributionTests.KuiperNullDistributionTest(), Test.SampleTest.SampleKuiperTest(), and Test.NullDistributionTests.SpearmanNullDistributionTest().

void Meta.Numerics.Statistics.Sample.ComputeDStatistics ( Distribution  distribution,
out double  D1,
out double  D2 
)
inlineprivate
static TestResult Meta.Numerics.Statistics.Sample.KolmogorovSmirnovTest ( Sample  a,
Sample  b 
)
inlinestatic

Tests whether the sample is compatible with another sample.

Parameters
b: The other sample.
Returns
The test result. The test statistic is the D statistic and the likelihood is the right probability to obtain a value of D as large or larger than the one obtained.
Exceptions
ArgumentNullException: b is null.

References Meta.Numerics.Functions.AdvancedIntegerMath.BinomialCoefficient(), Meta.Numerics.Statistics.Sample.Count, Meta.Numerics.Statistics.Sample.data, and Meta.Numerics.Interval.FromEndpoints().

static TestResult Meta.Numerics.Statistics.Sample.FisherFTest ( Sample  a,
Sample  b 
)
inlinestatic

Tests whether the variances of two samples are compatible.

Parameters
a: The first sample.
b: The second sample.
Returns
The result of the test.

References Meta.Numerics.Statistics.Sample.Count, and Meta.Numerics.Statistics.Sample.Variance.

Referenced by Test.SampleTest.SampleFisherFTest().

IEnumerator<double> Meta.Numerics.Statistics.Sample.GetEnumerator ( )
inline

Gets an enumerator of sample values.

Returns
An enumerator of sample values.
IEnumerator IEnumerable. Meta.Numerics.Statistics.Sample.GetEnumerator ( )
inlineprivate
void ICollection<double>. Meta.Numerics.Statistics.Sample.CopyTo ( double[]  array,
int  start 
)
inlineprivate
void Meta.Numerics.Statistics.Sample.Load ( IDataReader  reader,
int  dbIndex 
)
inline

Loads values from a data reader.

Parameters
reader: The data reader.
dbIndex: The column number.

Member Data Documentation

SampleStorage Meta.Numerics.Statistics.Sample.data
private
bool Meta.Numerics.Statistics.Sample.isReadOnly
private

Property Documentation

string Meta.Numerics.Statistics.Sample.Name
getset

Gets or sets the name of the sample.

Referenced by Test.SampleTest.SamplePopulationMomentEstimateVariances().

bool Meta.Numerics.Statistics.Sample.IsReadOnly
get

Gets a value indicating whether the sample is read-only.

double Meta.Numerics.Statistics.Sample.Variance
get
double Meta.Numerics.Statistics.Sample.StandardDeviation
get

Gets the sample standard deviation.

Note this is the actual standard deviation of the sample values, not the inferred standard deviation of the underlying population; to obtain the latter use PopulationStandardDeviation.

Referenced by Test.DataSetTest.FitDataToPolynomialUncertaintiesTest().

double Meta.Numerics.Statistics.Sample.Skewness
get

Gets the sample skewness.

Skewness is the third central moment, measured in units of the appropriate power of the standard deviation.

double Meta.Numerics.Statistics.Sample.Median
get

Gets the sample median.

Referenced by Test.SampleTest.SampleMedian().

Interval Meta.Numerics.Statistics.Sample.InterquartileRange
get

Gets the interquartile range of sample measurements.

The interquartile range is the interval between the 25th and the 75th percentile.
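A usage sketch (invented data); the same quartile values can equivalently be obtained from InverseLeftProbability:

Sample data = new Sample(2.0, 3.5, 1.0, 4.2, 2.8, 3.1, 5.0, 3.9);

Interval iqr = data.InterquartileRange;

// equivalently, in terms of percentile values
double q1 = data.InverseLeftProbability(0.25);
double q3 = data.InverseLeftProbability(0.75);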

See also
InverseLeftProbability

Referenced by Test.SampleTest.SampleInterquartileRange().

double Meta.Numerics.Statistics.Sample.Minimum
get
double Meta.Numerics.Statistics.Sample.Maximum
get

Gets the largest value in the sample.

Referenced by Test.SampleTest.SampleMedian(), and Test.SampleTest.SampleTransform().


The documentation for this class was generated from the following file: