List of usage examples for java.lang.Double.NaN

Double.NaN is a constant of type double that holds a Not-a-Number (NaN) value. The examples below show common usage patterns for Double.NaN collected from open-source projects, with the source file named before each snippet.
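As background for the examples below: NaN is the only double value that is not equal to itself, so ordinary equality checks against it always fail; the reliable test is Double.isNaN. A minimal sketch (the class name NanBasics is ours, not from any project listed here):

```java
public class NanBasics {
    // Returns true only when v is NaN; the v != v trick works because
    // NaN is the only double value that is not equal to itself.
    public static boolean isNaN(double v) {
        return v != v; // equivalent to Double.isNaN(v)
    }

    public static void main(String[] args) {
        double nan = Double.NaN;
        System.out.println(nan == nan);          // false: NaN never equals anything
        System.out.println(Double.isNaN(nan));   // true: the correct check
        System.out.println(isNaN(0.0 / 0.0));    // true: 0.0 / 0.0 produces NaN
    }
}
```

This is why the snippets below return Double.NaN as a "missing or invalid" sentinel: the caller can always distinguish it from a real result with Double.isNaN.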
From source file:org.jfree.data.xy.DefaultOHLCDataset.java
/**
 * Returns the high-value (as a double primitive) for an item within a series.
 *
 * @param series  the series (zero-based index).
 * @param item  the item (zero-based index).
 *
 * @return The high-value.
 */
@Override
public double getHighValue(int series, int item) {
    double result = Double.NaN;
    Number high = getHigh(series, item);
    if (high != null) {
        result = high.doubleValue();
    }
    return result;
}
From source file:edu.cornell.med.icb.learning.weka.WekaClassifier.java
public double predict(final ClassificationModel trainingModel, final ClassificationProblem problem,
        final int instanceIndex) {
    assert trainingModel instanceof WekaModel : "Model must be a weka model.";
    try {
        return labelIndex2LabelValue[(int) getWekaClassifier(this)
                .classifyInstance(getWekaProblem(problem).instance(instanceIndex))];
    } catch (Exception e) {
        LOG.error("Weka classifier has thrown exception.", e);
        return Double.NaN;
    }
}
From source file:edu.cudenver.bios.power.glmm.GLMMTestWilksLambda.java
/**
 * Calculate the non-centrality parameter for the WL, based on
 * whether the null or alternative hypothesis is assumed true.
 *
 * @param type distribution type
 * @return non-centrality parameter
 * @throws IllegalArgumentException
 */
@Override
public double getNonCentrality(DistributionType type) {
    // calculate the hypothesis and error sum of squares matrices
    RealMatrix hypothesisSumOfSquares = getHypothesisSumOfSquares();
    RealMatrix errorSumOfSquares = getErrorSumOfSquares();

    // a = #rows in between subject contrast matrix, C
    double a = C.getRowDimension();
    // b = #columns in within subject contrast matrix, U
    double b = U.getColumnDimension();
    double s = (a < b) ? a : b;
    double p = beta.getColumnDimension();

    double adjustedW = Double.NaN;
    double g = Double.NaN;
    double W = getWilksLambda(hypothesisSumOfSquares, errorSumOfSquares, type);
    if (a * a * b * b <= 4) {
        g = 1;
        adjustedW = W;
    } else {
        g = Math.sqrt((a * a * b * b - 4) / (a * a + b * b - 5));
        adjustedW = Math.pow(W, 1 / g);
    }

    double omega;
    if ((s == 1 && p > 1) || fMethod == FApproximation.RAO_TWO_MOMENT_OMEGA_MULT) {
        omega = totalN * g * (1 - adjustedW) / adjustedW;
    } else {
        omega = getDenominatorDF(type) * (1 - adjustedW) / adjustedW;
    }

    if (Math.abs(omega) < TOLERANCE)
        omega = 0;
    return Math.abs(omega);
}
From source file:Main.java
/**
 * Return <code>d</code> &times; 2<sup><code>scale_factor</code></sup> rounded as if performed
 * by a single correctly rounded floating-point multiply to a
 * member of the double value set. See <a
 * href="http://java.sun.com/docs/books/jls/second_edition/html/typesValues.doc.html#9208">&sect;4.2.3</a>
 * of the <a href="http://java.sun.com/docs/books/jls/html/">Java
 * Language Specification</a> for a discussion of floating-point
 * value sets. If the exponent of the result is between the
 * <code>double</code>'s minimum exponent and maximum exponent,
 * the answer is calculated exactly. If the exponent of the
 * result would be larger than <code>double</code>'s maximum
 * exponent, an infinity is returned. Note that if the result is
 * subnormal, precision may be lost; that is, when <code>scalb(x,
 * n)</code> is subnormal, <code>scalb(scalb(x, n), -n)</code> may
 * not equal <i>x</i>. When the result is non-NaN, the result has
 * the same sign as <code>d</code>.
 *
 * <p>Special cases:
 * <ul>
 * <li> If the first argument is NaN, NaN is returned.
 * <li> If the first argument is infinite, then an infinity of the
 * same sign is returned.
 * <li> If the first argument is zero, then a zero of the same
 * sign is returned.
 * </ul>
 *
 * @param d number to be scaled by a power of two.
 * @param scale_factor power of 2 used to scale <code>d</code>
 * @return <code>d * </code>2<sup><code>scale_factor</code></sup>
 * @author Joseph D. Darcy
 */
public static double scalb(double d, int scale_factor) {
    /*
     * This method does not need to be declared strictfp to
     * compute the same correct result on all platforms. When
     * scaling up, it does not matter what order the
     * multiply-store operations are done; the result will be
     * finite or overflow regardless of the operation ordering.
     * However, to get the correct result when scaling down, a
     * particular ordering must be used.
     *
     * When scaling down, the multiply-store operations are
     * sequenced so that it is not possible for two consecutive
     * multiply-stores to return subnormal results. If one
     * multiply-store result is subnormal, the next multiply will
     * round it away to zero. This is done by first multiplying
     * by 2 ^ (scale_factor % n) and then multiplying several
     * times by 2^n as needed, where n is the exponent of a number
     * that is a convenient power of two. In this way, at most one
     * real rounding error occurs. If the double value set is
     * being used exclusively, the rounding will occur on a
     * multiply. If the double-extended-exponent value set is
     * being used, the products will (perhaps) be exact but the
     * stores to d are guaranteed to round to the double value
     * set.
     *
     * It is _not_ a valid implementation to first multiply d by
     * 2^MIN_EXPONENT and then by 2 ^ (scale_factor %
     * MIN_EXPONENT), since even in a strictfp program double
     * rounding on underflow could occur; e.g. if the scale_factor
     * argument was (MIN_EXPONENT - n) and the exponent of d was a
     * little less than -(MIN_EXPONENT - n), meaning the final
     * result would be subnormal.
     *
     * Since exact reproducibility of this method can be achieved
     * without any undue performance burden, there is no
     * compelling reason to allow double rounding on underflow in
     * scalb.
     */

    // magnitude of a power of two so large that scaling a finite
    // nonzero value by it would be guaranteed to over- or
    // underflow; due to rounding, scaling down takes an
    // additional power of two, which is reflected here
    final int MAX_SCALE = DoubleConsts.MAX_EXPONENT + -DoubleConsts.MIN_EXPONENT
            + DoubleConsts.SIGNIFICAND_WIDTH + 1;
    int exp_adjust = 0;
    int scale_increment = 0;
    double exp_delta = Double.NaN;

    // Make sure scaling factor is in a reasonable range
    if (scale_factor < 0) {
        scale_factor = Math.max(scale_factor, -MAX_SCALE);
        scale_increment = -512;
        exp_delta = twoToTheDoubleScaleDown;
    } else {
        scale_factor = Math.min(scale_factor, MAX_SCALE);
        scale_increment = 512;
        exp_delta = twoToTheDoubleScaleUp;
    }

    // Calculate (scale_factor % +/-512), 512 = 2^9, using
    // technique from "Hacker's Delight" section 10-2.
    int t = (scale_factor >> 9 - 1) >>> 32 - 9;
    exp_adjust = ((scale_factor + t) & (512 - 1)) - t;

    d *= powerOfTwoD(exp_adjust);
    scale_factor -= exp_adjust;

    while (scale_factor != 0) {
        d *= exp_delta;
        scale_factor -= scale_increment;
    }
    return d;
}
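Since Java 6, equivalent functionality has been available in the standard library as java.lang.Math.scalb, which implements the same special cases, including NaN in, NaN out:

```java
public class ScalbDemo {
    public static void main(String[] args) {
        // Math.scalb(d, n) computes d * 2^n with a single rounding step.
        System.out.println(Math.scalb(3.0, 4));        // 48.0 (3 * 2^4)
        System.out.println(Math.scalb(1.0, -2));       // 0.25 (1 * 2^-2)
        // NaN input yields NaN output, matching the special case above.
        System.out.println(Double.isNaN(Math.scalb(Double.NaN, 5))); // true
    }
}
```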
From source file:edu.cmu.tetrad.regression.LogisticRegression2.java
public void regress(int[] target, int numValues, double[][] regressors) {
    try {
        int numParams = regressors.length + 1;
        double[] coefficients = new double[(numValues - 1) * numParams];

        // Apparently this needs to be fairly loose.
        int tolerance = 250;
        MultivariateOptimizer search = new PowellOptimizer(tolerance, tolerance);
        PointValuePair pair = search.optimize(new InitialGuess(coefficients),
                new ObjectiveFunction(new FittingFunction(target, regressors)),
                GoalType.MAXIMIZE, new MaxEval(1000000));
        this.likelihood = pair.getValue();
    } catch (TooManyEvaluationsException e) {
        e.printStackTrace();
        this.likelihood = Double.NaN;
    }
}
From source file:edu.toronto.cs.phenotips.measurements.internal.AbstractMeasurementHandler.java
@Override
public double percentileToValue(boolean male, int ageInMonths, int targetPercentile) {
    LMS lms = getLMSForAge(getLMSList(male), ageInMonths);
    if (lms == null) {
        return Double.NaN;
    }
    return percentileToValue(targetPercentile, lms.m, lms.l, lms.s);
}
From source file:de.taimos.gpsd4java.backend.LegacyResultParser.java
private IGPSObject parseIONO(final JSONObject json) {
    IGPSObject gps;
    final IONOObject iono = new IONOObject();
    iono.setAlpha0(json.optDouble("a0", Double.NaN));
    iono.setAlpha1(json.optDouble("a1", Double.NaN));
    iono.setAlpha2(json.optDouble("a2", Double.NaN));
    iono.setAlpha3(json.optDouble("a3", Double.NaN));
    iono.setBeta0(json.optDouble("b0", Double.NaN));
    iono.setBeta1(json.optDouble("b1", Double.NaN));
    iono.setBeta2(json.optDouble("b2", Double.NaN));
    iono.setBeta3(json.optDouble("b3", Double.NaN));
    iono.setA0(json.optDouble("A0", Double.NaN));
    iono.setA1(json.optDouble("A1", Double.NaN));
    iono.setTot(json.optDouble("tot", Double.NaN));
    iono.setWNt(json.optInt("WNt"));
    iono.setLeap(json.optInt("ls"));
    iono.setWNlsf(json.optInt("WNlsf"));
    iono.setDN(json.optInt("DN"));
    iono.setLsf(json.optInt("lsf"));
    gps = iono;
    return gps;
}
From source file:com.cloudera.oryx.kmeans.computation.local.Standarize.java
private static double asDouble(String token) {
    try {
        return Double.valueOf(token);
    } catch (NumberFormatException e) {
        log.warn("Invalid numeric token: {}", token);
        return Double.NaN;
    }
}
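The parse-or-NaN pattern above is easy to reproduce without any framework dependencies. A self-contained variant (the class name SafeParse is ours; logging is replaced by a comment), showing how a caller filters out bad rows:

```java
public class SafeParse {
    // Parse a token, returning NaN for malformed input so the caller
    // can filter bad values with Double.isNaN instead of catching
    // NumberFormatException at every call site.
    public static double asDouble(String token) {
        try {
            return Double.parseDouble(token);
        } catch (NumberFormatException e) {
            // a real implementation would log the bad token here
            return Double.NaN;
        }
    }

    public static void main(String[] args) {
        System.out.println(asDouble("3.14")); // 3.14
        System.out.println(Double.isNaN(asDouble("abc"))); // true
    }
}
```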
From source file:com.ipeirotis.gal.core.CategoryPair.java
/**
 * Makes the matrix row-stochastic: in other words, for a given "from"
 * category, if we sum the errors across all the "to" categories, we get 1.0.
 */
public void normalize() {
    for (String from : this.categories) {
        double from_marginal = 0.0;
        for (String to : this.categories) {
            from_marginal += getErrorRate(from, to);
        }
        for (String to : this.categories) {
            double error = getErrorRate(from, to);
            double error_rate;
            // If the marginal across the "from" category is 0,
            // the worker has not even seen an object of the "from"
            // category. In this case, we set the value to NaN.
            if (from_marginal == 0.0) {
                error_rate = Double.NaN;
            } else {
                error_rate = error / from_marginal;
            }
            setErrorRate(from, to, error_rate);
        }
    }
}
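One consequence of storing NaN in the matrix, worth keeping in mind: NaN propagates through arithmetic, so any later sum, product, or average over a row containing a NaN error rate will itself be NaN. A small illustration (NanPropagation is a hypothetical class, not part of the project above):

```java
public class NanPropagation {
    // Summing values that include NaN yields NaN: every arithmetic
    // operation with a NaN operand produces NaN, so the sentinel
    // propagates to the final result.
    public static double sum(double[] values) {
        double total = 0.0;
        for (double v : values) {
            total += v;
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(sum(new double[]{0.5, 0.5})); // 1.0
        System.out.println(Double.isNaN(sum(new double[]{0.5, Double.NaN, 0.5}))); // true
    }
}
```

Consumers of such a matrix therefore need to screen rows with Double.isNaN before aggregating them.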
From source file:edu.harvard.iq.dataverse.util.SumStatCalculator.java
private static double calculateMedian(double[] values) {
    double[] sorted = new double[values.length];
    System.arraycopy(values, 0, sorted, 0, values.length);
    logger.fine("made an extra copy of the vector;");
    Arrays.sort(sorted);
    logger.fine("sorted double vector for median calculations;");

    if (sorted.length == 0) {
        return Double.NaN;
    }
    if (sorted.length == 1) {
        return sorted[0]; // always return single value for n = 1
    }
    double n = sorted.length;
    double pos = (n + 1) / 2;
    double fpos = Math.floor(pos);
    int intPos = (int) fpos;
    double dif = pos - fpos;
    double lower = sorted[intPos - 1];
    double upper = sorted[intPos];
    return lower + dif * (upper - lower);
}
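The logic above can be exercised as a standalone method. This sketch (MedianDemo is our name, with the logging stripped) keeps the same interpolated-median formula and the same NaN-for-empty-input convention:

```java
import java.util.Arrays;

public class MedianDemo {
    // Standalone version of the interpolated-median logic above;
    // NaN signals an empty input, mirroring calculateMedian.
    public static double median(double[] values) {
        if (values.length == 0) {
            return Double.NaN;
        }
        double[] sorted = values.clone();
        Arrays.sort(sorted);
        if (sorted.length == 1) {
            return sorted[0];
        }
        double pos = (sorted.length + 1) / 2.0; // 1-based position of the median
        int intPos = (int) Math.floor(pos);
        double dif = pos - intPos;              // 0.0 for odd n, 0.5 for even n
        // interpolate between the two central order statistics
        return sorted[intPos - 1] + dif * (sorted[intPos] - sorted[intPos - 1]);
    }

    public static void main(String[] args) {
        System.out.println(median(new double[]{3, 1, 2}));    // 2.0 (odd n)
        System.out.println(median(new double[]{1, 2, 3, 4})); // 2.5 (even n)
        System.out.println(Double.isNaN(median(new double[0]))); // true
    }
}
```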