Example usage for java.util LinkedList peekFirst

List of usage examples for java.util LinkedList peekFirst

Introduction

On this page you can find example usage for java.util LinkedList peekFirst.

Prototype

public E peekFirst() 

Document

Retrieves, but does not remove, the first element of this list, or returns null if this list is empty.
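
Before the full examples below, here is a minimal, self-contained sketch (class and variable names are illustrative) demonstrating the two behaviors the documentation describes: peekFirst() leaves the head element in place, and it returns null instead of throwing on an empty list.

import java.util.LinkedList;

public class PeekFirstDemo {
    public static void main(String[] args) {
        LinkedList<String> list = new LinkedList<>();

        // On an empty list, peekFirst returns null instead of throwing.
        System.out.println(list.peekFirst()); // null

        list.add("a");
        list.add("b");

        // peekFirst retrieves the head without removing it.
        System.out.println(list.peekFirst()); // a
        System.out.println(list.size());      // still 2

        // Contrast with pollFirst, which retrieves AND removes the head.
        System.out.println(list.pollFirst()); // a
        System.out.println(list.size());      // 1
    }
}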

Usage

From source file:org.openconcerto.sql.model.SQLTable.java

static private void fireTableModified(DispatchingState newTuple) {
    final LinkedList<DispatchingState> linkedList = events.get();
    // add new event
    linkedList.addLast(newTuple);
    // process all pending events
    DispatchingState currentTuple;
    while ((currentTuple = linkedList.peekFirst()) != null) {
        final Iterator<SQLTableModifiedListener> iter = currentTuple.get0();
        final SQLTableEvent currentEvt = currentTuple.get1();
        while (iter.hasNext()) {
            final SQLTableModifiedListener l = iter.next();
            l.tableModified(currentEvt);
        }
        // not removeFirst() since the item might have been already removed
        linkedList.pollFirst();
    }
}
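
The peek-then-poll loop above is a defensive drain pattern: peekFirst() returns null to terminate the loop once the list is empty, and pollFirst() (unlike removeFirst()) will not throw if the head was already removed elsewhere. A minimal standalone sketch of the same pattern, with made-up event types:

import java.util.LinkedList;

public class DrainQueueDemo {
    public static void main(String[] args) {
        LinkedList<Runnable> pending = new LinkedList<>();
        pending.add(() -> System.out.println("event 1"));
        pending.add(() -> System.out.println("event 2"));

        Runnable current;
        // peekFirst() returns null when the list is empty, ending the loop.
        while ((current = pending.peekFirst()) != null) {
            current.run();
            // pollFirst() rather than removeFirst(): it returns null instead of
            // throwing NoSuchElementException if the list was already drained.
            pending.pollFirst();
        }
    }
}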

From source file:com.moorestudio.seniorimageprocessing.SeniorSorter.java

public void sortImages() {
    LinkedList<Map.Entry<String, Long>> timestampList = new LinkedList<>(timestampData.entrySet());
    sort(timestampList, (x, y) -> x.getValue() > y.getValue() ? -1 : x.getValue().equals(y.getValue()) ? 0 : 1);
    // Sort in reverse so that the most recent timestamps are first.

    LinkedList<Map.Entry<File, Long>> imageDataList = new LinkedList<>(imageData.entrySet());
    sort(imageDataList, (x, y) -> x.getValue() > y.getValue() ? -1 : x.getValue().equals(y.getValue()) ? 0 : 1); // Sort in reverse so that the most recent timestamps are first.

    // For the gui update
    int idCount = imageDataList.size();

    //Take the first image and the first timestamp scan taken, which are last in the list,
    //and sync the camera time to the timestamp time. Both are throwaways.
    if (!timestampList.isEmpty() && !imageDataList.isEmpty() && parent.syncTime) {
        Map.Entry<File, Long> iData = imageDataList.pollLast();
        Map.Entry<String, Long> tsData = timestampList.pollLast();

        //Make the offset
        cameraTimeOffset = tsData.getValue() - iData.getValue();
    }

    //Assign images to the head timestamp's student while their offset-adjusted time is still newer than that timestamp
    while (!timestampList.isEmpty() && !imageDataList.isEmpty()) {
        Map.Entry<File, Long> iData = imageDataList.peekFirst();
        Map.Entry<String, Long> tsData = timestampList.pollFirst();
        ArrayList<File> studentImages = new ArrayList<>();
        while (!imageDataList.isEmpty() && iData.getValue() + cameraTimeOffset > tsData.getValue()) {
            iData = imageDataList.pollFirst();
            studentImages.add(iData.getKey());
            iData = imageDataList.peekFirst();
            //update the GUI
            parent.addProgress((.125 / parent.numThreads) / idCount);
        }
        if (!studentImages.isEmpty()) {
            parent.addImagesToStudent(tsData.getKey(), studentImages);
        }
    }

    //add the unsorted images to the parent's unsorted queue
    for (Map.Entry<File, Long> entry : imageDataList) {
        parent.unsortedFiles.add(entry.getKey());
        //update the GUI
        parent.addProgress((.125 / parent.numThreads) / idCount);
    }
}
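
The inner loop above uses peekFirst() as a one-element lookahead: the head is inspected to decide whether it belongs to the current student, and only pollFirst() actually consumes it. A stripped-down sketch of that lookahead-and-consume grouping, with invented values and cutoff:

import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

public class LookaheadDemo {
    public static void main(String[] args) {
        LinkedList<Long> timestamps = new LinkedList<>(List.of(10L, 20L, 35L, 50L));
        long cutoff = 30L;

        List<Long> group = new ArrayList<>();
        // Peek to test the head without consuming it...
        while (!timestamps.isEmpty() && timestamps.peekFirst() < cutoff) {
            group.add(timestamps.pollFirst()); // ...and poll only once it qualifies.
        }
        System.out.println(group);      // [10, 20]
        System.out.println(timestamps); // [35, 50]
    }
}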

From source file:com.act.lcms.v2.fullindex.Builder.java

protected void extractTriples(Iterator<LCMSSpectrum> iter, List<MZWindow> windows)
        throws RocksDBException, IOException {
    /* Warning: this method makes heavy use of ByteBuffers to perform memory efficient collection of values and
     * conversion of those values into byte arrays that RocksDB can consume.  If you haven't already, go read this
     * tutorial on ByteBuffers: http://mindprod.com/jgloss/bytebuffer.html
     *
     * ByteBuffers are quite low-level structures, and they use some terms you need to watch out for:
     *   capacity: The total number of bytes in the array backing the buffer.  Don't write more than this.
     *   position: The next index in the buffer to read or write a byte.  Moves with each read or write op.
     *   limit:    A mark of where the final byte in the buffer was written.  Don't read past this.
     *             The remaining() call is affected by the limit.
     *   mark:     Ignore this for now, we don't use it.  (We'll always, always read buffers from 0.)
     *
     * And here are some methods that we'll use often:
     *   clear:     Set position = 0, limit = capacity.  Pretend the buffer is empty, and is ready for more writes.
     *   flip:      Set limit = position, then position = 0.  This remembers how many bytes were written to the buffer
     *              (as the current position), and then puts the position at the beginning.
     *              Always call this after the write before a read.
     *   rewind:    Set position = 0.  Buffer is ready for reading, but unless the limit was set we might not know how
     *              many bytes there are to read.  Always call flip() before rewind().  Can rewind many times to re-read
     *              the buffer repeatedly.
     *   remaining: How many bytes do we have left to read?  Requires an accurate limit value to avoid garbage bytes.
     *   reset:     Don't use this.  It uses the mark, which we don't need currently.
     *
     * Write/read patterns look like:
     *   buffer.clear(); // Clear out anything already in the buffer.
     *   buffer.put(thing1).put(thing2)... // write a bunch of stuff
     *   buffer.flip(); // Prep for reading.  Call *once*!
     *
     *   while (buffer.hasRemaining()) { buffer.get(); } // Read a bunch of stuff.
     *   buffer.rewind(); // Ready for reading again!
     *   while (buffer.hasRemaining()) { buffer.get(); } // Etc.
     *   buffer.clear(); // Forget what was written previously, buffer is ready for reuse.
     *
     * We use byte buffers because they're fast, efficient, and offer incredibly convenient means of serializing a
     * stream of primitive types to their minimal binary representations.  The same operations on objects + object
     * streams require significantly more CPU cycles, consume more memory, and tend to be brittle (i.e. if a class
     * definition changes slightly, serialization may break).  Since the data we're dealing with is pretty simple, we
     * opt for the low-level approach.
     */

    /* Because we'll eventually use the window indices to map a mz range to a list of triples that fall within that
     * range, verify that all of the indices are unique.  If they're not, we'll end up overwriting the data in and
     * corrupting the structure of the index. */
    ensureUniqueMZWindowIndices(windows);

    // For every mz window, allocate a buffer to hold the indices of the triples that fall in that window.
    ByteBuffer[] mzWindowTripleBuffers = new ByteBuffer[windows.size()];
    for (int i = 0; i < mzWindowTripleBuffers.length; i++) {
        /* Note: the mapping between these buffers and their respective mzWindows is purely positional.  Specifically,
         * mzWindows.get(i).getIndex() != i, but mzWindowTripleBuffers[i] belongs to mzWindows.get(i).  We'll map windows
         * indices to the contents of mzWindowTripleBuffers at the very end of this function. */
        mzWindowTripleBuffers[i] = ByteBuffer.allocate(Long.BYTES * 4096); // Start with 4096 longs = 8 pages per window.
    }

    // Every TMzI gets an index which we'll use later when we're querying by m/z and time.
    long counter = -1; // We increment at the top of the loop.
    // Note: we could also write to an mmapped file and just track pointers, but then we might lose out on compression.

    // We allocate all the buffers strictly here, as we know how many bytes a long and a triple will take.  Then reuse!
    ByteBuffer counterBuffer = ByteBuffer.allocate(Long.BYTES);
    ByteBuffer valBuffer = ByteBuffer.allocate(TMzI.BYTES);
    List<Float> timepoints = new ArrayList<>(2000); // We can be sloppy here, as the count is small.

    /* We use a sweep-line approach to scanning through the m/z windows so that we can aggregate all intensities in
     * one pass over the current LCMSSpectrum (this saves us one inner loop in our extraction process).  The m/z
     * values in the LCMSSpectrum become our "critical" or "interesting points" over which we sweep our m/z ranges.
     * The next window in m/z order is guaranteed to be the next one we want to consider since we address the points
     * in m/z order as well.  As soon as we've passed out of the range of one of our windows, we discard it.  It is
     * valid for a window to be added to and discarded from the working queue in one application of the work loop. */
    LinkedList<MZWindow> tbdQueueTemplate = new LinkedList<>(windows); // We can reuse this template to init the sweep.

    int spectrumCounter = 0;
    while (iter.hasNext()) {
        LCMSSpectrum spectrum = iter.next();
        float time = spectrum.getTimeVal().floatValue();

        // This will record all the m/z + intensity readings that correspond to this timepoint.  Exactly sized too!
        ByteBuffer triplesForThisTime = ByteBuffer.allocate(Long.BYTES * spectrum.getIntensities().size());

        // Batch up all the triple writes to reduce the number of times we hit the disk in this loop.
        // Note: huge success!
        RocksDBAndHandles.RocksDBWriteBatch<ColumnFamilies> writeBatch = dbAndHandles.makeWriteBatch();

        // Initialize the sweep line lists.  Windows flow: tbd -> working -> done (nowhere).
        LinkedList<MZWindow> workingQueue = new LinkedList<>();
        LinkedList<MZWindow> tbdQueue = (LinkedList<MZWindow>) tbdQueueTemplate.clone(); // clone is in the docs, so okay!
        for (Pair<Double, Double> mzIntensity : spectrum.getIntensities()) {
            // Very important: increment the counter for every triple.  Otherwise we'll overwrite triples = Very Bad (tm).
            counter++;

            // Brevity = soul of wit!
            Double mz = mzIntensity.getLeft();
            Double intensity = mzIntensity.getRight();

            // Reset the buffers so we end up re-using the few bytes we've allocated.
            counterBuffer.clear(); // Empty (virtually).
            counterBuffer.putLong(counter);
            counterBuffer.flip(); // Prep for reading.

            valBuffer.clear(); // Empty (virtually).
            TMzI.writeToByteBuffer(valBuffer, time, mz, intensity.floatValue());
            valBuffer.flip(); // Prep for reading.

            // First, shift any applicable ranges onto the working queue based on their minimum mz.
            while (!tbdQueue.isEmpty() && tbdQueue.peekFirst().getMin() <= mz) {
                workingQueue.add(tbdQueue.pop());
            }

            // Next, remove any ranges we've passed.
            while (!workingQueue.isEmpty() && workingQueue.peekFirst().getMax() < mz) {
                workingQueue.pop(); // TODO: add() this to a recovery queue which can then become the tbdQueue.  Edge cases!
            }
            /* In the old indexed trace extractor world, we could bail here if there were no target m/z's in our window set
             * that matched with the m/z of our current mzIntensity.  However, since we're now also recording the links
             * between timepoints and their (t, m/z, i) triples, we need to keep on keepin' on regardless of whether we have
             * any m/z windows in the working set right now. */

            // The working queue should now hold only ranges that include this m/z value.  Sweep line swept!

            /* Now add this intensity to the buffers of all the windows in the working queue.  Note that since we're only
             * storing the *index* of the triple, these buffers are going to consume less space than they would if we
             * stored everything together. */
            for (MZWindow window : workingQueue) {
                // TODO: count the number of times we add intensities to each window's accumulator for MS1-style warnings.
                counterBuffer.rewind(); // Already flipped.
                mzWindowTripleBuffers[window.getIndex()] = // Must assign when calling appendOrRealloc.
                        Utils.appendOrRealloc(mzWindowTripleBuffers[window.getIndex()], counterBuffer);
            }

            // Both buffers were flipped after writing, so rewind (to be safe) before reading them for the DB put.
            counterBuffer.rewind();
            valBuffer.rewind();
            writeBatch.put(ColumnFamilies.ID_TO_TRIPLE, Utils.toCompactArray(counterBuffer),
                    Utils.toCompactArray(valBuffer));

            // Rewind again for another read.
            counterBuffer.rewind();
            triplesForThisTime.put(counterBuffer);
        }

        writeBatch.write();

        assert (triplesForThisTime.position() == triplesForThisTime.capacity());

        ByteBuffer timeBuffer = ByteBuffer.allocate(Float.BYTES).putFloat(time);
        timeBuffer.flip(); // Prep both buffers for reading so they can be written to the DB.
        triplesForThisTime.flip();
        dbAndHandles.put(ColumnFamilies.TIMEPOINT_TO_TRIPLES, Utils.toCompactArray(timeBuffer),
                Utils.toCompactArray(triplesForThisTime));

        timepoints.add(time);

        spectrumCounter++;
        if (spectrumCounter % 1000 == 0) {
            LOGGER.info("Extracted %d time spectra", spectrumCounter);
        }
    }
    LOGGER.info("Extracted %d total time spectra", spectrumCounter);

    // Now write all of the mzWindow-to-triples indexes.
    RocksDBAndHandles.RocksDBWriteBatch<ColumnFamilies> writeBatch = dbAndHandles.makeWriteBatch();
    ByteBuffer idBuffer = ByteBuffer.allocate(Integer.BYTES);
    for (int i = 0; i < mzWindowTripleBuffers.length; i++) {
        idBuffer.clear();
        idBuffer.putInt(windows.get(i).getIndex());
        idBuffer.flip();

        ByteBuffer triplesBuffer = mzWindowTripleBuffers[i];
        triplesBuffer.flip(); // Prep for read.

        writeBatch.put(ColumnFamilies.WINDOW_ID_TO_TRIPLES, Utils.toCompactArray(idBuffer),
                Utils.toCompactArray(triplesBuffer));
    }
    writeBatch.write();

    dbAndHandles.put(ColumnFamilies.TIMEPOINTS, TIMEPOINTS_KEY, Utils.floatListToByteArray(timepoints));
    dbAndHandles.flush(true);
}
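
The tbdQueue and workingQueue loops above are the sweep-line core: peekFirst() tests the head of a min-sorted queue against the current critical point, and the head is popped only once it crosses the threshold. A self-contained sketch of those two steps, with a hypothetical Window record standing in for MZWindow:

import java.util.LinkedList;

public class SweepLineDemo {
    // Hypothetical stand-in for MZWindow: a [min, max] range; the queue is kept sorted by min.
    record Window(double min, double max) {}

    public static void main(String[] args) {
        LinkedList<Window> tbdQueue = new LinkedList<>();
        tbdQueue.add(new Window(1.0, 2.0));
        tbdQueue.add(new Window(1.5, 3.0));
        tbdQueue.add(new Window(4.0, 5.0));
        LinkedList<Window> workingQueue = new LinkedList<>();

        double mz = 1.6; // The current critical point of the sweep.

        // Shift every window whose range has started onto the working queue.
        while (!tbdQueue.isEmpty() && tbdQueue.peekFirst().min() <= mz) {
            workingQueue.add(tbdQueue.pop());
        }
        // Drop windows the sweep has already passed.
        while (!workingQueue.isEmpty() && workingQueue.peekFirst().max() < mz) {
            workingQueue.pop();
        }
        System.out.println(workingQueue); // [Window[min=1.0, max=2.0], Window[min=1.5, max=3.0]]
    }
}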

From source file:com.mirth.connect.donkey.server.channel.RecoveryTask.java

private Void doCall() throws Exception {
    StorageSettings storageSettings = channel.getStorageSettings();
    Long maxMessageId = null;
    // The number of messages that were attempted to be recovered
    long attemptedMessages = 0L;
    // The number of messages that were successfully recovered
    long recoveredMessages = 0L;

    // The buffer size for each sub-task
    int sourceBufferSize = 1;
    int unfinishedBufferSize = 10;
    int pendingBufferSize = 10;
    // The minimum message Id that can be retrieved for the next query.
    long sourceMinMessageId = 0L;
    long unfinishedMinMessageId = 0L;
    long pendingMinMessageId = 0L;
    // The completed status of each sub-task
    boolean sourceComplete = false;
    boolean unfinishedComplete = false;
    boolean pendingComplete = false;
    // The queue buffer for each sub-task
    LinkedList<ConnectorMessage> sourceConnectorMessages = new LinkedList<ConnectorMessage>();
    LinkedList<Message> unfinishedMessages = new LinkedList<Message>();
    LinkedList<Message> pendingMessages = new LinkedList<Message>();

    do {
        ThreadUtils.checkInterruptedStatus();
        DonkeyDao dao = channel.getDaoFactory().getDao();

        try {
            if (maxMessageId == null) {
                // Cache the max messageId of the channel to be used in the query below
                maxMessageId = dao.getMaxMessageId(channel.getChannelId());
            }

            if (!sourceComplete && sourceConnectorMessages.isEmpty()) {
                // Fill the buffer
                sourceConnectorMessages
                        .addAll(dao.getConnectorMessages(channel.getChannelId(), channel.getServerId(), 0,
                                Status.RECEIVED, 0, sourceBufferSize, sourceMinMessageId, maxMessageId));

                // Mark the sub-task as completed if no messages were retrieved by the query to prevent the query from running again
                if (sourceConnectorMessages.isEmpty()) {
                    sourceComplete = true;
                } else {
                    /*
                     * If the source queue is on, these messages are usually ignored. Therefore
                     * we only retrieve one of these messages until we know for sure that we'll
                     * need to recover them.
                     */
                    sourceBufferSize = 100;
                }
            }

            if (!unfinishedComplete && unfinishedMessages.isEmpty()) {
                // Fill the buffer
                unfinishedMessages.addAll(dao.getUnfinishedMessages(channel.getChannelId(),
                        channel.getServerId(), unfinishedBufferSize, unfinishedMinMessageId));

                // Mark the sub-task as completed if no messages were retrieved by the query to prevent the query from running again
                if (unfinishedMessages.isEmpty()) {
                    unfinishedComplete = true;
                }
            }

            if (!pendingComplete && pendingMessages.isEmpty()) {
                // Fill the buffer
                pendingMessages.addAll(dao.getPendingConnectorMessages(channel.getChannelId(),
                        channel.getServerId(), pendingBufferSize, pendingMinMessageId));

                // Mark the sub-task as completed if no messages were retrieved by the query to prevent the query from running again
                if (pendingMessages.isEmpty()) {
                    pendingComplete = true;
                }
            }
        } finally {
            dao.close();
        }

        // Retrieve the first message of each sub-task
        ConnectorMessage sourceConnectorMessage = sourceConnectorMessages.peekFirst();
        Message unfinishedMessage = unfinishedMessages.peekFirst();
        Message pendingMessage = pendingMessages.peekFirst();

        if (!storageSettings.isMessageRecoveryEnabled()) {
            sourceComplete = true;
            unfinishedComplete = true;
            pendingComplete = true;
            if (unfinishedMessage != null || pendingMessage != null || (sourceConnectorMessage != null
                    && channel.getSourceConnector().isRespondAfterProcessing())) {
                logger.info("Incomplete messages found for channel " + channel.getName() + " ("
                        + channel.getChannelId()
                        + ") but message storage settings do not support recovery. Skipping recovery task.");
            }
        } else {
            Long messageId = null;

            try {
                /*
                 * Perform a 3-way merge. The sub-task that has the lowest messageId will be
                 * executed first. However it is possible for the unfinishedMessage and
                 * pendingMessage to have the same messageId. In these cases the unfinished
                 * sub-task should be executed and the pending sub-task should be ignored
                 */
                if (sourceConnectorMessage != null
                        && (unfinishedMessage == null
                                || sourceConnectorMessage.getMessageId() < unfinishedMessage.getMessageId())
                        && (pendingMessage == null
                                || sourceConnectorMessage.getMessageId() < pendingMessage.getMessageId())) {
                    if (!channel.getSourceConnector().isRespondAfterProcessing() && unfinishedComplete
                            && pendingComplete) {
                        /*
                         * If the other two sub-tasks are completed already and the source queue
                         * is enabled for this channel, then there is no need to continue
                         * recovering source RECEIVED messages because they will be picked up by
                         * the source queue.
                         */
                        sourceComplete = true;
                    } else {
                        // Store the messageId so we can log it out if an exception occurs
                        messageId = sourceConnectorMessage.getMessageId();
                        // Remove the message from the buffer and update the minMessageId
                        sourceMinMessageId = sourceConnectorMessages.pollFirst().getMessageId() + 1;

                        if (attemptedMessages++ == 0) {
                            logger.info("Starting message recovery for channel " + channel.getName() + " ("
                                    + channel.getChannelId() + "). Incomplete messages found.");
                        }

                        // Execute the recovery process for this message
                        channel.process(sourceConnectorMessage, true);
                        // Use this to decrement the queue size
                        channel.getSourceQueue().decrementSize();
                        // Increment the number of successfully recovered messages
                        recoveredMessages++;
                    }
                } else if (unfinishedMessage != null && (pendingMessage == null
                        || unfinishedMessage.getMessageId() <= pendingMessage.getMessageId())) {
                    // Store the messageId so we can log it out if an exception occurs
                    messageId = unfinishedMessage.getMessageId();
                    // Remove the message from the buffer and update the minMessageId
                    unfinishedMinMessageId = unfinishedMessages.pollFirst().getMessageId() + 1;

                    // If the unfinishedMessage and pendingMessage have the same messageId, remove the pendingMessage from the buffer
                    if (pendingMessage != null
                            && unfinishedMessage.getMessageId() == pendingMessage.getMessageId()) {
                        pendingMinMessageId = pendingMessages.pollFirst().getMessageId() + 1;
                        pendingMessage = pendingMessages.peekFirst();
                    }

                    if (attemptedMessages++ == 0) {
                        logger.info("Starting message recovery for channel " + channel.getName() + " ("
                                + channel.getChannelId() + "). Incomplete messages found.");
                    }

                    // Execute the recovery process for this message
                    recoverUnfinishedMessage(unfinishedMessage);
                    // Increment the number of successfully recovered messages
                    recoveredMessages++;
                } else if (pendingMessage != null) {
                    // Store the messageId so we can log it out if an exception occurs
                    messageId = pendingMessage.getMessageId();
                    // Remove the message from the buffer and update the minMessageId
                    pendingMinMessageId = pendingMessages.pollFirst().getMessageId() + 1;

                    if (attemptedMessages++ == 0) {
                        logger.info("Starting message recovery for channel " + channel.getName() + " ("
                                + channel.getChannelId() + "). Incomplete messages found.");
                    }

                    // Execute the recovery process for this message
                    recoverPendingMessage(pendingMessage);
                    // Increment the number of successfully recovered messages
                    recoveredMessages++;
                }
            } catch (InterruptedException e) {
                // This should only occur if a halt was requested, so stop the entire recovery task
                throw e;
            } catch (Exception e) {
                /*
                 * If an exception occurs we skip the message and log an error. This is to
                 * prevent one bad exception or message from locking the entire channel.
                 * 
                 * If a non-Exception gets thrown (OutOfMemoryError, etc.) then it will
                 * intentionally not be caught here and the recovery task will be stopped.
                 */
                logger.error("Failed to recover message " + messageId + " for channel " + channel.getName()
                        + " (" + channel.getChannelId() + "): \n" + ExceptionUtils.getStackTrace(e));
            }
        }
    } while (!unfinishedComplete || !pendingComplete || !sourceComplete);

    if (attemptedMessages > 0) {
        logger.info("Completed message recovery for channel " + channel.getName() + " ("
                + channel.getChannelId() + "). Successfully recovered " + recoveredMessages + " out of "
                + attemptedMessages + " messages.");
    }

    return null;
}
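
Here peekFirst() drives the 3-way merge: the heads of the three buffers are compared without being consumed, and only the winning buffer is advanced via pollFirst(). A minimal sketch of the same merge skeleton, reduced to two queues of invented message IDs:

import java.util.LinkedList;
import java.util.List;

public class MergeDemo {
    public static void main(String[] args) {
        LinkedList<Long> unfinished = new LinkedList<>(List.of(1L, 4L, 6L));
        LinkedList<Long> pending = new LinkedList<>(List.of(2L, 4L, 5L));

        while (!unfinished.isEmpty() || !pending.isEmpty()) {
            // Peek both heads without consuming; null marks an exhausted queue.
            Long headU = unfinished.peekFirst();
            Long headP = pending.peekFirst();
            // On a tie, the unfinished queue wins and the pending entry is dropped,
            // mirroring the unfinished-over-pending rule in the example above.
            if (headU != null && (headP == null || headU <= headP)) {
                System.out.println("unfinished: " + unfinished.pollFirst());
                if (headP != null && headP.equals(headU)) {
                    pending.pollFirst(); // Discard the duplicate pending entry.
                }
            } else {
                System.out.println("pending: " + pending.pollFirst());
            }
        }
    }
}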