List of usage examples for java.util LinkedList pop
public E pop()
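LinkedList implements Deque, so pop() removes and returns the element at the head of the list and throws NoSuchElementException when the list is empty. A minimal sketch of the stack behavior:

import java.util.LinkedList;

public class PopDemo {
    public static void main(String[] args) {
        LinkedList<String> stack = new LinkedList<>();
        stack.push("a");                 // equivalent to addFirst("a")
        stack.push("b");
        System.out.println(stack.pop()); // "b" -- last pushed, first popped
        System.out.println(stack.pop()); // "a"
        // stack.pop();                  // would throw NoSuchElementException
    }
}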
From source file:com.act.lcms.v2.fullindex.Builder.java
protected void extractTriples(Iterator<LCMSSpectrum> iter, List<MZWindow> windows)
        throws RocksDBException, IOException {
    /* Warning: this method makes heavy use of ByteBuffers to perform memory efficient collection of values and
     * conversion of those values into byte arrays that RocksDB can consume.  If you haven't already, go read this
     * tutorial on ByteBuffers: http://mindprod.com/jgloss/bytebuffer.html
     *
     * ByteBuffers are quite low-level structures, and they use some terms you need to watch out for:
     *   capacity:  The total number of bytes in the array backing the buffer.  Don't write more than this.
     *   position:  The next index in the buffer to read or write a byte.  Moves with each read or write op.
     *   limit:     A mark of where the final byte in the buffer was written.  Don't read past this.
     *              The remaining() call is affected by the limit.
     *   mark:      Ignore this for now, we don't use it.  (We'll always, always read buffers from 0.)
     *
     * And here are some methods that we'll use often:
     *   clear:     Set position = 0, limit = capacity.  Pretend the buffer is empty, and is ready for more writes.
     *   flip:      Set limit = position, then position = 0.  This remembers how many bytes were written to the buffer
     *              (as the current position), and then puts the position at the beginning.
     *              Always call this after the write before a read.
     *   rewind:    Set position = 0.  Buffer is ready for reading, but unless the limit was set we might not know how
     *              many bytes there are to read.  Always call flip() before rewind().  Can rewind many times to re-read
     *              the buffer repeatedly.
     *   remaining: How many bytes do we have left to read?  Requires an accurate limit value to avoid garbage bytes.
     *   reset:     Don't use this.  It uses the mark, which we don't need currently.
     *
     * Write/read patterns look like:
     *   buffer.clear(); // Clear out anything already in the buffer.
     *   buffer.put(thing1).put(thing2)... // write a bunch of stuff
     *   buffer.flip(); // Prep for reading.  Call *once*!
     *
     *   while (buffer.hasRemaining()) { buffer.get(); } // Read a bunch of stuff.
     *   buffer.rewind(); // Ready for reading again!
     *   while (buffer.hasRemaining()) { buffer.get(); } // Etc.
     *   buffer.clear(); // Forget what was written previously, buffer is ready for reuse.
     *
     * We use byte buffers because they're fast, efficient, and offer incredibly convenient means of serializing a
     * stream of primitive types to their minimal binary representations.  The same operations on objects + object
     * streams require significantly more CPU cycles, consume more memory, and tend to be brittle (i.e. if a class
     * definition changes slightly, serialization may break).  Since the data we're dealing with is pretty simple, we
     * opt for the low-level approach.
     */

    /* Because we'll eventually use the window indices to map a mz range to a list of triples that fall within that
     * range, verify that all of the indices are unique.  If they're not, we'll end up overwriting the data in and
     * corrupting the structure of the index. */
    ensureUniqueMZWindowIndices(windows);

    // For every mz window, allocate a buffer to hold the indices of the triples that fall in that window.
    ByteBuffer[] mzWindowTripleBuffers = new ByteBuffer[windows.size()];
    for (int i = 0; i < mzWindowTripleBuffers.length; i++) {
        /* Note: the mapping between these buffers and their respective mzWindows is purely positional.  Specifically,
         * mzWindows.get(i).getIndex() != i, but mzWindowTripleBuffers[i] belongs to mzWindows.get(i).  We'll map window
         * indices to the contents of mzWindowTripleBuffers at the very end of this function. */
        mzWindowTripleBuffers[i] = ByteBuffer.allocate(Long.BYTES * 4096); // Start with 4096 longs = 8 pages per window.
    }

    // Every TMzI gets an index which we'll use later when we're querying by m/z and time.
    long counter = -1; // We increment at the top of the loop.

    // Note: we could also write to an mmapped file and just track pointers, but then we might lose out on compression.

    // We allocate all the buffers strictly here, as we know how many bytes a long and a triple will take.  Then reuse!
    ByteBuffer counterBuffer = ByteBuffer.allocate(Long.BYTES);
    ByteBuffer valBuffer = ByteBuffer.allocate(TMzI.BYTES);

    List<Float> timepoints = new ArrayList<>(2000); // We can be sloppy here, as the count is small.

    /* We use a sweep-line approach to scanning through the m/z windows so that we can aggregate all intensities in
     * one pass over the current LCMSSpectrum (this saves us one inner loop in our extraction process).  The m/z
     * values in the LCMSSpectrum become our "critical" or "interesting points" over which we sweep our m/z ranges.
     * The next window in m/z order is guaranteed to be the next one we want to consider since we address the points
     * in m/z order as well.  As soon as we've passed out of the range of one of our windows, we discard it.  It is
     * valid for a window to be added to and discarded from the working queue in one application of the work loop. */
    LinkedList<MZWindow> tbdQueueTemplate = new LinkedList<>(windows); // We can reuse this template to init the sweep.

    int spectrumCounter = 0;
    while (iter.hasNext()) {
        LCMSSpectrum spectrum = iter.next();
        float time = spectrum.getTimeVal().floatValue();

        // This will record all the m/z + intensity readings that correspond to this timepoint.  Exactly sized too!
        ByteBuffer triplesForThisTime = ByteBuffer.allocate(Long.BYTES * spectrum.getIntensities().size());

        // Batch up all the triple writes to reduce the number of times we hit the disk in this loop.
        // Note: huge success!
        RocksDBAndHandles.RocksDBWriteBatch<ColumnFamilies> writeBatch = dbAndHandles.makeWriteBatch();

        // Initialize the sweep line lists.  Windows flow: tbd -> working -> done (nowhere).
        LinkedList<MZWindow> workingQueue = new LinkedList<>();
        LinkedList<MZWindow> tbdQueue = (LinkedList<MZWindow>) tbdQueueTemplate.clone(); // clone() is in the docs, so okay!

        for (Pair<Double, Double> mzIntensity : spectrum.getIntensities()) {
            // Very important: increment the counter for every triple.  Otherwise we'll overwrite triples = Very Bad (tm).
            counter++;

            // Brevity = soul of wit!
            Double mz = mzIntensity.getLeft();
            Double intensity = mzIntensity.getRight();

            // Reset the buffers so we end up re-using the few bytes we've allocated.
            counterBuffer.clear(); // Empty (virtually).
            counterBuffer.putLong(counter);
            counterBuffer.flip(); // Prep for reading.

            valBuffer.clear(); // Empty (virtually).
            TMzI.writeToByteBuffer(valBuffer, time, mz, intensity.floatValue());
            valBuffer.flip(); // Prep for reading.

            // First, shift any applicable ranges onto the working queue based on their minimum mz.
            while (!tbdQueue.isEmpty() && tbdQueue.peekFirst().getMin() <= mz) {
                workingQueue.add(tbdQueue.pop());
            }

            // Next, remove any ranges we've passed.
            while (!workingQueue.isEmpty() && workingQueue.peekFirst().getMax() < mz) {
                workingQueue.pop(); // TODO: add() this to a recovery queue which can then become the tbdQueue.  Edge cases!
            }

            /* In the old indexed trace extractor world, we could bail here if there were no target m/z's in our window set
             * that matched with the m/z of our current mzIntensity.  However, since we're now also recording the links
             * between timepoints and their (t, m/z, i) triples, we need to keep on keepin' on regardless of whether we have
             * any m/z windows in the working set right now. */

            // The working queue should now hold only ranges that include this m/z value.  Sweep line swept!

            /* Now add this intensity to the buffers of all the windows in the working queue.  Note that since we're only
             * storing the *index* of the triple, these buffers are going to consume less space than they would if we
             * stored everything together. */
            for (MZWindow window : workingQueue) {
                // TODO: count the number of times we add intensities to each window's accumulator for MS1-style warnings.
                counterBuffer.rewind(); // Already flipped.
                mzWindowTripleBuffers[window.getIndex()] = // Must assign when calling appendOrRealloc.
                        Utils.appendOrRealloc(mzWindowTripleBuffers[window.getIndex()], counterBuffer);
            }

            // We flipped after writing, so we should be good to rewind (to be safe) and write here.
            counterBuffer.rewind();
            valBuffer.rewind();
            writeBatch.put(ColumnFamilies.ID_TO_TRIPLE,
                    Utils.toCompactArray(counterBuffer), Utils.toCompactArray(valBuffer));

            // Rewind again for another read.
            counterBuffer.rewind();
            triplesForThisTime.put(counterBuffer);
        }

        writeBatch.write();

        assert (triplesForThisTime.position() == triplesForThisTime.capacity());

        ByteBuffer timeBuffer = ByteBuffer.allocate(Float.BYTES).putFloat(time);
        timeBuffer.flip(); // Prep both buffers for reading so they can be written to the DB.
        triplesForThisTime.flip();
        dbAndHandles.put(ColumnFamilies.TIMEPOINT_TO_TRIPLES,
                Utils.toCompactArray(timeBuffer), Utils.toCompactArray(triplesForThisTime));

        timepoints.add(time);

        spectrumCounter++;
        if (spectrumCounter % 1000 == 0) {
            LOGGER.info("Extracted %d time spectra", spectrumCounter);
        }
    }
    LOGGER.info("Extracted %d total time spectra", spectrumCounter);

    // Now write all the mzWindow to triple indexes.
    RocksDBAndHandles.RocksDBWriteBatch<ColumnFamilies> writeBatch = dbAndHandles.makeWriteBatch();
    ByteBuffer idBuffer = ByteBuffer.allocate(Integer.BYTES);
    for (int i = 0; i < mzWindowTripleBuffers.length; i++) {
        idBuffer.clear();
        idBuffer.putInt(windows.get(i).getIndex());
        idBuffer.flip();

        ByteBuffer triplesBuffer = mzWindowTripleBuffers[i];
        triplesBuffer.flip(); // Prep for read.

        writeBatch.put(ColumnFamilies.WINDOW_ID_TO_TRIPLES,
                Utils.toCompactArray(idBuffer), Utils.toCompactArray(triplesBuffer));
    }
    writeBatch.write();

    dbAndHandles.put(ColumnFamilies.TIMEPOINTS, TIMEPOINTS_KEY, Utils.floatListToByteArray(timepoints));
    dbAndHandles.flush(true);
}
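The clear/flip/rewind discipline that the long comment above describes can be demonstrated in isolation. A minimal, self-contained sketch using only the JDK (no project classes assumed):

import java.nio.ByteBuffer;

public class ByteBufferPatternDemo {
    public static void main(String[] args) {
        ByteBuffer buffer = ByteBuffer.allocate(Long.BYTES * 2);

        buffer.clear();                  // Position = 0, limit = capacity: ready for writes.
        buffer.putLong(42L).putLong(7L);
        buffer.flip();                   // Limit = position, position = 0: ready for reads.  Call once!

        while (buffer.hasRemaining()) {
            System.out.println(buffer.getLong()); // 42, then 7
        }

        buffer.rewind();                 // Position = 0, limit untouched: re-read the same bytes.
        System.out.println(buffer.getLong());     // 42 again
    }
}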
From source file:org.nd4j.linalg.util.ArrayUtil.java
/**
 * Convert an arbitrary-dimensional rectangular float array to a flat vector.<br>
 * Can pass float[], float[][], float[][][], etc.
 */
public static float[] flattenFloatArray(Object floatArray) {
    if (floatArray instanceof float[])
        return (float[]) floatArray;

    LinkedList<Object> stack = new LinkedList<>();
    stack.push(floatArray);

    int[] shape = arrayShape(floatArray);
    int length = ArrayUtil.prod(shape);
    float[] flat = new float[length];
    int count = 0;

    while (!stack.isEmpty()) {
        Object current = stack.pop();
        if (current instanceof float[]) {
            float[] arr = (float[]) current;
            for (int i = 0; i < arr.length; i++)
                flat[count++] = arr[i];
        } else if (current instanceof Object[]) {
            Object[] o = (Object[]) current;
            for (int i = o.length - 1; i >= 0; i--)
                stack.push(o[i]);
        } else
            throw new IllegalArgumentException("Base array is not float[]");
    }

    if (count != flat.length)
        throw new IllegalArgumentException("Fewer elements than expected. Array is ragged?");
    return flat;
}
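Note that the children of an Object[] are pushed in reverse order: pop() takes from the head of the LinkedList, so reversing keeps the sub-arrays visited left to right. A hypothetical call, to show the shape of the transformation (the nested literal is illustrative):

float[][] grid = { { 1f, 2f }, { 3f, 4f } };
float[] flat = ArrayUtil.flattenFloatArray(grid); // -> [1.0, 2.0, 3.0, 4.0]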
From source file:org.nd4j.linalg.util.ArrayUtil.java
/**
 * Convert an arbitrary-dimensional rectangular double array to a flat vector.<br>
 * Can pass double[], double[][], double[][][], etc.
 */
public static double[] flattenDoubleArray(Object doubleArray) {
    if (doubleArray instanceof double[])
        return (double[]) doubleArray;

    LinkedList<Object> stack = new LinkedList<>();
    stack.push(doubleArray);

    int[] shape = arrayShape(doubleArray);
    int length = ArrayUtil.prod(shape);
    double[] flat = new double[length];
    int count = 0;

    while (!stack.isEmpty()) {
        Object current = stack.pop();
        if (current instanceof double[]) {
            double[] arr = (double[]) current;
            for (int i = 0; i < arr.length; i++)
                flat[count++] = arr[i];
        } else if (current instanceof Object[]) {
            Object[] o = (Object[]) current;
            for (int i = o.length - 1; i >= 0; i--)
                stack.push(o[i]);
        } else
            throw new IllegalArgumentException("Base array is not double[]");
    }

    if (count != flat.length)
        throw new IllegalArgumentException("Fewer elements than expected. Array is ragged?");
    return flat;
}
From source file:jp.co.atware.solr.geta.GETAssocComponent.java
/**
 * Converts a GETAssoc result into a <code>NamedList</code> structure.
 *
 * @param inputStream stream containing the GETAssoc result
 * @return the result as a <code>NamedList</code>
 * @throws FactoryConfigurationError
 * @throws IOException
 */
protected NamedList<Object> convertResult(InputStream inputStream)
        throws FactoryConfigurationError, IOException {
    NamedList<Object> result = new NamedList<Object>();
    LinkedList<NamedList<Object>> stack = new LinkedList<NamedList<Object>>();
    stack.push(result);
    try {
        XMLStreamReader xml = XMLInputFactory.newInstance().createXMLStreamReader(inputStream);
        while (xml.hasNext()) {
            switch (xml.getEventType()) {
            case XMLStreamConstants.START_ELEMENT:
                NamedList<Object> element = new NamedList<Object>();
                stack.peek().add(xml.getName().toString(), element);
                stack.push(element);
                for (int i = 0; i < xml.getAttributeCount(); i++) {
                    String name = xml.getAttributeName(i).toString();
                    String value = xml.getAttributeValue(i);
                    ValueOf valueOf = valueTransMap.get(name);
                    if (valueOf != null) {
                        try {
                            element.add(name, valueOf.toValue(value));
                        } catch (NumberFormatException e) {
                            element.add(name, value);
                        }
                    } else {
                        element.add(name, value);
                    }
                }
                break;
            case XMLStreamConstants.END_ELEMENT:
                stack.pop();
                break;
            default:
                break;
            }
            xml.next();
        }
        xml.close();
    } catch (XMLStreamException e) {
        throw new IOException(e);
    }
    LOG.debug(result.toString());
    return result;
}
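The push-on-START_ELEMENT / pop-on-END_ELEMENT pattern above is the standard way to mirror XML nesting with a LinkedList used as a stack. A stripped-down, self-contained sketch of the same idea (the element names and sample XML string are illustrative only):

import java.io.StringReader;
import java.util.LinkedList;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;

public class StaxStackDemo {
    public static void main(String[] args) throws Exception {
        String xml = "<a><b><c/></b></a>"; // illustrative input
        LinkedList<String> open = new LinkedList<>();
        XMLStreamReader r = XMLInputFactory.newInstance()
                .createXMLStreamReader(new StringReader(xml));
        while (r.hasNext()) {
            switch (r.getEventType()) {
            case XMLStreamConstants.START_ELEMENT:
                open.push(r.getLocalName());              // descend one level
                System.out.println("enter " + open);
                break;
            case XMLStreamConstants.END_ELEMENT:
                System.out.println("leave " + open.pop()); // back up one level
                break;
            }
            r.next();
        }
    }
}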
From source file:com.oltpbenchmark.benchmarks.auctionmark.AuctionMarkProfile.java
private boolean addItem(LinkedList<ItemInfo> items, ItemInfo itemInfo) {
    boolean added = false;

    int idx = items.indexOf(itemInfo);
    if (idx != -1) {
        // HACK: Always swap existing ItemInfos with our new one, since it will have
        // more up-to-date information
        ItemInfo existing = items.set(idx, itemInfo);
        assert (existing != null);
        return (true);
    }
    if (itemInfo.hasCurrentPrice())
        assert (itemInfo.getCurrentPrice() > 0) : "Negative current price for " + itemInfo;

    // If we have room, shove it right in.
    // We'll throw it in the back because we know it hasn't been used yet.
    if (items.size() < AuctionMarkConstants.ITEM_ID_CACHE_SIZE) {
        items.addLast(itemInfo);
        added = true;
    // Otherwise, we'll randomly decide whether to pop one out.
    } else if (this.rng.nextBoolean()) {
        items.pop();
        items.addLast(itemInfo);
        added = true;
    }
    return (added);
}
From source file:org.eclipse.che.vfs.impl.fs.FSMountPoint.java
private void doDelete(VirtualFileImpl virtualFile, String lockToken) throws ForbiddenException, ServerException {
    if (virtualFile.isFolder()) {
        final LinkedList<VirtualFile> q = new LinkedList<>();
        q.add(virtualFile);
        while (!q.isEmpty()) {
            for (VirtualFile child : doGetChildren((VirtualFileImpl) q.pop(), SERVICE_GIT_DIR_FILTER)) {
                // Check permission directly for the current file only.
                // We already know the parent may be deleted by the current user, otherwise we should not be here.
                if (!hasPermission((VirtualFileImpl) child, BasicPermissions.WRITE.value(), false)) {
                    throw new ForbiddenException(String
                            .format("Unable delete item '%s'. Operation not permitted. ", child.getPath()));
                }
                if (child.isFolder()) {
                    q.push(child);
                } else if (isLocked((VirtualFileImpl) child)) {
                    // Do not check the lock token here. It is checked only when removing a file directly.
                    // If a folder contains locked children it may not be deleted.
                    throw new ForbiddenException(
                            String.format("Unable delete item '%s'. Child item '%s' is locked. ",
                                    virtualFile.getPath(), child.getPath()));
                }
            }
        }
    }

    // unlock file
    if (virtualFile.isFile()) {
        final FileLock fileLock = checkIsLockValidAndGet(virtualFile);
        if (NO_LOCK != fileLock) {
            doUnlock(virtualFile, fileLock, lockToken);
        }
    }

    // clear caches
    clearAclCache();
    clearLockTokensCache();
    clearMetadataCache();

    final String path = virtualFile.getPath();
    boolean isFile = virtualFile.isFile();
    if (!deleteRecursive(virtualFile.getIoFile())) {
        LOG.error("Unable delete file {}", virtualFile.getIoFile());
        throw new ServerException(String.format("Unable delete item '%s'. ", path));
    }

    // delete ACL file
    final java.io.File aclFile =
            new java.io.File(ioRoot, toIoPath(getAclFilePath(virtualFile.getVirtualFilePath())));
    if (!aclFile.delete()) {
        if (aclFile.exists()) {
            LOG.error("Unable delete ACL file {}", aclFile);
            throw new ServerException(String.format("Unable delete item '%s'. ", path));
        }
    }

    // delete metadata file
    final java.io.File metadataFile =
            new java.io.File(ioRoot, toIoPath(getMetadataFilePath(virtualFile.getVirtualFilePath())));
    if (!metadataFile.delete()) {
        if (metadataFile.exists()) {
            LOG.error("Unable delete file metadata {}", metadataFile);
            throw new ServerException(String.format("Unable delete item '%s'. ", path));
        }
    }

    if (searcherProvider != null) {
        try {
            searcherProvider.getSearcher(this, true).delete(path, isFile);
        } catch (ServerException e) {
            LOG.error(e.getMessage(), e);
        }
    }
}
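Note how the traversal above mixes head and tail operations on the same LinkedList: add() appends at the tail, while push() and pop() work at the head. Since LinkedList implements Deque, these are aliases for the positional methods, which a minimal sketch makes explicit:

import java.util.LinkedList;

public class DequeAliasDemo {
    public static void main(String[] args) {
        LinkedList<Integer> list = new LinkedList<>();
        list.add(1);                    // addLast(1)  -> [1]
        list.push(2);                   // addFirst(2) -> [2, 1]
        list.add(3);                    // addLast(3)  -> [2, 1, 3]
        System.out.println(list.pop()); // removeFirst() -> 2, list is now [1, 3]
        System.out.println(list.pop()); // 1
        System.out.println(list.pop()); // 3
    }
}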
From source file:org.eclipse.che.vfs.impl.fs.FSMountPoint.java
private void doCopy(VirtualFileImpl source, VirtualFileImpl destination) throws ServerException {
    try {
        // First copy metadata (properties) for the source.
        // If we do it this way and fail due to an I/O or other error,
        // the client will see the error and may try to copy again.
        // But if we successfully copy the tree (or single file) and then fail to copy metadata,
        // the client may not try to copy again because the copy destination already exists.

        // NOTE: Don't copy lock and permissions, just the files themselves and metadata files.

        // Recursively check permissions of sources in case of a folder,
        // and add every item the current user cannot read to a skip list.
        java.io.FilenameFilter filter = null;
        if (source.isFolder()) {
            final LinkedList<VirtualFileImpl> skipList = new LinkedList<>();
            final LinkedList<VirtualFile> q = new LinkedList<>();
            q.add(source);
            while (!q.isEmpty()) {
                for (VirtualFile current : doGetChildren((VirtualFileImpl) q.pop(), SERVICE_GIT_DIR_FILTER)) {
                    // Check permission directly for the current file only.
                    // We already know the parent is accessible for the current user, otherwise we should not be here.
                    // Ignore the item if we don't have permission to read it.
                    if (!hasPermission((VirtualFileImpl) current, BasicPermissions.READ.value(), false)) {
                        skipList.add((VirtualFileImpl) current);
                    } else {
                        if (current.isFolder()) {
                            q.add(current);
                        }
                    }
                }
            }
            if (!skipList.isEmpty()) {
                filter = new java.io.FilenameFilter() {
                    @Override
                    public boolean accept(java.io.File dir, String name) {
                        final String testPath = dir.getAbsolutePath() + java.io.File.separatorChar + name;
                        for (VirtualFileImpl skipFile : skipList) {
                            if (testPath.startsWith(skipFile.getIoFile().getAbsolutePath())) {
                                return false;
                            }
                            final java.io.File metadataFile = new java.io.File(ioRoot,
                                    toIoPath(getMetadataFilePath(skipFile.getVirtualFilePath())));
                            if (metadataFile.exists() && testPath.startsWith(metadataFile.getAbsolutePath())) {
                                return false;
                            }
                        }
                        return true;
                    }
                };
            }
        }

        final java.io.File sourceMetadataFile =
                new java.io.File(ioRoot, toIoPath(getMetadataFilePath(source.getVirtualFilePath())));
        final java.io.File destinationMetadataFile =
                new java.io.File(ioRoot, toIoPath(getMetadataFilePath(destination.getVirtualFilePath())));
        if (sourceMetadataFile.exists()) {
            nioCopy(sourceMetadataFile, destinationMetadataFile, filter);
        }
        nioCopy(source.getIoFile(), destination.getIoFile(), filter);

        if (searcherProvider != null) {
            try {
                searcherProvider.getSearcher(this, true).add(destination);
            } catch (ServerException e) {
                LOG.error(e.getMessage(), e); // just log an i/o error in the index
            }
        }
    } catch (IOException e) {
        // Do nothing for the file tree. Let the client side decide what to do.
        // The user may delete copied files (if any) and try to copy again.
        String msg = String.format("Unable copy '%s' to '%s'. ", source, destination);
        LOG.error(msg + e.getMessage(), e);
        // More details in the log, but do not show internal errors to the caller.
        throw new ServerException(msg);
    }
}
From source file:org.eclipse.che.vfs.impl.fs.FSMountPoint.java
ContentStream zip(VirtualFileImpl virtualFile, VirtualFileFilter filter) throws ForbiddenException, ServerException {
    if (!virtualFile.isFolder()) {
        throw new ForbiddenException(
                String.format("Unable export to zip. Item '%s' is not a folder. ", virtualFile.getPath()));
    }
    java.io.File zipFile = null;
    FileOutputStream out = null;
    try {
        zipFile = java.io.File.createTempFile("export", ".zip");
        out = new FileOutputStream(zipFile);
        final ZipOutputStream zipOut = new ZipOutputStream(out);
        final LinkedList<VirtualFile> q = new LinkedList<>();
        q.add(virtualFile);
        final int zipEntryNameTrim = virtualFile.getVirtualFilePath().length();
        final byte[] buff = new byte[COPY_BUFFER_SIZE];
        while (!q.isEmpty()) {
            for (VirtualFile current : doGetChildren((VirtualFileImpl) q.pop(), SERVICE_GIT_DIR_FILTER)) {
                // (1) Check filter.
                // (2) Check permission directly for current file only.
                //     We already know parent accessible for current user otherwise we should not be here.
                //     Ignore item if don't have permission to read it.
                if (filter.accept(current)
                        && hasPermission((VirtualFileImpl) current, BasicPermissions.READ.value(), false)) {
                    final String zipEntryName =
                            current.getVirtualFilePath().subPath(zipEntryNameTrim).toString().substring(1);
                    if (current.isFile()) {
                        final ZipEntry zipEntry = new ZipEntry(zipEntryName);
                        zipOut.putNextEntry(zipEntry);
                        InputStream in = null;
                        final PathLockFactory.PathLock lock = pathLockFactory
                                .getLock(current.getVirtualFilePath(), false).acquire(LOCK_FILE_TIMEOUT);
                        try {
                            zipEntry.setTime(virtualFile.getLastModificationDate());
                            in = new FileInputStream(((VirtualFileImpl) current).getIoFile());
                            int r;
                            while ((r = in.read(buff)) != -1) {
                                zipOut.write(buff, 0, r);
                            }
                        } finally {
                            closeQuietly(in);
                            lock.release();
                        }
                        zipOut.closeEntry();
                    } else if (current.isFolder()) {
                        final ZipEntry zipEntry = new ZipEntry(zipEntryName + '/');
                        zipEntry.setTime(0);
                        zipOut.putNextEntry(zipEntry);
                        q.add(current);
                        zipOut.closeEntry();
                    }
                }
            }
        }
        closeQuietly(zipOut);
        final String name = virtualFile.getName() + ".zip";
        return new ContentStream(name, new DeleteOnCloseFileInputStream(zipFile), ExtMediaType.APPLICATION_ZIP,
                zipFile.length(), new Date());
    } catch (IOException | RuntimeException ioe) {
        if (zipFile != null) {
            zipFile.delete();
        }
        throw new ServerException(ioe.getMessage(), ioe);
    } finally {
        closeQuietly(out);
    }
}
From source file:hr.fer.spocc.regex.AbstractRegularExpression.java
protected RegularExpression<T> createParseTree(List<RegularExpressionElement> elements) {
    // System.out.println(">>> Parsing regexp: "+elements);

    /*
     * Stack which contains the parts of the regular expression which are not yet
     * used by an operator. In addition, <code>null</code> values can be pushed
     * onto this stack to indicate that the symbols to the right are grouped by
     * parentheses.
     */
    LinkedList<RegularExpression<T>> symbolStack = new LinkedList<RegularExpression<T>>();

    /**
     * Operator stack
     */
    LinkedList<RegularExpressionOperator> opStack = new LinkedList<RegularExpressionOperator>();

    boolean sentinelParentheses = false;
    // if (this.elements.get(0).getElementType() != RegularExpressionElementType.LEFT_PARENTHESIS
    //         || this.elements.get(elements.size()-1).getElementType()
    //                 != RegularExpressionElementType.RIGHT_PARENTHESIS) {
    sentinelParentheses = true;
    symbolStack.push(null);
    opStack.push(null);
    // }

    int ind = -1;
    Iterator<RegularExpressionElement> iter = elements.iterator();
    while (iter.hasNext() || sentinelParentheses) {
        ++ind;
        RegularExpressionElement e;
        if (iter.hasNext()) {
            e = iter.next();
        } else {
            // ensure one extra iteration for the artificial trailing ')'
            e = RegularExpressionElements.RIGHT_PARENTHESIS;
            sentinelParentheses = false;
        }

        switch (e.getElementType()) {
        case SYMBOL:
            symbolStack.push(createTrivial(elements.subList(ind, ind + 1)));
            break;
        default:
            RegularExpressionOperator curOp = (e.getElementType() == RegularExpressionElementType.OPERATOR
                    ? (RegularExpressionOperator) e : null);
            int priority = (curOp != null ? curOp.getPriority() : -1);

            if (e.getElementType() != RegularExpressionElementType.LEFT_PARENTHESIS) {
                // System.out.println("Pre-while symbolStack: "+symbolStack);
                while (!opStack.isEmpty() && opStack.getFirst() != null
                        && opStack.getFirst().getPriority() >= priority
                        && symbolStack.getFirst() != null) {
                    RegularExpressionOperator op = opStack.pop();
                    int arity = op.getArity();
                    int elementCount = 0;
                    // System.out.println("POP: "+op);

                    @SuppressWarnings("unchecked")
                    RegularExpression<T>[] operands = new RegularExpression[arity];
                    for (int i = arity - 1; i >= 0; --i) {
                        if (symbolStack.isEmpty()) {
                            throw new IllegalArgumentException("Missing ( after");
                        } else if (symbolStack.getFirst() == null) {
                            throw new IllegalArgumentException("Missing operand #" + (arity - i)
                                    + " for the operator " + op + " before index " + ind);
                        }
                        operands[i] = symbolStack.pop();
                        elementCount += operands[i].size();
                    }

                    RegularExpression<T> regex =
                            createComposite(elements.subList(ind - elementCount - 1, ind), op, operands);
                    // System.err.println(regex);
                    // System.err.println(regex.getSubexpression(0));
                    symbolStack.push(regex);
                    // System.out.println("Group: "+ArrayToStringUtils.toString(operands, "\n"));
                    // System.out.println("End group");
                    // System.out.println("Evaluated [" + (ind-elementCount-1) + ", " + ind + "): "+regex);
                    // System.out.println("Symbol stack: "+symbolStack);
                    // System.out.println("Op stack: "+opStack);
                    // System.out.println("---");
                }
            }

            if (curOp != null) {
                opStack.push(curOp);
            } else {
                switch (e.getElementType()) {
                case LEFT_PARENTHESIS:
                    symbolStack.push(null);
                    opStack.push(null);
                    break;
                default: // i.e. ')'
                    Validate.isTrue(symbolStack.size() >= 2, "Exactly one expression is expected "
                            + "inside parentheses before index " + ind);

                    // pop the left bracket (null) from the operator stack
                    Object nullValue = opStack.pop();
                    Validate.isTrue(nullValue == null);

                    // pop the expression inside the parentheses from the symbol stack
                    RegularExpression<T> regex = symbolStack.pop();
                    // pop the left bracket (null) from the symbol stack
                    nullValue = symbolStack.pop();
                    // check that the left bracket was indeed removed
                    // Validate.isTrue(nullValue == null, "Expected ( at index " + (ind-regex.size()-1));

                    // expand the expression if the parentheses are not sentinels
                    if (sentinelParentheses) { // XXX a separate flag would be better here
                        // System.out.print("Expand [" + (ind - regex.size() - 1) + ", " + (ind + 1) + "]: ");
                        // System.out.println("[regex size = "+regex.size() + "]");
                        regex = createExpanded(regex, elements.subList(ind - regex.size() - 1, ind + 1));
                        // System.out.println(" -> "+regex);
                    }

                    // and put back the expression inside the parentheses
                    symbolStack.push(regex);
                }
            }
        } // end of switch

        // System.out.println("----- " + ind + " ----");
        // System.out.println("Symbol stack: "+symbolStack);
        // System.out.println("Op stack: "+opStack);
    }

    // Validate.isTrue(symbolStack.size() == 1);
    // Validate.isTrue(opStack.isEmpty());
    return symbolStack.pop();
}
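This is essentially the classic two-stack (shunting-yard style) evaluation scheme, with null acting as the parenthesis sentinel. The skeleton is easier to see on a toy infix evaluator; the following sketch (a hypothetical helper, not part of the library above) applies the same "reduce while the stacked operator has equal-or-higher priority" rule to integer + and *:

import java.util.LinkedList;

public class TwoStackEval {
    static int priority(char op) { return op == '*' ? 2 : 1; } // '+' binds weaker than '*'

    // Evaluates expressions like "1+2*3" (single-digit operands, no parentheses).
    static int eval(String expr) {
        LinkedList<Integer> values = new LinkedList<>();
        LinkedList<Character> ops = new LinkedList<>();
        for (char c : expr.toCharArray()) {
            if (Character.isDigit(c)) {
                values.push(c - '0');
            } else {
                // Reduce while the operator on the stack has equal-or-higher priority.
                while (!ops.isEmpty() && priority(ops.peek()) >= priority(c)) {
                    apply(values, ops.pop());
                }
                ops.push(c);
            }
        }
        while (!ops.isEmpty()) {
            apply(values, ops.pop());
        }
        return values.pop();
    }

    static void apply(LinkedList<Integer> values, char op) {
        int right = values.pop(); // operands come off in reverse order
        int left = values.pop();
        values.push(op == '*' ? left * right : left + right);
    }

    public static void main(String[] args) {
        System.out.println(eval("1+2*3")); // 7
        System.out.println(eval("2*3+4")); // 10
    }
}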
From source file:org.apache.airavata.sharing.registry.server.SharingRegistryServerHandler.java
private boolean shareEntity(String domainId, String entityId, List<String> groupOrUserList,
        String permissionTypeId, boolean cascadePermission) throws SharingRegistryException, TException {
    try {
        if (permissionTypeId
                .equals((new PermissionTypeRepository()).getOwnerPermissionTypeIdForDomain(domainId))) {
            throw new SharingRegistryException(
                    OWNER_PERMISSION_NAME + " permission cannot be assigned or removed");
        }

        List<Sharing> sharings = new ArrayList<>();

        // Adding permission for the specified users/groups for the specified entity
        LinkedList<Entity> temp = new LinkedList<>();
        for (String userId : groupOrUserList) {
            Sharing sharing = new Sharing();
            sharing.setPermissionTypeId(permissionTypeId);
            sharing.setEntityId(entityId);
            sharing.setGroupId(userId);
            sharing.setInheritedParentId(entityId);
            sharing.setDomainId(domainId);
            if (cascadePermission) {
                sharing.setSharingType(SharingType.DIRECT_CASCADING);
            } else {
                sharing.setSharingType(SharingType.DIRECT_NON_CASCADING);
            }
            sharing.setCreatedTime(System.currentTimeMillis());
            sharing.setUpdatedTime(System.currentTimeMillis());

            sharings.add(sharing);
        }

        if (cascadePermission) {
            // Adding permission for the specified users/groups for all child entities
            (new EntityRepository()).getChildEntities(domainId, entityId).stream()
                    .forEach(e -> temp.addLast(e));
            while (temp.size() > 0) {
                Entity entity = temp.pop();
                String childEntityId = entity.entityId;
                for (String userId : groupOrUserList) {
                    Sharing sharing = new Sharing();
                    sharing.setPermissionTypeId(permissionTypeId);
                    sharing.setEntityId(childEntityId);
                    sharing.setGroupId(userId);
                    sharing.setInheritedParentId(entityId);
                    sharing.setSharingType(SharingType.INDIRECT_CASCADING);
                    sharing.setInheritedParentId(entityId);
                    sharing.setDomainId(domainId);
                    sharing.setCreatedTime(System.currentTimeMillis());
                    sharing.setUpdatedTime(System.currentTimeMillis());

                    sharings.add(sharing);

                    (new EntityRepository()).getChildEntities(domainId, childEntityId).stream()
                            .forEach(e -> temp.addLast(e));
                }
            }
        }
        (new SharingRepository()).create(sharings);

        EntityPK entityPK = new EntityPK();
        entityPK.setDomainId(domainId);
        entityPK.setEntityId(entityId);
        Entity entity = (new EntityRepository()).get(entityPK);
        entity.setSharedCount((new SharingRepository()).getSharedCount(domainId, entityId));
        (new EntityRepository()).update(entity);
        return true;
    } catch (Throwable ex) {
        logger.error(ex.getMessage(), ex);
        throw new SharingRegistryException()
                .setMessage(ex.getMessage() + " Stack trace:" + ExceptionUtils.getStackTrace(ex));
    }
}