Example usage for org.apache.commons.lang3.tuple Pair getKey


Introduction

On this page you can find example usage for org.apache.commons.lang3.tuple Pair getKey.

Prototype

@Override
public final L getKey() 


Document

Gets the key from this pair.

This method implements the Map.Entry interface, returning the left element as the key.
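
As a minimal sketch of this contract (class name and values are invented for illustration), the snippet below shows that getKey() returns the same element as getLeft(), and that a Pair can be used directly wherever a Map.Entry is expected:

import java.util.Map;

import org.apache.commons.lang3.tuple.Pair;

public class PairGetKeyDemo {
    public static void main(String[] args) {
        Pair<String, Integer> pair = Pair.of("answer", 42);

        // getKey() returns the left element, satisfying the Map.Entry contract
        System.out.println(pair.getKey());   // answer
        System.out.println(pair.getLeft());  // answer (same element)

        // Pair implements Map.Entry, so it can be assigned to one directly
        Map.Entry<String, Integer> entry = pair;
        System.out.println(entry.getKey() + "=" + entry.getValue()); // answer=42
    }
}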

Usage

From source file:com.linkedin.pinot.routing.builder.GeneratorBasedRoutingTableBuilder.java

@Override
public List<ServerToSegmentSetMap> computeRoutingTableFromExternalView(String tableName,
        ExternalView externalView, List<InstanceConfig> instanceConfigList) {
    // The default routing table algorithm tries to balance all available segments across all servers, so that each
    // server is hit on every query. This works fine with small clusters (say less than 20 servers) but for larger
    // clusters, this adds up to significant overhead (one request must be enqueued for each server, processed,
    // returned, deserialized, aggregated, etc.).
    //
    // For large clusters, we want to avoid hitting every server, as this also has an adverse effect on client tail
    // latency. This is due to the fact that a query cannot return until it has received a response from each server,
    // and the greater the number of servers that are hit, the more likely it is that one of the servers will be a
    // straggler (eg. due to contention for query processing threads, GC, etc.). We also want to balance the segments
    // within any given routing table so that each server in the routing table has approximately the same number of
    // segments to process.
    //
    // To do so, we have a routing table generator that generates routing tables by picking a random subset of servers.
    // With this set of servers, we check if the set of segments served by these servers is complete. If the set of
    // segments served does not cover all of the segments, we compute the list of missing segments and pick a random
    // server that serves these missing segments until we have complete coverage of all the segments.
    //
    // We then order the segments in ascending number of replicas within our server set, in order to allocate the
    // segments with fewer replicas first. This ensures that segments that are 'easier' to allocate are more likely to
    // end up on a replica with fewer segments.
    //
    // Then, we pick a random replica for each segment, iterating from fewest replicas to most replicas, inversely
    // weighted by the number of segments already assigned to that replica. This ensures that we build a routing table
    // that's as even as possible.
    //
    // The algorithm to generate a routing table is thus:
    // 1. Compute the inverse external view, a mapping of servers to segments
    // 2. For each routing table to generate:
    //   a) Pick TARGET_SERVER_COUNT_PER_QUERY distinct servers
    //   b) Check if the server set covers all the segments; if not, add additional servers until it does.
    //   c) Order the segments in our server set in ascending order of number of replicas present in our server set
    //   d) For each segment, pick a random replica with proper weighting
    //   e) Return that routing table
    //
    // Given that we can generate routing tables at will, we then generate many routing tables and use them to optimize
    // according to two criteria: the variance in workload per server for any individual table as well as the variance
    // in workload per server across all the routing tables. To do so, we generate an initial set of routing tables
    // according to a per-routing table metric and discard the worst routing tables.

    RoutingTableGenerator routingTableGenerator = buildRoutingTableGenerator();
    routingTableGenerator.init(externalView, instanceConfigList);

    PriorityQueue<Pair<Map<String, Set<String>>, Float>> topRoutingTables = new PriorityQueue<>(
            ROUTING_TABLE_COUNT, new Comparator<Pair<Map<String, Set<String>>, Float>>() {
                @Override
                public int compare(Pair<Map<String, Set<String>>, Float> left,
                        Pair<Map<String, Set<String>>, Float> right) {
                    // Float.compare sorts in ascending order and we want a max heap, so we need to return the negative of the comparison
                    return -Float.compare(left.getValue(), right.getValue());
                }
            });

    for (int i = 0; i < ROUTING_TABLE_COUNT; i++) {
        topRoutingTables.add(generateRoutingTableWithMetric(routingTableGenerator));
    }

    // Generate more routing tables and keep the top ROUTING_TABLE_COUNT ones
    for (int i = 0; i < (ROUTING_TABLE_GENERATION_COUNT - ROUTING_TABLE_COUNT); ++i) {
        Pair<Map<String, Set<String>>, Float> newRoutingTable = generateRoutingTableWithMetric(
                routingTableGenerator);
        Pair<Map<String, Set<String>>, Float> worstRoutingTable = topRoutingTables.peek();

        // If the new routing table is better than the worst one, keep it
        if (newRoutingTable.getRight() < worstRoutingTable.getRight()) {
            topRoutingTables.poll();
            topRoutingTables.add(newRoutingTable);
        }
    }

    // Return the best routing tables
    List<ServerToSegmentSetMap> routingTables = new ArrayList<>(topRoutingTables.size());
    while (!topRoutingTables.isEmpty()) {
        Pair<Map<String, Set<String>>, Float> routingTableWithMetric = topRoutingTables.poll();
        routingTables.add(new ServerToSegmentSetMap(routingTableWithMetric.getKey()));
    }

    return routingTables;
}
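
The bounded top-N selection described in the comments above is easier to see in isolation. In the minimal sketch below the candidate names and scores are invented, and lower scores are assumed to be better; the pattern is a max heap keyed on the Pair's value, so peek() always exposes the worst candidate retained so far and can be evicted cheaply:

import java.util.PriorityQueue;

import org.apache.commons.lang3.tuple.Pair;

public class TopNSketch {
    public static void main(String[] args) {
        final int n = 3;
        // Max heap on the score: peek() returns the worst candidate kept so far
        PriorityQueue<Pair<String, Float>> topN = new PriorityQueue<>(n,
                (left, right) -> -Float.compare(left.getValue(), right.getValue()));

        float[] scores = { 5.0f, 2.0f, 7.0f, 1.0f, 4.0f }; // hypothetical metrics
        for (int i = 0; i < scores.length; i++) {
            Pair<String, Float> candidate = Pair.of("table-" + i, scores[i]);
            if (topN.size() < n) {
                topN.add(candidate);
            } else if (candidate.getValue() < topN.peek().getValue()) {
                topN.poll(); // evict the current worst candidate
                topN.add(candidate);
            }
        }

        while (!topN.isEmpty()) {
            Pair<String, Float> kept = topN.poll();
            System.out.println(kept.getKey() + " -> " + kept.getValue());
        }
    }
}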

From source file:com.kantenkugel.kanzebot.api.command.CommandGroup.java

@Override
public boolean handleGuild(TextChannel channel, User author, Message fullMessage, String args,
        Object[] customArgs) {
    if (args.length() == 0)
        return false;
    String[] split = args.split("\\s+", 2);
    Pair<Command, ArgParser> sub = subCommands.get(split[0]);
    if (sub != null) {
        if (sub.getValue() != null) {
            ArgParser.ParserResult parserResult = sub.getValue().parseArgs(channel.getJDA(), channel, args);
            if (parserResult.getError() != null) {
                MessageUtil.sendMessage(channel,
                        parserResult.getError() + "\nUsage:\n" + sub.getKey().getUsage());
                return true;
            }
            customArgs = parserResult.getArgs();
        }
        return sub.getKey().handleGuild(channel, author, fullMessage, args, customArgs);
    } else {
        return false;
    }
}

From source file:com.uber.hoodie.utilities.deltastreamer.HoodieDeltaStreamer.java

private void sync() throws Exception {

    // Retrieve the previous round checkpoints, if any
    Optional<String> resumeCheckpointStr = Optional.empty();
    if (commitTimelineOpt.isPresent()) {
        Optional<HoodieInstant> lastCommit = commitTimelineOpt.get().lastInstant();
        if (lastCommit.isPresent()) {
            HoodieCommitMetadata commitMetadata = HoodieCommitMetadata
                    .fromBytes(commitTimelineOpt.get().getInstantDetails(lastCommit.get()).get());
            if (commitMetadata.getMetadata(CHECKPOINT_KEY) != null) {
                resumeCheckpointStr = Optional.of(commitMetadata.getMetadata(CHECKPOINT_KEY));
            } else {
                throw new HoodieDeltaStreamerException(
                        "Unable to find previous checkpoint. Please double check if this table "
                                + "was indeed built via delta streamer ");
            }
        }
    } else {
        Properties properties = new Properties();
        properties.put(HoodieWriteConfig.TABLE_NAME, cfg.targetTableName);
        HoodieTableMetaClient.initializePathAsHoodieDataset(FSUtils.getFs(), cfg.targetBasePath, properties);
    }
    log.info("Checkpoint to resume from : " + resumeCheckpointStr);

    // Pull the data from the source & prepare the write
    Pair<Optional<JavaRDD<GenericRecord>>, String> dataAndCheckpoint = source.fetchNewData(resumeCheckpointStr,
            cfg.maxInputBytes);

    if (!dataAndCheckpoint.getKey().isPresent()) {
        log.info("No new data, nothing to commit.. ");
        return;
    }

    JavaRDD<GenericRecord> avroRDD = dataAndCheckpoint.getKey().get();
    JavaRDD<HoodieRecord> records = avroRDD.map(gr -> {
        HoodieRecordPayload payload = UtilHelpers.createPayload(cfg.payloadClassName, gr,
                (Comparable) gr.get(cfg.sourceOrderingField));
        return new HoodieRecord<>(keyGenerator.getKey(gr), payload);
    });

    // Perform the write
    HoodieWriteConfig hoodieCfg = getHoodieClientConfig(cfg.hoodieClientProps);
    HoodieWriteClient client = new HoodieWriteClient<>(jssc, hoodieCfg);
    String commitTime = client.startCommit();
    log.info("Starting commit  : " + commitTime);

    JavaRDD<WriteStatus> writeStatusRDD;
    if (cfg.operation == Operation.INSERT) {
        writeStatusRDD = client.insert(records, commitTime);
    } else if (cfg.operation == Operation.UPSERT) {
        writeStatusRDD = client.upsert(records, commitTime);
    } else {
        throw new HoodieDeltaStreamerException("Unknown operation :" + cfg.operation);
    }

    // Simply commit for now. TODO(vc): Support better error handlers later on
    HashMap<String, String> checkpointCommitMetadata = new HashMap<>();
    checkpointCommitMetadata.put(CHECKPOINT_KEY, dataAndCheckpoint.getValue());

    boolean success = client.commit(commitTime, writeStatusRDD, Optional.of(checkpointCommitMetadata));
    if (success) {
        log.info("Commit " + commitTime + " successful!");
        // TODO(vc): Kick off hive sync from here.

    } else {
        log.info("Commit " + commitTime + " failed!");
    }
    client.close();
}

From source file:com.microsoft.azure.storage.queue.QueueEncryptionPolicy.java

/**
 * Return an encrypted base64 encoded message along with encryption related metadata given a plain text message.
 *
 * @param inputMessage
 *            The input message in bytes.
 * @return The encrypted message that will be uploaded to the service.
 * @throws StorageException
 *             An exception representing any error which occurred during the operation.
 */
String encryptMessage(byte[] inputMessage) throws StorageException {
    Utility.assertNotNull("inputMessage", inputMessage);

    if (this.keyWrapper == null) {
        throw new IllegalArgumentException(SR.KEY_MISSING);
    }

    CloudQueueEncryptedMessage encryptedMessage = new CloudQueueEncryptedMessage();
    EncryptionData encryptionData = new EncryptionData();
    encryptionData.setEncryptionAgent(new EncryptionAgent(Constants.EncryptionConstants.ENCRYPTION_PROTOCOL_V1,
            EncryptionAlgorithm.AES_CBC_256));

    try {
        KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(256);

        Cipher myAes = Cipher.getInstance("AES/CBC/PKCS5Padding");
        SecretKey aesKey = keyGen.generateKey();
        myAes.init(Cipher.ENCRYPT_MODE, aesKey);

        // Wrap key
        Pair<byte[], String> encryptedKey = this.keyWrapper
                .wrapKeyAsync(aesKey.getEncoded(), null /* algorithm */).get();
        encryptionData.setWrappedContentKey(new WrappedContentKey(this.keyWrapper.getKid(),
                encryptedKey.getKey(), encryptedKey.getValue()));

        encryptedMessage.setEncryptedMessageContents(
                new String(Base64.encode(myAes.doFinal(inputMessage, 0, inputMessage.length))));

        encryptionData.setContentEncryptionIV(myAes.getIV());
        encryptedMessage.setEncryptionData(encryptionData);
        return encryptedMessage.serialize();
    } catch (Exception e) {
        throw StorageException.translateClientException(e);
    }
}

From source file:hu.ppke.itk.nlpg.purepos.decoder.AbstractDecoder.java

protected List<Pair<List<Integer>, Double>> cleanResults(List<Pair<List<Integer>, Double>> tagSeqList) {
    List<Pair<List<Integer>, Double>> ret = new ArrayList<Pair<List<Integer>, Double>>();
    for (Pair<List<Integer>, Double> element : tagSeqList) {
        List<Integer> tagSeq = element.getKey();
        List<Integer> newTagSeq = tagSeq.subList(0, tagSeq.size() - 1);
        ret.add(Pair.of(newTagSeq, element.getValue()));
    }
    return ret;
}

From source file:com.streamsets.pipeline.stage.processor.hbase.HBaseLookupProcessor.java

private void handleEmptyKey(Record record, Pair<String, HBaseColumn> key) throws StageException {
    if (conf.ignoreMissingFieldPath) {
        LOG.debug(Errors.HBASE_41.getMessage(), record, key.getKey(),
                Bytes.toString(key.getValue().getCf()) + ":" + Bytes.toString(key.getValue().getQualifier()),
                key.getValue().getTimestamp());
    } else {
        LOG.error(Errors.HBASE_41.getMessage(), record, key.getKey(),
                Bytes.toString(key.getValue().getCf()) + ":" + Bytes.toString(key.getValue().getQualifier()),
                key.getValue().getTimestamp());
        errorRecordHandler.onError(new OnRecordErrorException(record, Errors.HBASE_41, record, key.getKey(),
                Bytes.toString(key.getValue().getCf()) + ":" + Bytes.toString(key.getValue().getQualifier()),
                key.getValue().getTimestamp()));
    }
}

From source file:blusunrize.immersiveengineering.common.blocks.metal.TileEntityConnectorLV.java

private void notifyAvailableEnergy(int energyStored, @Nullable Set<AbstractConnection> outputs) {
    if (outputs == null)
        outputs = ImmersiveNetHandler.INSTANCE.getIndirectEnergyConnections(pos, world, true);
    for (AbstractConnection con : outputs) {
        IImmersiveConnectable end = ApiUtils.toIIC(con.end, world);
        if (con.cableType != null && end != null && end.allowEnergyToPass(null)) {
            Pair<Float, Consumer<Float>> e = getEnergyForConnection(con);
            end.addAvailableEnergy(e.getKey(), e.getValue());
        }
    }
}

From source file:com.galenframework.speclang2.reader.pagespec.PageSectionProcessor.java

private void processSectionRule(PageSection section, StructNode ruleNode) throws IOException {
    String ruleText = ruleNode.getName().substring(1).trim();

    Pair<PageRule, Map<String, String>> rule = findAndProcessRule(ruleText, ruleNode);

    PageSection ruleSection = new PageSection(ruleText);
    section.addSubSection(ruleSection);

    List<StructNode> resultingNodes;
    try {
        resultingNodes = rule.getKey().apply(pageSpecHandler, ruleText, NO_OBJECT_NAME, rule.getValue());
    } catch (Exception ex) {
        throw new SyntaxException(ruleNode, "Error processing custom rule", ex);
    }
    processSection(ruleSection, resultingNodes);
}

From source file:com.nextdoor.bender.operation.substitution.field.FieldSubstitution.java

@Override
protected void doSubstitution(InternalEvent ievent, DeserializedEvent devent, Map<String, Object> nested) {
    Pair<String, Object> kv;
    try {
        kv = getFieldAndSource(devent, srcFields, false);
    } catch (FieldNotFoundException e) {
        if (this.failSrcNotFound) {
            throw new OperationException(e);
        }
        return;
    }

    nested.put(this.key, kv.getValue());
    /*
     * Remove source field
     */
    if (this.removeSrcField) {
        devent.deleteField(kv.getKey());
    }
}

From source file:com.galenframework.speclang2.pagespec.PageSectionProcessor.java

private void processSectionRule(PageSection section, StructNode ruleNode) throws IOException {
    String ruleText = ruleNode.getName().substring(1).trim();

    Pair<PageRule, Map<String, String>> rule = findAndProcessRule(ruleText, ruleNode);

    PageSection ruleSection = new PageSection(ruleText, ruleNode.getPlace());
    section.addSubSection(ruleSection);

    List<StructNode> resultingNodes;
    try {
        resultingNodes = rule.getKey().apply(pageSpecHandler, ruleText, NO_OBJECT_NAME, rule.getValue(),
                ruleNode.getChildNodes());
        processSection(ruleSection, resultingNodes);
    } catch (Exception ex) {
        throw new SyntaxException(ruleNode, "Error processing rule: " + ruleText, ex);
    }
}