Example usage for org.apache.commons.lang3.tuple Pair getValue

List of usage examples for org.apache.commons.lang3.tuple Pair getValue

Introduction

On this page you can find example usage for org.apache.commons.lang3.tuple Pair getValue.

Prototype

@Override
public R getValue() 

Document

Gets the value from this pair.

This method implements the Map.Entry interface, returning the right element as the value.
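
A minimal sketch of these accessors (the pair contents below are made up for illustration):

import java.util.Map;

import org.apache.commons.lang3.tuple.Pair;

public class PairGetValueExample {
    public static void main(String[] args) {
        Pair<String, Integer> pair = Pair.of("answer", 42);

        // getValue() returns the right element, mirroring Map.Entry
        Integer value = pair.getValue(); // 42, same as pair.getRight()
        String key = pair.getKey();      // "answer", same as pair.getLeft()

        // Because Pair implements Map.Entry, it can be passed wherever an entry is expected
        Map.Entry<String, Integer> entry = pair;
        System.out.println(entry.getKey() + " = " + entry.getValue());
    }
}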

Usage

From source file:com.spotify.heroic.cluster.CoreClusterManager.java

@Override
public List<ClusterShard> useOptionalGroup(final Optional<String> group) {
    final ImmutableList.Builder<ClusterShard> shards = ImmutableList.builder();

    for (final Pair<Map<String, String>, List<ClusterNode>> e : findFromAllShards()) {
        shards.add(new ClusterShard(async, e.getKey(),
                ImmutableList.copyOf(e.getValue().stream().map(c -> c.useOptionalGroup(group)).iterator())));
    }

    return shards.build();
}

From source file:com.garethahealy.quotalimitsgenerator.cli.parsers.YamlTemplateProcessor.java

public void process(QuotaLimitModel quotaLimitModel) throws IOException, TemplateException {
    Map<String, QuotaLimitModel> root = new HashMap<String, QuotaLimitModel>();
    root.put("model", quotaLimitModel);

    if (!new File(quotaLimitModel.getOutputPath().toString()).mkdirs()) {
        throw new IOException("Failed to create directory for; " + quotaLimitModel.getOutputPath().toString());
    }

    for (Pair<String, Template> current : templates) {
        LOG.info("{}/{}.yaml", quotaLimitModel.getOutputPath().toString(), current.getKey());

        File yamlFile = new File(quotaLimitModel.getOutputPath().toString() + "/" + current.getKey() + ".yaml");
        if (!yamlFile.createNewFile()) {
            throw new IOException("Failed to create file for; " + current.getKey());
        }

        // try-with-resources closes the writer even if template processing fails
        try (OutputStreamWriter writer = new OutputStreamWriter(new FileOutputStream(yamlFile),
                Charset.forName("UTF-8"))) {
            current.getValue().process(root, writer);
        }
    }
}

From source file:com.linkedin.pinot.routing.builder.KafkaLowLevelConsumerRoutingTableBuilder.java

@Override
public List<ServerToSegmentSetMap> computeRoutingTableFromExternalView(String tableName,
        ExternalView externalView, List<InstanceConfig> instanceConfigList) {
    // We build the routing table based off the external view here. What we want to do is to make sure that we uphold
    // the guarantees clients expect (no duplicate records, eventual consistency) and spread the load as evenly as
    // possible between the servers.
    //
    // Each Kafka partition contains a fraction of the data, so we need to make sure that we query all partitions.
    // Because in certain unlikely degenerate scenarios, we can consume overlapping data until segments are flushed (at
    // which point the overlapping data is discarded during the reconciliation process with the controller), we need to
    // ensure that the query that is sent has only one partition in CONSUMING state in order to avoid duplicate records.
    //
    // Because we also want to spread the load as evenly as possible between servers, we use a weighted random
    // replica selection that favors picking replicas with fewer segments assigned to them, thus having an approximately
    // equal distribution of load between servers.
    //
    // For example, given three replicas with 1, 2 and 3 segments assigned to each, the replica with one segment should
    // have a weight of 2, which is the maximum segment count minus the segment count for that replica. Thus, each
    // replica other than the replica(s) with the maximum segment count should have a chance of getting a segment
    // assigned to it. This corresponds to alternative three below:
    //
    // Alternative 1 (weight is sum of segment counts - segment count in that replica):
    // (6 - 1) = 5 -> P(0.4166)
    // (6 - 2) = 4 -> P(0.3333)
    // (6 - 3) = 3 -> P(0.2500)
    //
    // Alternative 2 (weight is max of segment counts - segment count in that replica + 1):
    // (3 - 1) + 1 = 3 -> P(0.5000)
    // (3 - 2) + 1 = 2 -> P(0.3333)
    // (3 - 3) + 1 = 1 -> P(0.1666)
    //
    // Alternative 3 (weight is max of segment counts - segment count in that replica):
    // (3 - 1) = 2 -> P(0.6666)
    // (3 - 2) = 1 -> P(0.3333)
    // (3 - 3) = 0 -> P(0.0000)
    //
    // Of those three weighting alternatives, the third one has the smallest standard deviation of the number of
    // segments assigned per replica, so it corresponds to the weighting strategy used for segment assignment. Empirical
    // testing shows that for 20 segments and three replicas, the standard deviation of each alternative is respectively
    // 2.112, 1.496 and 0.853.
    //
    // This algorithm works as follows:
    // 1. Gather all segments and group them by Kafka partition, sorted by sequence number
    // 2. Ensure that for each Kafka partition, we have at most one segment in CONSUMING state
    // 3. Sort all the segments to be used during assignment in ascending order of replicas
    // 4. For each segment to be used during assignment, pick a random replica, weighted by the number of segments
    //    assigned to each replica.

    // 1. Gather all segments and group them by Kafka partition, sorted by sequence number
    Map<String, SortedSet<SegmentName>> sortedSegmentsByKafkaPartition = new HashMap<String, SortedSet<SegmentName>>();
    for (String helixPartitionName : externalView.getPartitionSet()) {
        // Ignore segments that are not low level consumer segments
        if (!SegmentNameBuilder.Realtime.isRealtimeV2Name(helixPartitionName)) {
            continue;
        }

        final LLCSegmentName segmentName = new LLCSegmentName(helixPartitionName);
        String kafkaPartitionName = segmentName.getPartitionRange();
        SortedSet<SegmentName> segmentsForPartition = sortedSegmentsByKafkaPartition.get(kafkaPartitionName);

        // Create sorted set if necessary
        if (segmentsForPartition == null) {
            segmentsForPartition = new TreeSet<SegmentName>();

            sortedSegmentsByKafkaPartition.put(kafkaPartitionName, segmentsForPartition);
        }

        segmentsForPartition.add(segmentName);
    }

    // 2. Ensure that for each Kafka partition, we have at most one Helix partition (Pinot segment) in consuming state
    Map<String, SegmentName> allowedSegmentInConsumingStateByKafkaPartition = new HashMap<String, SegmentName>();
    for (String kafkaPartition : sortedSegmentsByKafkaPartition.keySet()) {
        SortedSet<SegmentName> sortedSegmentsForKafkaPartition = sortedSegmentsByKafkaPartition
                .get(kafkaPartition);
        SegmentName lastAllowedSegmentInConsumingState = null;

        for (SegmentName segmentName : sortedSegmentsForKafkaPartition) {
            Map<String, String> helixPartitionState = externalView.getStateMap(segmentName.getSegmentName());
            boolean allInConsumingState = true;
            int replicasInConsumingState = 0;

            // Only keep the segment if all replicas have it in CONSUMING state
            for (String externalViewState : helixPartitionState.values()) {
                // Ignore ERROR state
                if (externalViewState.equalsIgnoreCase(
                        CommonConstants.Helix.StateModel.RealtimeSegmentOnlineOfflineStateModel.ERROR)) {
                    continue;
                }

                // Not all replicas of this segment are in CONSUMING state, so this segment cannot be the one
                // assigned to CONSUMING replicas
                if (externalViewState.equalsIgnoreCase(
                        CommonConstants.Helix.StateModel.RealtimeSegmentOnlineOfflineStateModel.ONLINE)) {
                    allInConsumingState = false;
                    break;
                }

                // Otherwise count the replica as being in CONSUMING state
                if (externalViewState.equalsIgnoreCase(
                        CommonConstants.Helix.StateModel.RealtimeSegmentOnlineOfflineStateModel.CONSUMING)) {
                    replicasInConsumingState++;
                }
            }

            // If all replicas have this segment in consuming state (and not all of them are in ERROR state), then pick this
            // segment to be the last allowed segment to be in CONSUMING state
            if (allInConsumingState && 0 < replicasInConsumingState) {
                lastAllowedSegmentInConsumingState = segmentName;
                break;
            }
        }

        if (lastAllowedSegmentInConsumingState != null) {
            allowedSegmentInConsumingStateByKafkaPartition.put(kafkaPartition,
                    lastAllowedSegmentInConsumingState);
        }
    }

    // 3. Sort all the segments to be used during assignment in ascending order of replicas

    // PriorityQueue throws IllegalArgumentException when given a size of zero
    int segmentCount = Math.max(externalView.getPartitionSet().size(), 1);
    PriorityQueue<Pair<String, Set<String>>> segmentToReplicaSetQueue = new PriorityQueue<Pair<String, Set<String>>>(
            segmentCount, new Comparator<Pair<String, Set<String>>>() {
                @Override
                public int compare(Pair<String, Set<String>> firstPair, Pair<String, Set<String>> secondPair) {
                    return Integer.compare(firstPair.getRight().size(), secondPair.getRight().size());
                }
            });
    RoutingTableInstancePruner instancePruner = new RoutingTableInstancePruner(instanceConfigList);

    for (Map.Entry<String, SortedSet<SegmentName>> entry : sortedSegmentsByKafkaPartition.entrySet()) {
        String kafkaPartition = entry.getKey();
        SortedSet<SegmentName> segmentNames = entry.getValue();

        // The only segment name allowed to be in CONSUMING state, or null if there is none
        SegmentName validConsumingSegment = allowedSegmentInConsumingStateByKafkaPartition.get(kafkaPartition);

        for (SegmentName segmentName : segmentNames) {
            Set<String> validReplicas = new HashSet<String>();
            Map<String, String> externalViewState = externalView.getStateMap(segmentName.getSegmentName());

            for (Map.Entry<String, String> instanceAndStateEntry : externalViewState.entrySet()) {
                String instance = instanceAndStateEntry.getKey();
                String state = instanceAndStateEntry.getValue();

                // Skip pruned replicas (shutting down or otherwise disabled)
                if (instancePruner.isInactive(instance)) {
                    continue;
                }

                // Replicas in ONLINE state are always allowed
                if (state.equalsIgnoreCase(
                        CommonConstants.Helix.StateModel.RealtimeSegmentOnlineOfflineStateModel.ONLINE)) {
                    validReplicas.add(instance);
                    continue;
                }

                // Replicas in CONSUMING state are only allowed on the last segment
                if (state.equalsIgnoreCase(
                        CommonConstants.Helix.StateModel.RealtimeSegmentOnlineOfflineStateModel.CONSUMING)
                        && segmentName.equals(validConsumingSegment)) {
                    validReplicas.add(instance);
                }
            }

            segmentToReplicaSetQueue
                    .add(new ImmutablePair<String, Set<String>>(segmentName.getSegmentName(), validReplicas));

            // If this segment is the segment allowed in CONSUMING state, don't process segments after it in that Kafka
            // partition
            if (segmentName.equals(validConsumingSegment)) {
                break;
            }
        }
    }

    // 4. For each segment to be used during assignment, pick a random replica, weighted by the number of segments
    //    assigned to each replica.
    List<ServerToSegmentSetMap> routingTables = new ArrayList<ServerToSegmentSetMap>(routingTableCount);
    for (int i = 0; i < routingTableCount; ++i) {
        Map<String, Set<String>> instanceToSegmentSetMap = new HashMap<String, Set<String>>();

        PriorityQueue<Pair<String, Set<String>>> segmentToReplicaSetQueueCopy = new PriorityQueue<Pair<String, Set<String>>>(
                segmentToReplicaSetQueue);

        while (!segmentToReplicaSetQueueCopy.isEmpty()) {
            Pair<String, Set<String>> segmentAndValidReplicaSet = segmentToReplicaSetQueueCopy.poll();
            String segment = segmentAndValidReplicaSet.getKey();
            Set<String> validReplicaSet = segmentAndValidReplicaSet.getValue();

            String replica = pickWeightedRandomReplica(validReplicaSet, instanceToSegmentSetMap);
            if (replica != null) {
                Set<String> segmentsForInstance = instanceToSegmentSetMap.get(replica);

                if (segmentsForInstance == null) {
                    segmentsForInstance = new HashSet<String>();
                    instanceToSegmentSetMap.put(replica, segmentsForInstance);
                }

                segmentsForInstance.add(segment);
            }
        }

        routingTables.add(new ServerToSegmentSetMap(instanceToSegmentSetMap));
    }

    return routingTables;
}
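
The pickWeightedRandomReplica helper called in step 4 is not included in this excerpt. Below is a minimal sketch of the alternative-3 weighting described in the comment above (weight = maximum segment count minus the replica's segment count); the method name, signature, Random parameter, and uniform fallback are assumptions for illustration, not the original Pinot implementation (java.util imports assumed):

// Hypothetical reconstruction of the alternative-3 weighting; not the original Pinot code
private static String pickWeightedRandomReplicaSketch(Set<String> validReplicas,
        Map<String, Set<String>> instanceToSegmentSetMap, Random random) {
    if (validReplicas.isEmpty()) {
        return null;
    }

    // Find the maximum segment count across the candidate replicas
    int maxSegmentCount = 0;
    for (String replica : validReplicas) {
        Set<String> segments = instanceToSegmentSetMap.get(replica);
        maxSegmentCount = Math.max(maxSegmentCount, segments == null ? 0 : segments.size());
    }

    // Weight each replica by (max segment count - its segment count)
    int totalWeight = 0;
    Map<String, Integer> weights = new LinkedHashMap<String, Integer>();
    for (String replica : validReplicas) {
        Set<String> segments = instanceToSegmentSetMap.get(replica);
        int weight = maxSegmentCount - (segments == null ? 0 : segments.size());
        weights.put(replica, weight);
        totalWeight += weight;
    }

    // All replicas tied: every weight is zero, so fall back to a uniform pick
    if (totalWeight == 0) {
        return new ArrayList<String>(validReplicas).get(random.nextInt(validReplicas.size()));
    }

    // Walk the cumulative weights until the random draw falls inside a replica's bucket
    int draw = random.nextInt(totalWeight);
    for (Map.Entry<String, Integer> entry : weights.entrySet()) {
        draw -= entry.getValue();
        if (draw < 0) {
            return entry.getKey();
        }
    }
    return null; // unreachable
}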

From source file:com.streamsets.pipeline.stage.processor.hbase.HBaseLookupProcessor.java

private Set<Pair<String, HBaseColumn>> getKeyColumnListMap(Batch batch) throws StageException {
    Iterator<Record> records;
    records = batch.getRecords();
    Record record;
    Set<Pair<String, HBaseColumn>> keyList = new HashSet<>();
    while (records.hasNext()) {
        record = records.next();
        for (HBaseLookupParameterConfig parameters : conf.lookups) {
            Pair<String, HBaseColumn> key = getKey(record, parameters);
            if (key != null && !key.getKey().trim().isEmpty()) {
                keyList.add(key);
            } else if (key == null) {
                LOG.debug("No key on Record '{}'", record);
            } else {
                // key is non-null here, so the detailed logging below cannot throw an NPE
                LOG.debug("No key on Record '{}' with key:'{}', column:'{}', timestamp:'{}'", record,
                        key.getKey(), Bytes.toString(key.getValue().getCf()) + ":"
                                + Bytes.toString(key.getValue().getQualifier()),
                        key.getValue().getTimestamp());
            }
        }
    }
    return keyList;
}

From source file:eu.openminted.registry.service.generate.WorkflowOutputMetadataGenerate.java

protected PersonInfo generatePersonInfo(String userId, boolean addOMTDPersonId)
        throws JsonParseException, JsonMappingException, IOException {
    PersonInfo personInfo = new PersonInfo();

    // Retrieve user information from aai service
    int coId = aaiUserInfoRetriever.getCoId(userId);
    Pair<String, String> userNames = aaiUserInfoRetriever.getSurnameGivenName(coId);
    String surname = userNames.getKey();
    String givenName = userNames.getValue();
    String email = aaiUserInfoRetriever.getEmail(coId);

    // User's name
    personInfo.setSurname(surname);
    personInfo.setGivenName(givenName);

    if (addOMTDPersonId) {
        // Identifiers
        List<PersonIdentifier> personIdentifiers = new ArrayList<>();
        PersonIdentifier personID = new PersonIdentifier();
        personID.setValue(userId);
        personID.setPersonIdentifierSchemeName(PersonIdentifierSchemeNameEnum.OTHER);
        personIdentifiers.add(personID);
        personInfo.setPersonIdentifiers(personIdentifiers);
    }

    // User's communication info
    CommunicationInfo communicationInfo = new CommunicationInfo();
    List<String> emails = new ArrayList<>();
    emails.add(email);
    communicationInfo.setEmails(emails);
    personInfo.setCommunicationInfo(communicationInfo);
    logger.info("Person info as retrieved from aai :: " + mapper.writeValueAsString(personInfo));
    return personInfo;
}

From source file:com.github.jknack.handlebars.cache.ConcurrentMapTemplateCache.java

/**
 * Get/Parse a template source.
 *
 * @param source The template source.
 * @param parser The parser.
 * @return A Handlebars template.
 * @throws IOException If we can't read input.
 */
private Template cacheGet(final TemplateSource source, final Parser parser) throws IOException {
    Pair<TemplateSource, Template> entry = cache.get(source);
    if (entry == null) {
        logger.debug("Loading: {}", source);
        entry = Pair.of(source, parser.parse(source));
        cache.put(source, entry);
    } else if (source.lastModified() != entry.getKey().lastModified()) {
        // remove current entry.
        evict(source);
        logger.debug("Reloading: {}", source);
        entry = Pair.of(source, parser.parse(source));
        cache.put(source, entry);
    } else {
        logger.debug("Found in cache: {}", source);
    }
    return entry.getValue();
}
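
The Pair here doubles as a versioned cache entry: getKey() keeps the source so lastModified() can be compared on the next lookup, and getValue() carries the parsed template. Below is a stripped-down sketch of the same pattern, with hypothetical Source and Artifact types standing in for TemplateSource and Template:

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.function.Function;

import org.apache.commons.lang3.tuple.Pair;

// Hypothetical stand-ins for TemplateSource and Template
interface Source {
    long lastModified();
}

interface Artifact {
}

class PairCacheSketch {
    private final ConcurrentMap<Source, Pair<Source, Artifact>> cache = new ConcurrentHashMap<>();

    Artifact get(Source source, Function<Source, Artifact> parse) {
        Pair<Source, Artifact> entry = cache.get(source);
        // Re-parse when the entry is missing or the cached source has gone stale
        if (entry == null || entry.getKey().lastModified() != source.lastModified()) {
            entry = Pair.of(source, parse.apply(source));
            cache.put(source, entry);
        }
        return entry.getValue();
    }
}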

From source file:io.pravega.controller.store.stream.AbstractStreamMetadataStore.java

protected AbstractStreamMetadataStore(HostIndex hostIndex) {
    cache = CacheBuilder.newBuilder().maximumSize(10000).refreshAfterWrite(10, TimeUnit.MINUTES)
            .expireAfterWrite(10, TimeUnit.MINUTES).build(new CacheLoader<Pair<String, String>, Stream>() {
                @Override
                @ParametersAreNonnullByDefault
                public Stream load(Pair<String, String> input) {
                    try {
                        return newStream(input.getKey(), input.getValue());
                    } catch (Exception e) {
                        throw new RuntimeException(e);
                    }
                }
            });

    scopeCache = CacheBuilder.newBuilder().maximumSize(1000).refreshAfterWrite(10, TimeUnit.MINUTES)
            .expireAfterWrite(10, TimeUnit.MINUTES).build(new CacheLoader<String, Scope>() {
                @Override
                @ParametersAreNonnullByDefault
                public Scope load(String scopeName) {
                    try {
                        return newScope(scopeName);
                    } catch (Exception e) {
                        throw new RuntimeException(e);
                    }
                }
            });

    this.hostIndex = hostIndex;
}
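
Since the first cache is keyed by Pair<String, String>, a lookup bundles the scope and stream names into one key, which the CacheLoader above unpacks with getKey()/getValue(). A hypothetical lookup against that cache field (the names are made up; assumes the org.apache.commons.lang3.tuple.ImmutablePair import):

// Hypothetical: fetch (or lazily load) stream metadata for scope "sales", stream "orders"
Stream stream = cache.getUnchecked(new ImmutablePair<>("sales", "orders"));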

From source file:com.streamsets.pipeline.stage.processor.hbase.HBaseLookupProcessor.java

private void updateRecord(Record record, HBaseLookupParameterConfig parameter, Pair<String, HBaseColumn> key,
        Optional<String> value) throws JSONException {
    // If the value does not exist in HBase, leave the record unchanged
    if (value == null || !value.isPresent()) {
        LOG.debug("No value found on Record '{}' with key:'{}', column:'{}', timestamp:'{}'", record,
                key.getKey(),
                Bytes.toString(key.getValue().getCf()) + ":" + Bytes.toString(key.getValue().getQualifier()),
                key.getValue().getTimestamp());
        return;
    }

    // If the column expression is empty, store the whole row data (columnName + value) on the record
    if (parameter.columnExpr.isEmpty()) {
        JSONObject json = new JSONObject(value.get());
        Iterator<String> iter = json.keys();
        Map<String, Field> columnMap = new HashMap<>();
        while (iter.hasNext()) {
            String columnName = iter.next();
            String columnValue = json.get(columnName).toString();
            columnMap.put(columnName, Field.create(columnValue));
        }
        record.set(parameter.outputFieldPath, Field.create(columnMap));
    } else {
        record.set(parameter.outputFieldPath, Field.create(value.get()));
    }
}

From source file:alfio.controller.ReservationFlowIntegrationTest.java

@Before
public void ensureConfiguration() {

    IntegrationTestUtil.ensureMinimalConfiguration(configurationRepository);
    List<TicketCategoryModification> categories = Collections.singletonList(new TicketCategoryModification(null,
            "default", AVAILABLE_SEATS, new DateTimeModification(LocalDate.now().minusDays(1), LocalTime.now()),
            new DateTimeModification(LocalDate.now().plusDays(1), LocalTime.now()), DESCRIPTION, BigDecimal.TEN,
            false, "", false, null, null, null, null, null));
    Pair<Event, String> eventAndUser = initEvent(categories, organizationRepository, userManager, eventManager,
            eventRepository);

    event = eventAndUser.getKey();
    user = eventAndUser.getValue() + "_owner";

    //
    TemplateManager templateManager = Mockito.mock(TemplateManager.class);
    reservationApiController = new ReservationApiController(eventRepository, ticketHelper, templateManager,
            i18nManager, euVatChecker, ticketReservationRepository, ticketReservationManager);
}

From source file:com.spotify.heroic.metric.bigtable.BigtableBackend.java

private <T extends Metric> AsyncFuture<WriteMetric> writeBatch(final String columnFamily, final Series series,
        final BigtableDataClient client, final List<T> batch, final Function<T, ByteString> serializer)
        throws IOException {
    // common case for consumers
    if (batch.size() == 1) {
        return writeOne(columnFamily, series, client, batch.get(0), serializer).onFinished(written::mark);
    }

    final List<Pair<RowKey, Mutations>> saved = new ArrayList<>();
    final Map<RowKey, Mutations.Builder> building = new HashMap<>();

    for (final T d : batch) {
        final long timestamp = d.getTimestamp();
        final long base = base(timestamp);
        final long offset = offset(timestamp);

        final RowKey rowKey = new RowKey(series, base);

        Mutations.Builder builder = building.get(rowKey);

        final ByteString offsetBytes = serializeOffset(offset);
        final ByteString valueBytes = serializer.apply(d);

        if (builder == null) {
            builder = Mutations.builder();
            building.put(rowKey, builder);
        }

        builder.setCell(columnFamily, offsetBytes, valueBytes);

        if (builder.size() >= MAX_BATCH_SIZE) {
            saved.add(Pair.of(rowKey, builder.build()));
            building.put(rowKey, Mutations.builder());
        }
    }

    final ImmutableList.Builder<AsyncFuture<WriteMetric>> writes = ImmutableList.builder();

    final RequestTimer<WriteMetric> timer = WriteMetric.timer();

    for (final Pair<RowKey, Mutations> e : saved) {
        final ByteString rowKeyBytes = serialize(e.getKey(), rowKeySerializer);
        writes.add(client.mutateRow(table, rowKeyBytes, e.getValue()).directTransform(result -> timer.end()));
    }

    for (final Map.Entry<RowKey, Mutations.Builder> e : building.entrySet()) {
        final ByteString rowKeyBytes = serialize(e.getKey(), rowKeySerializer);
        writes.add(client.mutateRow(table, rowKeyBytes, e.getValue().build())
                .directTransform(result -> timer.end()));
    }

    return async.collect(writes.build(), WriteMetric.reduce());
}