Example usage for java.util Collections shuffle

List of usage examples for java.util Collections shuffle

Introduction

On this page you can find example usage of java.util.Collections.shuffle.

Prototype

@SuppressWarnings({ "rawtypes", "unchecked" })
public static void shuffle(List<?> list, Random rnd) 

Document

Randomly permute the specified list using the specified source of randomness.
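
Before the project examples below, here is a minimal, self-contained sketch of calling this overload directly; the list contents and the seed are illustrative only.

import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.Random;

public class ShuffleDemo {
    public static void main(String[] args) {
        List<String> items = new ArrayList<String>(Arrays.asList("a", "b", "c", "d", "e"));

        // A fixed seed makes the permutation reproducible across runs,
        // which is handy in tests; pass new Random() for a different
        // order on every run.
        Collections.shuffle(items, new Random(42L));

        System.out.println(items);
    }
}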

Usage

From source file:com.alibaba.wasp.master.balancer.DefaultLoadBalancer.java

/**
 * Generate a global load balancing plan according to the specified map of
 * server information to the most loaded entityGroups of each server.
 *
 * The load balancing invariant is that all servers are within 1 entityGroup of the
 * average number of entityGroups per server. If the average is an integer number,
 * all servers will be balanced to the average. Otherwise, all servers will
 * have either floor(average) or ceiling(average) entityGroups.
 * 
 * HBASE-3609 Modeled entityGroupsToMove using Guava's MinMaxPriorityQueue so that
 * we can fetch from both ends of the queue. At the beginning, we check
 * whether there was an empty entityGroup server just discovered by the Master. If so, we
 * alternately choose new / old entityGroups from head / tail of entityGroupsToMove,
 * respectively. This alternation avoids clustering young entityGroups on the newly
 * discovered entityGroup server. Otherwise, we choose new entityGroups from head of
 * entityGroupsToMove.
 * 
 * Another improvement from HBASE-3609 is that we assign entityGroups from
 * entityGroupsToMove to underloaded servers in round-robin fashion. Previously one
 * underloaded server would be filled before we move onto the next underloaded
 * server, leading to clustering of young entityGroups.
 * 
 * Finally, we randomly shuffle underloaded servers so that they receive
 * offloaded entityGroups relatively evenly across calls to balanceCluster().
 * 
 * The algorithm is currently implemented as such:
 * 
 * <ol>
 * <li>Determine the two valid numbers of entityGroups each server should have,
 * <b>MIN</b>=floor(average) and <b>MAX</b>=ceiling(average).
 * 
 * <li>Iterate down the most loaded servers, shedding entityGroups from each so
 * each server hosts exactly <b>MAX</b> entityGroups. Stop once you reach a server
 * that already has &lt;= <b>MAX</b> entityGroups.
 * <p>
 * Order the entityGroups to move from most recent to least.
 * 
 * <li>Iterate down the least loaded servers, assigning entityGroups so each server
 * has exactly <b>MIN</b> entityGroups. Stop once you reach a server that already
 * has &gt;= <b>MIN</b> entityGroups.
 * 
 * EntityGroups being assigned to underloaded servers are those that were shed in
 * the previous step. It is possible that there were not enough entityGroups shed
 * to fill each underloaded server to <b>MIN</b>. If so, we end up with the
 * number of entityGroups still required to do so, <b>neededEntityGroups</b>.
 * 
 * It is also possible that we were able to fill each underloaded server but
 * ended up with entityGroups that were shed from overloaded servers and still
 * do not have an assignment.
 * 
 * If neither of these conditions hold (no entityGroups needed to fill the
 * underloaded servers, no entityGroups leftover from overloaded servers), we are
 * done and return. Otherwise we handle these cases below.
 * 
 * <li>If <b>neededEntityGroups</b> is non-zero (still have underloaded servers),
 * we iterate the most loaded servers again, shedding a single entityGroup from
 * each (this brings them from having <b>MAX</b> entityGroups to having <b>MIN</b>
 * entityGroups).
 * 
 * <li>We now definitely have more entityGroups that need assignment, either from
 * the previous step or from the original shedding from overloaded servers.
 * Iterate the least loaded servers filling each to <b>MIN</b>.
 * 
 * <li>If we still have more entityGroups that need assignment, again iterate the
 * least loaded servers, this time giving each one (filling them to
 * <b>MAX</b>) until we run out.
 * 
 * <li>All servers will now either host <b>MIN</b> or <b>MAX</b> entityGroups.
 * 
 * In addition, any server hosting &gt;= <b>MAX</b> entityGroups is guaranteed to
 * end up with <b>MAX</b> entityGroups at the end of the balancing. This ensures
 * the minimal number of entityGroups possible are moved.
 * </ol>
 * 
 * TODO: We can reassign away from a particular server at most as many
 * entityGroups as it reports as most loaded. Should we just keep all
 * assignment in memory? Any objections? Does this mean we need HeapSize on
 * HMaster? Or just careful monitor? (current thinking is we will hold all
 * assignments in memory)
 * 
 * @param clusterMap Map of entityGroup servers and their load/entityGroup information
 *          to a list of their most loaded entityGroups
 * @return a list of entityGroups to be moved, including source and destination, or
 *         null if cluster is already balanced
 */
public List<EntityGroupPlan> balanceCluster(Map<ServerName, List<EntityGroupInfo>> clusterMap) {
    boolean emptyFServerPresent = false;
    long startTime = System.currentTimeMillis();

    ClusterLoadState cs = new ClusterLoadState(clusterMap);

    int numServers = cs.getNumServers();
    if (numServers == 0) {
        LOG.debug("numServers=0 so skipping load balancing");
        return null;
    }
    NavigableMap<ServerAndLoad, List<EntityGroupInfo>> serversByLoad = cs.getServersByLoad();

    int numEntityGroups = cs.getNumEntityGroups();

    if (!this.needsBalance(cs)) {
        // Skipped because no server outside (min,max) range
        float average = cs.getLoadAverage(); // for logging
        LOG.info("Skipping load balancing because balanced cluster; " + "servers=" + numServers + " "
                + "entityGroups=" + numEntityGroups + " average=" + average + " " + "mostloaded="
                + serversByLoad.lastKey().getLoad() + " leastloaded=" + serversByLoad.firstKey().getLoad());
        return null;
    }

    int min = numEntityGroups / numServers;
    int max = numEntityGroups % numServers == 0 ? min : min + 1;

    // Used to check the balance result.
    StringBuilder strBalanceParam = new StringBuilder();
    strBalanceParam.append("Balance parameter: numEntityGroups=").append(numEntityGroups)
            .append(", numServers=").append(numServers).append(", max=").append(max).append(", min=")
            .append(min);
    LOG.debug(strBalanceParam.toString());

    // Balance the cluster
    // TODO: Look at data block locality or a more complex load to do this
    MinMaxPriorityQueue<EntityGroupPlan> entityGroupsToMove = MinMaxPriorityQueue.orderedBy(rpComparator)
            .create();
    List<EntityGroupPlan> entityGroupsToReturn = new ArrayList<EntityGroupPlan>();

    // Walk down most loaded, pruning each to the max
    int serversOverloaded = 0;
    // flag used to fetch entityGroups from head and tail of list, alternately
    boolean fetchFromTail = false;
    Map<ServerName, BalanceInfo> serverBalanceInfo = new TreeMap<ServerName, BalanceInfo>();
    for (Map.Entry<ServerAndLoad, List<EntityGroupInfo>> server : serversByLoad.descendingMap().entrySet()) {
        ServerAndLoad sal = server.getKey();
        int entityGroupCount = sal.getLoad();
        if (entityGroupCount <= max) {
            serverBalanceInfo.put(sal.getServerName(), new BalanceInfo(0, 0));
            break;
        }
        serversOverloaded++;
        List<EntityGroupInfo> entityGroups = server.getValue();
        int numToOffload = Math.min(entityGroupCount - max, entityGroups.size());
        // account for the out-of-band entityGroups which were assigned to this server
        // after some other entityGroup server crashed
        Collections.sort(entityGroups, riComparator);
        int numTaken = 0;
        for (int i = 0; i <= numToOffload;) {
            EntityGroupInfo egInfo = entityGroups.get(i); // fetch from head
            if (fetchFromTail) {
                egInfo = entityGroups.get(entityGroups.size() - 1 - i);
            }
            i++;
            entityGroupsToMove.add(new EntityGroupPlan(egInfo, sal.getServerName(), null));
            numTaken++;
            if (numTaken >= numToOffload)
                break;
            // fetch in alternate order if there is a new entityGroup server
            if (emptyFServerPresent) {
                fetchFromTail = !fetchFromTail;
            }
        }
        serverBalanceInfo.put(sal.getServerName(), new BalanceInfo(numToOffload, (-1) * numTaken));
    }
    int totalNumMoved = entityGroupsToMove.size();

    // Walk down least loaded, filling each to the min
    int neededEntityGroups = 0; // number of entityGroups needed to bring all up to min
    fetchFromTail = false;

    Map<ServerName, Integer> underloadedServers = new HashMap<ServerName, Integer>();
    for (Map.Entry<ServerAndLoad, List<EntityGroupInfo>> server : serversByLoad.entrySet()) {
        int entityGroupCount = server.getKey().getLoad();
        if (entityGroupCount >= min) {
            break;
        }
        underloadedServers.put(server.getKey().getServerName(), min - entityGroupCount);
    }
    // number of servers that get new entityGroups
    int serversUnderloaded = underloadedServers.size();
    int incr = 1;
    List<ServerName> sns = Arrays
            .asList(underloadedServers.keySet().toArray(new ServerName[serversUnderloaded]));
    Collections.shuffle(sns, RANDOM);
    while (entityGroupsToMove.size() > 0) {
        int cnt = 0;
        int i = incr > 0 ? 0 : underloadedServers.size() - 1;
        for (; i >= 0 && i < underloadedServers.size(); i += incr) {
            if (entityGroupsToMove.isEmpty())
                break;
            ServerName si = sns.get(i);
            int numToTake = underloadedServers.get(si);
            if (numToTake == 0)
                continue;

            addEntityGroupPlan(entityGroupsToMove, fetchFromTail, si, entityGroupsToReturn);
            if (emptyFServerPresent) {
                fetchFromTail = !fetchFromTail;
            }

            underloadedServers.put(si, numToTake - 1);
            cnt++;
            BalanceInfo bi = serverBalanceInfo.get(si);
            if (bi == null) {
                bi = new BalanceInfo(0, 0);
                serverBalanceInfo.put(si, bi);
            }
            bi.setNumEntityGroupsAdded(bi.getNumEntityGroupsAdded() + 1);
        }
        if (cnt == 0)
            break;
        // iterates underloadedServers in the other direction
        incr = -incr;
    }
    for (Integer i : underloadedServers.values()) {
        // If we still want to take some, increment needed
        neededEntityGroups += i;
    }

    // If none needed to fill all to min and none left to drain all to max,
    // we are done
    if (neededEntityGroups == 0 && entityGroupsToMove.isEmpty()) {
        long endTime = System.currentTimeMillis();
        LOG.info("Calculated a load balance in " + (endTime - startTime) + "ms. " + "Moving " + totalNumMoved
                + " entityGroups off of " + serversOverloaded + " overloaded servers onto " + serversUnderloaded
                + " less loaded servers");
        return entityGroupsToReturn;
    }

    // Need to do a second pass.
    // Either more entityGroups to assign out or servers that are still underloaded

    // If we need more to fill min, grab one from each most loaded until enough
    if (neededEntityGroups != 0) {
        // Walk down most loaded, grabbing one from each until we get enough
        for (Map.Entry<ServerAndLoad, List<EntityGroupInfo>> server : serversByLoad.descendingMap()
                .entrySet()) {
            BalanceInfo balanceInfo = serverBalanceInfo.get(server.getKey().getServerName());
            int idx = balanceInfo == null ? 0 : balanceInfo.getNextEntityGroupForUnload();
            if (idx >= server.getValue().size())
                break;
            EntityGroupInfo entityGroup = server.getValue().get(idx);
            entityGroupsToMove.add(new EntityGroupPlan(entityGroup, server.getKey().getServerName(), null));
            totalNumMoved++;
            if (--neededEntityGroups == 0) {
                // No more entityGroups needed, done shedding
                break;
            }
        }
    }

    // Now we have a set of entityGroups that must all be assigned out
    // Assign each underloaded up to the min, then if leftovers, assign to max

    // Walk down least loaded, assigning to each to fill up to min
    for (Map.Entry<ServerAndLoad, List<EntityGroupInfo>> server : serversByLoad.entrySet()) {
        int entityGroupCount = server.getKey().getLoad();
        if (entityGroupCount >= min)
            break;
        BalanceInfo balanceInfo = serverBalanceInfo.get(server.getKey().getServerName());
        if (balanceInfo != null) {
            entityGroupCount += balanceInfo.getNumEntityGroupsAdded();
        }
        if (entityGroupCount >= min) {
            continue;
        }
        int numToTake = min - entityGroupCount;
        int numTaken = 0;
        while (numTaken < numToTake && 0 < entityGroupsToMove.size()) {
            addEntityGroupPlan(entityGroupsToMove, fetchFromTail, server.getKey().getServerName(),
                    entityGroupsToReturn);
            numTaken++;
            if (emptyFServerPresent) {
                fetchFromTail = !fetchFromTail;
            }
        }
    }

    // If we still have entityGroups to dish out, assign underloaded to max
    if (0 < entityGroupsToMove.size()) {
        for (Map.Entry<ServerAndLoad, List<EntityGroupInfo>> server : serversByLoad.entrySet()) {
            int entityGroupCount = server.getKey().getLoad();
            if (entityGroupCount >= max) {
                break;
            }
            addEntityGroupPlan(entityGroupsToMove, fetchFromTail, server.getKey().getServerName(),
                    entityGroupsToReturn);
            if (emptyFServerPresent) {
                fetchFromTail = !fetchFromTail;
            }
            if (entityGroupsToMove.isEmpty()) {
                break;
            }
        }
    }

    long endTime = System.currentTimeMillis();

    if (!entityGroupsToMove.isEmpty() || neededEntityGroups != 0) {
        // Emit data so we can diagnose how the balancer went astray.
        LOG.warn("entityGroupsToMove=" + totalNumMoved + ", numServers=" + numServers + ", serversOverloaded="
                + serversOverloaded + ", serversUnderloaded=" + serversUnderloaded);
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<ServerName, List<EntityGroupInfo>> e : clusterMap.entrySet()) {
            if (sb.length() > 0)
                sb.append(", ");
            sb.append(e.getKey().toString());
            sb.append(" ");
            sb.append(e.getValue().size());
        }
        LOG.warn("Input " + sb.toString());
    }

    // All done!
    LOG.info("Done. Calculated a load balance in " + (endTime - startTime) + "ms. " + "Moving " + totalNumMoved
            + " entityGroups off of " + serversOverloaded + " overloaded servers onto " + serversUnderloaded
            + " less loaded servers");

    return entityGroupsToReturn;
}
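
The detail most relevant to this page is the shuffle of the underloaded servers with a shared Random, followed by round-robin assignment. A self-contained sketch of that pattern, with plain strings standing in for servers and entityGroups (all names below are illustrative, not part of the original class):

import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.Deque;
import java.util.List;
import java.util.Random;

public class RoundRobinAssignDemo {
    public static void main(String[] args) {
        // Stand-ins for the entityGroups waiting to be reassigned.
        Deque<String> toMove = new ArrayDeque<String>(Arrays.asList("eg1", "eg2", "eg3", "eg4", "eg5"));

        // Underloaded targets; shuffling them means no single server is
        // always filled first across repeated balancing runs.
        List<String> underloaded = new ArrayList<String>(Arrays.asList("serverA", "serverB", "serverC"));
        Collections.shuffle(underloaded, new Random());

        // Round-robin assignment instead of filling one server completely
        // before moving on to the next.
        int i = 0;
        while (!toMove.isEmpty()) {
            String target = underloaded.get(i % underloaded.size());
            System.out.println(toMove.poll() + " -> " + target);
            i++;
        }
    }
}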

From source file:jCloisterZone.CarcassonneEnvironment.java

@Override
protected void startState() {
    relationalWrapper_.startState();

    while (!ProgramArgument.EXPERIMENT_MODE.booleanValue() && client_.isRunning()) {
        try {
            Thread.yield();
        } catch (Exception e) {
        }
    }
    client_.createGame();
    earlyExit_ = false;
    earlyExitPlayers_.clear();
    prevScores_.clear();

    if (environment_ == null) {
        // Sleep only as long as needed to get the clientID.
        long clientID = (ProgramArgument.EXPERIMENT_MODE.booleanValue()) ? client_.getClientId() : -1;
        while (!ProgramArgument.EXPERIMENT_MODE.booleanValue() && clientID == -1) {
            try {
                Thread.yield();
                clientID = client_.getClientId();
            } catch (Exception e) {
            }
        }

        server_ = client_.getServer();
        server_.setRandomGenerator(RRLExperiment.random_);

        // Handle number of players playing
        slots_ = new ArrayList<PlayerSlot>(players_.length);
        int slotIndex = 0;
        for (String playerName : players_) {
            String playerNameIndex = playerName + slotIndex;
            if (playerName.equals(CERRLA_NAME)) {
                // Agent-controlled
                slots_.add(new PlayerSlot(slotIndex, PlayerSlot.SlotType.PLAYER, playerNameIndex, clientID));
            } else if (playerName.equals(AI_NAME)) {
                // AI controlled
                PlayerSlot slot = new PlayerSlot(slotIndex, PlayerSlot.SlotType.AI, playerNameIndex, clientID);
                slot.setAiClassName(LegacyAiPlayer.class.getName());
                slots_.add(slot);
            } else if (playerName.equals(RANDOM_NAME)) {
                // Random-AI controlled
                PlayerSlot slot = new PlayerSlot(slotIndex, PlayerSlot.SlotType.AI, playerNameIndex, clientID);
                slot.setAiClassName(RandomAIPlayer.class.getName());
                slots_.add(slot);
            } else if (playerName.equals(HUMAN_NAME)) {
                // Human-controlled
                slots_.add(new PlayerSlot(slotIndex, PlayerSlot.SlotType.PLAYER, playerNameIndex, clientID));
            }
            slotIndex++;
        }

        // Start the game.
        environment_ = client_.getGame();
        while (!ProgramArgument.EXPERIMENT_MODE.booleanValue() && environment_ == null) {
            try {
                Thread.yield();
            } catch (Exception e) {
            }
            environment_ = client_.getGame();
        }
        relationalWrapper_.setGame(environment_);
        environment_.addGameListener(relationalWrapper_);
        // Ad-hoc fix
        if (ProgramArgument.EXPERIMENT_MODE.booleanValue())
            environment_.addUserInterface(relationalWrapper_);
        clientInterface_ = environment_.getUserInterface();
    } else if (players_.length > 1) {
        // Reset the UIs
        server_.stopGame();
        environment_.clearUserInterface();
        environment_.addUserInterface(clientInterface_);

        // Clear the slots and re-add them.
        for (int i = 0; i < PlayerSlot.COUNT; i++) {
            server_.updateSlot(new PlayerSlot(i), null);
        }
    }
    // Ad-hoc fix
    if (!ProgramArgument.EXPERIMENT_MODE.booleanValue()) {
        environment_.addUserInterface(relationalWrapper_);
    }

    // Randomise the slots
    Collections.shuffle(slots_, RRLExperiment.random_);
    for (int i = 0; i < slots_.size(); i++) {
        PlayerSlot slot = slots_.get(i);
        PlayerSlot cloneSlot = new PlayerSlot(i, slot.getType(), slot.getNick(), slot.getOwner());
        cloneSlot.setAiClassName(slot.getAiClassName());
        server_.updateSlot(cloneSlot, LegacyAiPlayer.supportedExpansions());
    }

    server_.startGame();
    // Sleep until game has started
    while (!ProgramArgument.EXPERIMENT_MODE.booleanValue() && (environment_ == null
            || environment_.getBoard() == null || environment_.getTilePack() == null)) {
        environment_ = ((ClientStub) Proxy.getInvocationHandler(server_)).getGame();
        try {
            Thread.yield();
        } catch (Exception e) {
        }
    }

    runPhases();

    currentPlayer_ = null;
}

From source file:lab4.YouQuiz.java

private void shuffleQuestions() {
    Collections.shuffle(questionsArray, new Random(System.nanoTime()));
    for (int i = 0; i < questionsArray.size(); ++i) {
        if (questionsArray.get(i).type == Question.QUESTION_TYPE_MULTIPLE_CHOICE
                || questionsArray.get(i).type == Question.QUESTION_TYPE_TRUE_FALSE) {
            ((MultipleChoiceQuestion) questionsArray.get(i)).shuffleChoices();
        }
    }
}

From source file:com.adityarathi.muo.services.AudioPlaybackService.java

/**
 * Initializes the list of pointers to each cursor row.
 */
private void initPlaybackIndecesList(boolean playAll) {
    if (getCursor() != null && getPlaybackIndecesList() != null) {
        getPlaybackIndecesList().clear();
        for (int i = 0; i < getCursor().getCount(); i++) {
            getPlaybackIndecesList().add(i);
        }

        if (isShuffleOn() && !playAll) {
            //Build a new list that doesn't include the current song index.
            ArrayList<Integer> newList = new ArrayList<Integer>(getPlaybackIndecesList());
            newList.remove(getCurrentSongIndex());

            //Shuffle the new list.
            Collections.shuffle(newList, new Random(System.nanoTime()));

            //Plug in the current song index back into the new list.
            newList.add(getCurrentSongIndex(), getCurrentSongIndex());
            mPlaybackIndecesList = newList;

        } else if (isShuffleOn() && playAll) {
            //Shuffle all elements.
            Collections.shuffle(getPlaybackIndecesList(), new Random(System.nanoTime()));
        }

    } else {
        stopSelf();
    }

}

From source file:edu.upenn.cis.orchestra.workloadgenerator.Generator.java

/**
 * Generate an orchestra schema.
 * 
 * @param generation
 *            the generation number.
 */
public void generate(int generation) {
    _generation = generation;
    // generate the random schemas
    for (int i = _start; i < _end; i++) {
        List<String> pi = new ArrayList<String>(Stats.getAtts());
        Collections.shuffle(pi, _random);
        List<List<String>> schema = new ArrayList<List<String>>();
        // fixed size relation
        int j = 0;
        schema.add(new ArrayList<String>());

        for (String att : pi) {
            if (_random.nextDouble() <= (Double) _params.get("coverage")) {
                if (schema.get(j).size() == (Integer) _params.get("relsize") - 1) {
                    j++;
                    schema.add(new ArrayList<String>());
                }
                schema.get(j).add(att);
            }
        }
        if ((Boolean) _params.get("addValueAttr")) {
            for (int k = 0; k < schema.size(); k++)
                schema.get(k).add(Relation.valueAttrName);
        }
        _logicalSchemas.add(schema);
    }

    for (int i = _start; i < _end; i++) {
        _peers.add(i);
        _journal.addPeer(_generation, i, _logicalSchemas.get(i - _start));
    }

    // THIS CODE IS LEFT OVER FROM ORIGINAL VERSION
    // assign peers to schemas
    // for (int i = end; i < (Integer) _params
    // .get("peers"); i++) {
    // int j = _random.nextInt(_logicalSchemas.size());
    // _peers.add(j);
    // }

    if (null != _previousGeneration) {
        merge(_previousGeneration);
    }

    // generate mappings among the peers
    // Param determines direction of the mappings (fwd: true, bwd: false)
    switch ((Integer) _params.get("topology")) {
    case 0:
        randomTopology();
        break;

    case 1:
        topologyForDRedComparison(false);
        break;

    case 2:
        chainTopology(false);
        break;

    case 3:
        veeTopology(false);
        break;

    case 4:
        diamondTopology(false);
        break;

    case 5:
        chainVeeTopology(false);
        break;

    case 6:
        multiBranchTopology(false);
        break;

    case 7:
        //         naryTreeTopology(false, (Integer) _params.get("fanout"));
        naryTreeTopology(false, 2);
        break;

    case 8:
        naryTreeTopology(false, 3);
        break;

    case 9:
        naryTreeTopology(false, 4);
        break;

    case 10:
        naryTreeTopology(false, (Integer) _params.get("fanout"));
        break;

    case 11:
        doubleBranchTopology(false);
        break;

    default:
        randomTopology();
    }
    //      for(Object o : _mappings){
    //         System.out.println(o.toString());
    //      }
}
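
The shuffle in this generator is part of a shuffle-then-partition pattern: randomize the attribute list once, then split it into fixed-size relations. A reduced, self-contained sketch of that pattern (attribute names, seed, and relation size are made up for illustration):

import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.Random;

public class ShufflePartitionDemo {
    public static void main(String[] args) {
        List<String> atts = new ArrayList<String>(Arrays.asList("a1", "a2", "a3", "a4", "a5", "a6", "a7"));
        Random random = new Random(7L); // fixed seed so the generated schema is reproducible

        Collections.shuffle(atts, random);

        // Split the shuffled attributes into relations of at most relSize columns.
        int relSize = 3;
        List<List<String>> schema = new ArrayList<List<String>>();
        for (int i = 0; i < atts.size(); i += relSize) {
            schema.add(new ArrayList<String>(atts.subList(i, Math.min(i + relSize, atts.size()))));
        }

        System.out.println(schema);
    }
}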

From source file:org.apache.solr.client.solrj.impl.CloudSolrServer.java

private NamedList directUpdate(AbstractUpdateRequest request, ClusterState clusterState)
        throws SolrServerException {
    UpdateRequest updateRequest = (UpdateRequest) request;
    ModifiableSolrParams params = (ModifiableSolrParams) request.getParams();
    ModifiableSolrParams routableParams = new ModifiableSolrParams();
    ModifiableSolrParams nonRoutableParams = new ModifiableSolrParams();

    if (params != null) {
        nonRoutableParams.add(params);
        routableParams.add(params);
        for (String param : NON_ROUTABLE_PARAMS) {
            routableParams.remove(param);
        }
    }

    String collection = nonRoutableParams.get(UpdateParams.COLLECTION, defaultCollection);
    if (collection == null) {
        throw new SolrServerException(
                "No collection param specified on request and no default collection has been set.");
    }

    //Check to see if the collection is an alias.
    Aliases aliases = zkStateReader.getAliases();
    if (aliases != null) {
        Map<String, String> collectionAliases = aliases.getCollectionAliasMap();
        if (collectionAliases != null && collectionAliases.containsKey(collection)) {
            collection = collectionAliases.get(collection);
        }
    }

    DocCollection col = clusterState.getCollection(collection);

    DocRouter router = col.getRouter();

    if (router instanceof ImplicitDocRouter) {
        // short circuit as optimization
        return null;
    }

    //Create the URL map, which is keyed on slice name.
    //The value is a list of URLs for each replica in the slice.
    //The first value in the list is the leader for the slice.
    Map<String, List<String>> urlMap = buildUrlMap(col);
    if (urlMap == null) {
        // we could not find a leader yet - use unoptimized general path
        return null;
    }

    NamedList<Throwable> exceptions = new NamedList<Throwable>();
    NamedList<NamedList> shardResponses = new NamedList<NamedList>();

    Map<String, LBHttpSolrServer.Req> routes = updateRequest.getRoutes(router, col, urlMap, routableParams,
            this.idField);
    if (routes == null) {
        return null;
    }

    long start = System.nanoTime();

    if (parallelUpdates) {
        final Map<String, Future<NamedList<?>>> responseFutures = new HashMap<>(routes.size());
        for (final Map.Entry<String, LBHttpSolrServer.Req> entry : routes.entrySet()) {
            final String url = entry.getKey();
            final LBHttpSolrServer.Req lbRequest = entry.getValue();
            responseFutures.put(url, threadPool.submit(new Callable<NamedList<?>>() {
                @Override
                public NamedList<?> call() throws Exception {
                    return lbServer.request(lbRequest).getResponse();
                }
            }));
        }

        for (final Map.Entry<String, Future<NamedList<?>>> entry : responseFutures.entrySet()) {
            final String url = entry.getKey();
            final Future<NamedList<?>> responseFuture = entry.getValue();
            try {
                shardResponses.add(url, responseFuture.get());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                throw new RuntimeException(e);
            } catch (ExecutionException e) {
                exceptions.add(url, e.getCause());
            }
        }

        if (exceptions.size() > 0) {
            throw new RouteException(ErrorCode.SERVER_ERROR, exceptions, routes);
        }
    } else {
        for (Map.Entry<String, LBHttpSolrServer.Req> entry : routes.entrySet()) {
            String url = entry.getKey();
            LBHttpSolrServer.Req lbRequest = entry.getValue();
            try {
                NamedList rsp = lbServer.request(lbRequest).getResponse();
                shardResponses.add(url, rsp);
            } catch (Exception e) {
                throw new SolrServerException(e);
            }
        }
    }

    UpdateRequest nonRoutableRequest = null;
    List<String> deleteQuery = updateRequest.getDeleteQuery();
    if (deleteQuery != null && deleteQuery.size() > 0) {
        UpdateRequest deleteQueryRequest = new UpdateRequest();
        deleteQueryRequest.setDeleteQuery(deleteQuery);
        nonRoutableRequest = deleteQueryRequest;
    }

    Set<String> paramNames = nonRoutableParams.getParameterNames();

    Set<String> intersection = new HashSet<>(paramNames);
    intersection.retainAll(NON_ROUTABLE_PARAMS);

    if (nonRoutableRequest != null || intersection.size() > 0) {
        if (nonRoutableRequest == null) {
            nonRoutableRequest = new UpdateRequest();
        }
        nonRoutableRequest.setParams(nonRoutableParams);
        List<String> urlList = new ArrayList<>();
        urlList.addAll(routes.keySet());
        Collections.shuffle(urlList, rand);
        LBHttpSolrServer.Req req = new LBHttpSolrServer.Req(nonRoutableRequest, urlList);
        try {
            LBHttpSolrServer.Rsp rsp = lbServer.request(req);
            shardResponses.add(urlList.get(0), rsp.getResponse());
        } catch (Exception e) {
            throw new SolrException(ErrorCode.SERVER_ERROR, urlList.get(0), e);
        }
    }

    long end = System.nanoTime();

    RouteResponse rr = condenseResponse(shardResponses, (long) ((end - start) / 1000000));
    rr.setRouteResponses(shardResponses);
    rr.setRoutes(routes);
    return rr;
}

From source file:com.comcast.cdn.traffic_control.traffic_router.core.router.TrafficRouter.java

public List<InetRecord> inetRecordsFromCaches(final DeliveryService ds, final List<Cache> caches,
        final Request request) {
    final List<InetRecord> addresses = new ArrayList<InetRecord>();
    final int maxDnsIps = ds.getMaxDnsIps();
    List<Cache> selectedCaches;

    if (maxDnsIps > 0 && isConsistentDNSRouting()) { // only consistent hash if we must
        final SortedMap<Double, Cache> cacheMap = consistentHash(caches, request.getHostname());
        final Dispersion dispersion = ds.getDispersion();
        selectedCaches = dispersion.getCacheList(cacheMap);
    } else if (maxDnsIps > 0) {
        /*
         * We also shuffle in NameServer when adding Records to the Message prior
         * to sending it out, as the Records are sorted later when we fill the
         * dynamic zone if DNSSEC is enabled. We shuffle here prior to pruning
         * for maxDnsIps so that we ensure we are spreading load across all caches
         * assigned to this delivery service.
        */
        Collections.shuffle(caches, random);

        selectedCaches = new ArrayList<Cache>();

        for (final Cache cache : caches) {
            selectedCaches.add(cache);

            if (selectedCaches.size() >= maxDnsIps) {
                break;
            }
        }
    } else {
        selectedCaches = caches;
    }

    for (final Cache cache : selectedCaches) {
        addresses.addAll(cache.getIpAddresses(ds.getTtls(), zoneManager, ds.isIp6RoutingEnabled()));
    }

    return addresses;
}
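
The middle branch above is a shuffle-then-truncate selection: permute the candidate caches, then keep only the first maxDnsIps of them. The same effect in isolation, with plain strings standing in for Cache objects (names are illustrative):

import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.Random;

public class ShuffleTruncateDemo {
    public static void main(String[] args) {
        List<String> caches = new ArrayList<String>(Arrays.asList("c1", "c2", "c3", "c4", "c5", "c6"));
        int maxDnsIps = 3;

        // Shuffling first spreads load across all caches over many calls;
        // truncating afterwards enforces the maxDnsIps cap.
        Collections.shuffle(caches, new Random());
        List<String> selected = caches.subList(0, Math.min(maxDnsIps, caches.size()));

        System.out.println(selected);
    }
}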

From source file:com.jd.survey.web.survey.PrivateSurveyController.java

/**
 * Prepares to edit a survey page.
 * @param surveyId
 * @param order
 * @param uiModel
 * @param principal
 * @return
 */
@Secured({ "ROLE_ADMIN", "ROLE_SURVEY_ADMIN", "ROLE_SURVEY_PARTICIPANT" })
@RequestMapping(value = "/{id}/{po}", produces = "text/html")
public String editSurveyPage(@PathVariable("id") Long surveyId, @PathVariable("po") Short order, Model uiModel,
        Principal principal, HttpServletRequest httpServletRequest) {
    log.info("editSurveyPage surveyId=" + surveyId + " pageOrder" + order);
    try {
        String login = principal.getName();
        Survey survey = surveyService.survey_findById(surveyId);
        //Check if the user is authorized
        if (!survey.getLogin().equals(login)) {
            log.warn(UNAUTHORIZED_ATTEMPT_TO_ACCESS_SURVEY_WARNING_MESSAGE + surveyId
                    + REQUEST_PATH_WARNING_MESSAGE + httpServletRequest.getPathInfo()
                    + FROM_USER_LOGIN_WARNING_MESSAGE + principal.getName() + FROM_IP_WARNING_MESSAGE
                    + httpServletRequest.getLocalAddr());
            return "accessDenied";

        }

        //Check that the survey was not submitted
        if (!(survey.getStatus().equals(SurveyStatus.I) || survey.getStatus().equals(SurveyStatus.R))) {
            log.warn(UNAUTHORIZED_ATTEMPT_TO_EDIT_SUBMITTED_SURVEY_WARNING_MESSAGE + surveyId
                    + REQUEST_PATH_WARNING_MESSAGE + httpServletRequest.getPathInfo()
                    + FROM_USER_LOGIN_WARNING_MESSAGE + principal.getName() + FROM_IP_WARNING_MESSAGE
                    + httpServletRequest.getLocalAddr());
            return "accessDenied";

        }

        SurveyPage surveyPage = surveyService.surveyPage_get(surveyId, order,
                messageSource.getMessage(DATE_FORMAT, null, LocaleContextHolder.getLocale()));

        //randomize the order of the questions
        if (surveyPage.getRandomizeQuestions()) {
            Collections.shuffle(surveyPage.getQuestionAnswers(), new Random(System.nanoTime()));
        }

        //randomize the order of each question's options
        for (QuestionAnswer questionAnswer : surveyPage.getQuestionAnswers()) {
            questionAnswer.getQuestion()
                    .setOptionsList(new ArrayList<QuestionOption>(questionAnswer.getQuestion().getOptions()));
            if (questionAnswer.getQuestion().getRandomizeOptions()) {
                Collections.shuffle(questionAnswer.getQuestion().getOptionsList(),
                        new Random(System.nanoTime()));
            }
        }

        List<SurveyPage> surveyPages = surveyService.surveyPage_getAll(surveyId,
                messageSource.getMessage(DATE_FORMAT, null, LocaleContextHolder.getLocale()));
        uiModel.addAttribute("survey_base_path", "private");
        uiModel.addAttribute("survey", surveyPage.getSurvey());
        uiModel.addAttribute("surveyDefinition",
                surveySettingsService.surveyDefinition_findById(surveyPage.getSurvey().getTypeId()));

        uiModel.addAttribute("surveyPage", surveyPage);
        uiModel.addAttribute("surveyPages", surveyPages);
        return "surveys/page";
    } catch (Exception e) {
        log.error(e.getMessage(), e);
        throw (new RuntimeException(e));
    }
}

From source file:com.jd.survey.web.survey.PublicSurveyController.java

/**
 * Prepares the edit survey page.
 * @param surveyId
 * @param order
 * @param uiModel
 * @param principal
 * @return
 */
@RequestMapping(value = "/{id}/{po}", produces = "text/html")
public String editSurveyPage(@PathVariable("id") Long surveyId, @PathVariable("po") Short order, Model uiModel,
        HttpServletRequest httpServletRequest) {
    log.info("editSurveyPage surveyId=" + surveyId + " pageOrder" + order);
    try {
        SurveyPage surveyPage = surveyService.surveyPage_get(surveyId, order,
                messageSource.getMessage(DATE_FORMAT, null, LocaleContextHolder.getLocale()));
        SurveyDefinition surveyDefinition = surveySettingsService
                .surveyDefinition_findById(surveyPage.getSurvey().getTypeId());

        //randomize the order of the questions
        if (surveyPage.getRandomizeQuestions()) {
            Collections.shuffle(surveyPage.getQuestionAnswers(), new Random(System.nanoTime()));
        }

        //randomize the order of each question's options
        for (QuestionAnswer questionAnswer : surveyPage.getQuestionAnswers()) {
            questionAnswer.getQuestion()
                    .setOptionsList(new ArrayList<QuestionOption>(questionAnswer.getQuestion().getOptions()));
            if (questionAnswer.getQuestion().getRandomizeOptions()) {
                Collections.shuffle(questionAnswer.getQuestion().getOptionsList(),
                        new Random(System.nanoTime()));
            }
        }

        //survey definition not open to the public
        if (!surveyDefinition.getIsPublic()) {
            log.warn(SURVEY_NOT_PUBLIC_WARNING_MESSAGE + httpServletRequest.getPathInfo()
                    + FROM_IP_WARNING_MESSAGE + httpServletRequest.getLocalAddr());
            return "accessDenied";
        }
        //Attempt to access a survey from a different IP address
        if (!surveyPage.getSurvey().getIpAddress().equalsIgnoreCase(httpServletRequest.getLocalAddr())) {
            log.warn(UNAUTHORIZED_ATTEMPT_TO_ACCESS_SURVEY_WARNING_MESSAGE + httpServletRequest.getPathInfo()
                    + FROM_IP_WARNING_MESSAGE + httpServletRequest.getLocalAddr());
            return "accessDenied";
        }

        List<SurveyPage> surveyPages = surveyService.surveyPage_getAll(surveyId,
                messageSource.getMessage(DATE_FORMAT, null, LocaleContextHolder.getLocale()));
        uiModel.addAttribute("survey_base_path", "open");
        uiModel.addAttribute("survey", surveyPage.getSurvey());
        uiModel.addAttribute("surveyPage", surveyPage);
        uiModel.addAttribute("surveyDefinition",
                surveySettingsService.surveyDefinition_findById(surveyPage.getSurvey().getTypeId()));
        uiModel.addAttribute("surveyPages", surveyPages);
        return "surveys/page";
    } catch (Exception e) {
        log.error(e.getMessage(), e);
        throw (new RuntimeException(e));
    }
}

From source file:br.msf.commons.util.CollectionUtils.java

public static void shuffle(final List<?> list, final Random random) {
    Collections.shuffle(list, random);
}
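
This last example simply delegates to Collections.shuffle. A property of this overload worth noting, shown in a small standalone demonstration (not taken from the project): two lists with the same elements shuffled with identically seeded Random instances end up in the same order, which the single-argument shuffle(List) does not guarantee.

import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.Random;

public class SeededShuffleDemo {
    public static void main(String[] args) {
        List<Integer> a = new ArrayList<Integer>(Arrays.asList(1, 2, 3, 4, 5));
        List<Integer> b = new ArrayList<Integer>(a);

        // Fresh Random instances with the same seed produce the same permutation.
        Collections.shuffle(a, new Random(123L));
        Collections.shuffle(b, new Random(123L));

        System.out.println(a.equals(b)); // prints: true
    }
}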