Example usage for java.sql Connection TRANSACTION_READ_COMMITTED

Introduction

This page collects example usages of java.sql Connection.TRANSACTION_READ_COMMITTED from open-source projects.

Prototype

int TRANSACTION_READ_COMMITTED = 2

Document

A constant indicating that dirty reads are prevented; non-repeatable reads and phantom reads can occur.
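
Before the project examples below, here is a minimal, self-contained sketch of the typical pattern: request the level, verify the driver supports it, then do transactional work. The in-memory H2 URL is a placeholder assumption, not taken from any example on this page.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class ReadCommittedSketch {
    public static void main(String[] args) throws SQLException {
        // Placeholder URL; substitute your own driver, URL, and credentials.
        try (Connection con = DriverManager.getConnection("jdbc:h2:mem:demo")) {
            // Not every driver supports every isolation level, so check first.
            if (con.getMetaData().supportsTransactionIsolationLevel(Connection.TRANSACTION_READ_COMMITTED)) {
                con.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);
            }
            con.setAutoCommit(false);
            // Queries here see only committed data (no dirty reads), but repeating
            // a query may return different rows (non-repeatable/phantom reads).
            con.commit();
        }
    }
}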

Usage

From source file:org.apache.hadoop.hive.metastore.txn.TxnHandler.java

/**
 * Concurrency/isolation notes:
 * This is mutexed with {@link #openTxns(OpenTxnRequest)} and other {@link #commitTxn(CommitTxnRequest)}
 * operations using select-for-update (S4U) on NEXT_TXN_ID.  Also, mutexes on the TXNS table for a specific txnid:X;
 * see more notes below.
 * In order to prevent lost updates, we need to determine if any 2 transactions overlap.  Each txn
 * is viewed as an interval [M,N]. M is the txnid and N is taken from the same NEXT_TXN_ID sequence
 * so that we can compare commit time of txn T with start time of txn S.  This sequence can be thought of
 * as a logical time counter.  If S.commitTime < T.startTime, T and S do NOT overlap.
 *
 * Motivating example:
 * Suppose we have multi-statement transactions T and S, both of which are attempting x = x + 1.
 * To prevent the lost-update problem, non-overlapping txns must lock in the snapshot
 * that they read appropriately.  In particular, if txns do not overlap, then one follows the other
 * (assuming they write the same entity), and thus the 2nd must see changes of the 1st.  We ensure
 * this by locking in snapshot after 
 * {@link #openTxns(OpenTxnRequest)} call is made (see {@link org.apache.hadoop.hive.ql.Driver#acquireLocksAndOpenTxn()})
 * and mutexing openTxn() with commit().  In other words, once S.commit() starts we must ensure
 * that txn T which will be considered a later txn, locks in a snapshot that includes the result
 * of S's commit (assuming no other txns).
 * As a counter-example, suppose we have S[3,3] and T[4,4] (commitId=txnid means no other transactions
 * were running in parallel).  If T and S both locked in the same snapshot (for example the commit of
 * txnid:2, which is possible if commitTxn() and openTxns() are not mutexed),
 * 'x' would be updated to the same value by both, i.e. a lost update.
 */
@Override
@RetrySemantics.Idempotent("No-op if already committed")
public void commitTxn(CommitTxnRequest rqst) throws NoSuchTxnException, TxnAbortedException, MetaException {
    long txnid = rqst.getTxnid();
    try {
        Connection dbConn = null;
        Statement stmt = null;
        ResultSet lockHandle = null;
        ResultSet commitIdRs = null, rs;
        try {
            lockInternal();
            dbConn = getDbConn(Connection.TRANSACTION_READ_COMMITTED);
            stmt = dbConn.createStatement();
            /**
             * Runs at READ_COMMITTED with S4U on the TXNS row for "txnid".  S4U ensures that no other
             * operation can change this txn (such as acquiring locks).  lock() and commitTxn()
             * should not normally run concurrently (for the same txn) but could due to bugs in the client,
             * which could then corrupt internal transaction manager state.  Also competes with abortTxn().
             */
            lockHandle = lockTransactionRecord(stmt, txnid, TXN_OPEN);
            if (lockHandle == null) {
                //if here, txn was not found (in expected state)
                TxnStatus actualTxnStatus = findTxnState(txnid, stmt);
                if (actualTxnStatus == TxnStatus.COMMITTED) {
                    /**
                     * This makes the operation idempotent
                     * (assume that this is most likely due to retry logic)
                     */
                    LOG.info("Nth commitTxn(" + JavaUtils.txnIdToString(txnid) + ") msg");
                    return;
                }
                raiseTxnUnexpectedState(actualTxnStatus, txnid);
                shouldNeverHappen(txnid);
                //dbConn is rolled back in finally{}
            }
            String conflictSQLSuffix = "from TXN_COMPONENTS where tc_txnid=" + txnid
                    + " and tc_operation_type IN(" + quoteChar(OpertaionType.UPDATE.sqlConst) + ","
                    + quoteChar(OpertaionType.DELETE.sqlConst) + ")";
            rs = stmt.executeQuery(sqlGenerator.addLimitClause(1, "tc_operation_type " + conflictSQLSuffix));
            if (rs.next()) {
                close(rs);
                //if here it means currently committing txn performed update/delete and we should check WW conflict
                /**
                 * This S4U will mutex with other commitTxn() and openTxns(). 
                 * -1 below makes txn intervals look like [3,3] [4,4] if all txns are serial
                 * Note: it's possible to have several txns have the same commit id.  Suppose 3 txns start
                 * at the same time and no new txns start until all 3 commit.
                 * We could've incremented the sequence for commitId as well, but it doesn't add anything functionally.
                 */
                commitIdRs = stmt
                        .executeQuery(sqlGenerator.addForUpdateClause("select ntxn_next - 1 from NEXT_TXN_ID"));
                if (!commitIdRs.next()) {
                    throw new IllegalStateException("No rows found in NEXT_TXN_ID");
                }
                long commitId = commitIdRs.getLong(1);
                Savepoint undoWriteSetForCurrentTxn = dbConn.setSavepoint();
                /**
                 * "select distinct" is used below because
                 * 1. once we get to multi-statement txns, we only care to record that something was updated once
                 * 2. if {@link #addDynamicPartitions(AddDynamicPartitions)} is retried by the caller it may create
                 *  duplicate entries in TXN_COMPONENTS
                 * but we want to add a PK on WRITE_SET, which won't have unique rows w/o this distinct
                 * even if it includes all of its columns
                 */
                int numCompsWritten = stmt.executeUpdate(
                        "insert into WRITE_SET (ws_database, ws_table, ws_partition, ws_txnid, ws_commit_id, ws_operation_type)"
                                + " select distinct tc_database, tc_table, tc_partition, tc_txnid, " + commitId
                                + ", tc_operation_type " + conflictSQLSuffix);
                /**
                 * see if any overlapping txns wrote the same element, i.e. have a conflict.
                 * Since the entire commit operation is mutexed wrt other start/commit ops,
                 * committed.ws_commit_id <= current.ws_commit_id for all txns
                 * thus if committed.ws_commit_id < current.ws_txnid, transactions do NOT overlap
                 * For example, [17,20] is committed, [6,80] is being committed right now - these overlap
                 * [17,20] committed and [21,21] committing now - these do not overlap.
                 * [17,18] committed and [18,19] committing now - these overlap  (here 18 started while 17 was still running)
                 */
                rs = stmt.executeQuery(sqlGenerator.addLimitClause(1,
                        "committed.ws_txnid, committed.ws_commit_id, committed.ws_database,"
                                + "committed.ws_table, committed.ws_partition, cur.ws_commit_id cur_ws_commit_id, "
                                + "cur.ws_operation_type cur_op, committed.ws_operation_type committed_op "
                                + "from WRITE_SET committed INNER JOIN WRITE_SET cur "
                                + "ON committed.ws_database=cur.ws_database and committed.ws_table=cur.ws_table "
                                +
                                 //For a partitioned table we always track writes at partition level (never at table level),
                                 //and for a non-partitioned table always at table level, thus the same table should never
                                 //have entries both with a partition key and without one
                                "and (committed.ws_partition=cur.ws_partition or (committed.ws_partition is null and cur.ws_partition is null)) "
                                + "where cur.ws_txnid <= committed.ws_commit_id" + //txns overlap; could replace ws_txnid
                                // with txnid, though any decent DB should infer this
                                " and cur.ws_txnid=" + txnid + //make sure RHS of join only has rows we just inserted as
                                // part of this commitTxn() op
                                " and committed.ws_txnid <> " + txnid + //and LHS only has committed txns
                                //U+U and U+D is a conflict but D+D is not and we don't currently track I in WRITE_SET at all
                                " and (committed.ws_operation_type=" + quoteChar(OpertaionType.UPDATE.sqlConst)
                                + " OR cur.ws_operation_type=" + quoteChar(OpertaionType.UPDATE.sqlConst)
                                + ")"));
                if (rs.next()) {
                    //found a conflict
                    String committedTxn = "[" + JavaUtils.txnIdToString(rs.getLong(1)) + "," + rs.getLong(2)
                            + "]";
                    StringBuilder resource = new StringBuilder(rs.getString(3)).append("/")
                            .append(rs.getString(4));
                    String partitionName = rs.getString(5);
                    if (partitionName != null) {
                        resource.append('/').append(partitionName);
                    }
                    String msg = "Aborting [" + JavaUtils.txnIdToString(txnid) + "," + rs.getLong(6) + "]"
                            + " due to a write conflict on " + resource + " committed by " + committedTxn + " "
                            + rs.getString(7) + "/" + rs.getString(8);
                    close(rs);
                    //remove WRITE_SET info for current txn since it's about to abort
                    dbConn.rollback(undoWriteSetForCurrentTxn);
                    LOG.info(msg);
                    //todo: should make abortTxns() write something into TXNS.TXN_META_INFO about this
                    if (abortTxns(dbConn, Collections.singletonList(txnid), true) != 1) {
                        throw new IllegalStateException(msg + " FAILED!");
                    }
                    dbConn.commit();
                    close(null, stmt, dbConn);
                    throw new TxnAbortedException(msg);
                } else {
                    //no conflicting operations, proceed with the rest of commit sequence
                }
            } else {
                /**
                 * current txn didn't update/delete anything (may have inserted), so just proceed with commit
                 *
                 * We only care about commit id for write txns, so for RO (when supported) txns we don't
                 * have to mutex on NEXT_TXN_ID.
                 * Consider: if RO txn is after a W txn, then RO's openTxns() will be mutexed with W's
                 * commitTxn() because both do S4U on NEXT_TXN_ID and thus RO will see result of W txn.
                 * If RO < W, then there is no reads-from relationship.
                 */
            }
            // Move the record from txn_components into completed_txn_components so that the compactor
            // knows where to look to compact.
            String s = "insert into COMPLETED_TXN_COMPONENTS select tc_txnid, tc_database, tc_table, "
                    + "tc_partition from TXN_COMPONENTS where tc_txnid = " + txnid;
            LOG.debug("Going to execute insert <" + s + ">");
            int modCount = 0;
            if ((modCount = stmt.executeUpdate(s)) < 1) {
                //this can be reasonable for an empty txn START/COMMIT or read-only txn
                //also an insert/update/delete (IUD) with dynamic partitions (DP) that didn't match any rows.
                LOG.info("Expected to move at least one record from txn_components to "
                        + "completed_txn_components when committing txn! " + JavaUtils.txnIdToString(txnid));
            }
            s = "delete from TXN_COMPONENTS where tc_txnid = " + txnid;
            LOG.debug("Going to execute update <" + s + ">");
            modCount = stmt.executeUpdate(s);
            s = "delete from HIVE_LOCKS where hl_txnid = " + txnid;
            LOG.debug("Going to execute update <" + s + ">");
            modCount = stmt.executeUpdate(s);
            s = "delete from TXNS where txn_id = " + txnid;
            LOG.debug("Going to execute update <" + s + ">");
            modCount = stmt.executeUpdate(s);
            LOG.debug("Going to commit");
            dbConn.commit();
        } catch (SQLException e) {
            LOG.debug("Going to rollback");
            rollbackDBConn(dbConn);
            checkRetryable(dbConn, e, "commitTxn(" + rqst + ")");
            throw new MetaException(
                    "Unable to update transaction database " + StringUtils.stringifyException(e));
        } finally {
            close(commitIdRs);
            close(lockHandle, stmt, dbConn);
            unlockInternal();
        }
    } catch (RetryException e) {
        commitTxn(rqst);
    }
}
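
The interval-overlap rule from the concurrency notes above can be isolated into a small predicate. This is an illustrative sketch with hypothetical names, not part of Hive's API:

// Txns are intervals [txnid, commitId] drawn from one logical counter, so S and T
// do not overlap iff one committed strictly before the other started.
static boolean overlaps(long sTxnId, long sCommitId, long tTxnId, long tCommitId) {
    return !(sCommitId < tTxnId || tCommitId < sTxnId);
}

Checked against the examples in the comments: overlaps(17, 20, 6, 80) and overlaps(17, 18, 18, 19) return true, while overlaps(17, 20, 21, 21) returns false.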

From source file:massbank.DatabaseManager.java

private HashMap<String, String> getDatabaseOfAccessions() {
    GetConfig config = new GetConfig(MassBankEnv.get(MassBankEnv.KEY_BASE_URL));
    String[] dbNames = config.getDbName();
    HashMap<String, String> dbMapping = new HashMap<String, String>();
    Connection con = null;
    try {
        Class.forName(driver);
        con = DriverManager.getConnection(connectUrl, user, password);
        con.setAutoCommit(false);
        con.setTransactionIsolation(java.sql.Connection.TRANSACTION_READ_COMMITTED);
        for (String db : dbNames) {
            String sql = "SELECT ACCESSION FROM " + db + ".RECORD";
            PreparedStatement stmnt = con.prepareStatement(sql);
            ResultSet resultSet = stmnt.executeQuery();
            while (resultSet.next()) {
                dbMapping.put(resultSet.getString("ACCESSION"), db);
            }
            resultSet.close();
            stmnt.close();
        }
    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        if (con != null)
            try {
                con.close();
            } catch (SQLException e) {
                e.printStackTrace();
            }
    }
    return dbMapping;
}

From source file:com.alibaba.wasp.jdbc.result.JdbcDatabaseMetaData.java

/**
 * Returns the default transaction isolation level.
 *
 * @return Connection.TRANSACTION_READ_COMMITTED
 */
public int getDefaultTransactionIsolation() {
    return Connection.TRANSACTION_READ_COMMITTED;
}
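
A caller would normally discover a driver's default through java.sql.DatabaseMetaData rather than this Wasp class directly. A minimal sketch, reusing the placeholder URL from the introduction:

try (Connection con = DriverManager.getConnection("jdbc:h2:mem:demo")) {
    int def = con.getMetaData().getDefaultTransactionIsolation();
    if (def == Connection.TRANSACTION_READ_COMMITTED) {
        // this driver defaults to READ COMMITTED
    }
}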

From source file:com.cloud.utils.db.TransactionLegacy.java

@SuppressWarnings({ "rawtypes", "unchecked" })
public static void initDataSource(Properties dbProps) {
    try {
        if (dbProps.size() == 0)
            return;

        s_dbHAEnabled = Boolean.valueOf(dbProps.getProperty("db.ha.enabled"));
        s_logger.info("Is Data Base High Availiability enabled? Ans : " + s_dbHAEnabled);
        String loadBalanceStrategy = dbProps.getProperty("db.ha.loadBalanceStrategy");
        // FIXME:  If params are missing...default them????
        final int cloudMaxActive = Integer.parseInt(dbProps.getProperty("db.cloud.maxActive"));
        final int cloudMaxIdle = Integer.parseInt(dbProps.getProperty("db.cloud.maxIdle"));
        final long cloudMaxWait = Long.parseLong(dbProps.getProperty("db.cloud.maxWait"));
        final String cloudUsername = dbProps.getProperty("db.cloud.username");
        final String cloudPassword = dbProps.getProperty("db.cloud.password");
        final String cloudHost = dbProps.getProperty("db.cloud.host");
        final String cloudDriver = dbProps.getProperty("db.cloud.driver");
        final int cloudPort = Integer.parseInt(dbProps.getProperty("db.cloud.port"));
        final String cloudDbName = dbProps.getProperty("db.cloud.name");
        final boolean cloudAutoReconnect = Boolean.parseBoolean(dbProps.getProperty("db.cloud.autoReconnect"));
        final String cloudValidationQuery = dbProps.getProperty("db.cloud.validationQuery");
        final String cloudIsolationLevel = dbProps.getProperty("db.cloud.isolation.level");

        int isolationLevel = Connection.TRANSACTION_READ_COMMITTED;
        if (cloudIsolationLevel == null) {
            isolationLevel = Connection.TRANSACTION_READ_COMMITTED;
        } else if (cloudIsolationLevel.equalsIgnoreCase("readcommitted")) {
            isolationLevel = Connection.TRANSACTION_READ_COMMITTED;
        } else if (cloudIsolationLevel.equalsIgnoreCase("repeatableread")) {
            isolationLevel = Connection.TRANSACTION_REPEATABLE_READ;
        } else if (cloudIsolationLevel.equalsIgnoreCase("serializable")) {
            isolationLevel = Connection.TRANSACTION_SERIALIZABLE;
        } else if (cloudIsolationLevel.equalsIgnoreCase("readuncommitted")) {
            isolationLevel = Connection.TRANSACTION_READ_UNCOMMITTED;
        } else {
            s_logger.warn("Unknown isolation level " + cloudIsolationLevel + ".  Using read uncommitted");
        }

        final boolean cloudTestOnBorrow = Boolean.parseBoolean(dbProps.getProperty("db.cloud.testOnBorrow"));
        final boolean cloudTestWhileIdle = Boolean.parseBoolean(dbProps.getProperty("db.cloud.testWhileIdle"));
        final long cloudTimeBtwEvictionRunsMillis = Long
                .parseLong(dbProps.getProperty("db.cloud.timeBetweenEvictionRunsMillis"));
        final long cloudMinEvictableIdleTimeMillis = Long
                .parseLong(dbProps.getProperty("db.cloud.minEvictableIdleTimeMillis"));
        final boolean cloudPoolPreparedStatements = Boolean
                .parseBoolean(dbProps.getProperty("db.cloud.poolPreparedStatements"));
        final String url = dbProps.getProperty("db.cloud.url.params");

        String cloudDbHAParams = null;
        String cloudSlaves = null;
        if (s_dbHAEnabled) {
            cloudDbHAParams = getDBHAParams("cloud", dbProps);
            cloudSlaves = dbProps.getProperty("db.cloud.slaves");
            s_logger.info("The slaves configured for Cloud Data base is/are : " + cloudSlaves);
        }

        final boolean useSSL = Boolean.parseBoolean(dbProps.getProperty("db.cloud.useSSL"));
        if (useSSL) {
            System.setProperty("javax.net.ssl.keyStore", dbProps.getProperty("db.cloud.keyStore"));
            System.setProperty("javax.net.ssl.keyStorePassword",
                    dbProps.getProperty("db.cloud.keyStorePassword"));
            System.setProperty("javax.net.ssl.trustStore", dbProps.getProperty("db.cloud.trustStore"));
            System.setProperty("javax.net.ssl.trustStorePassword",
                    dbProps.getProperty("db.cloud.trustStorePassword"));
        }

        final GenericObjectPool cloudConnectionPool = new GenericObjectPool(null, cloudMaxActive,
                GenericObjectPool.DEFAULT_WHEN_EXHAUSTED_ACTION, cloudMaxWait, cloudMaxIdle, cloudTestOnBorrow,
                false, cloudTimeBtwEvictionRunsMillis, 1, cloudMinEvictableIdleTimeMillis, cloudTestWhileIdle);

        final String cloudConnectionUri = cloudDriver + "://" + cloudHost
                + (s_dbHAEnabled ? "," + cloudSlaves : "") + ":" + cloudPort + "/" + cloudDbName
                + "?autoReconnect=" + cloudAutoReconnect + (url != null ? "&" + url : "")
                + (useSSL ? "&useSSL=true" : "") + (s_dbHAEnabled ? "&" + cloudDbHAParams : "")
                + (s_dbHAEnabled ? "&loadBalanceStrategy=" + loadBalanceStrategy : "");
        DriverLoader.loadDriver(cloudDriver);

        final ConnectionFactory cloudConnectionFactory = new DriverManagerConnectionFactory(cloudConnectionUri,
                cloudUsername, cloudPassword);

        final KeyedObjectPoolFactory poolableObjFactory = (cloudPoolPreparedStatements
                ? new StackKeyedObjectPoolFactory()
                : null);

        final PoolableConnectionFactory cloudPoolableConnectionFactory = new PoolableConnectionFactory(
                cloudConnectionFactory, cloudConnectionPool, poolableObjFactory, cloudValidationQuery, false,
                false, isolationLevel);

        // Default Data Source for CloudStack
        s_ds = new PoolingDataSource(cloudPoolableConnectionFactory.getPool());

        // Configure the usage db
        final int usageMaxActive = Integer.parseInt(dbProps.getProperty("db.usage.maxActive"));
        final int usageMaxIdle = Integer.parseInt(dbProps.getProperty("db.usage.maxIdle"));
        final long usageMaxWait = Long.parseLong(dbProps.getProperty("db.usage.maxWait"));
        final String usageUsername = dbProps.getProperty("db.usage.username");
        final String usagePassword = dbProps.getProperty("db.usage.password");
        final String usageHost = dbProps.getProperty("db.usage.host");
        final String usageDriver = dbProps.getProperty("db.usage.driver");
        final int usagePort = Integer.parseInt(dbProps.getProperty("db.usage.port"));
        final String usageDbName = dbProps.getProperty("db.usage.name");
        final boolean usageAutoReconnect = Boolean.parseBoolean(dbProps.getProperty("db.usage.autoReconnect"));
        final String usageUrl = dbProps.getProperty("db.usage.url.params");

        final GenericObjectPool usageConnectionPool = new GenericObjectPool(null, usageMaxActive,
                GenericObjectPool.DEFAULT_WHEN_EXHAUSTED_ACTION, usageMaxWait, usageMaxIdle);

        final String usageConnectionUri = usageDriver + "://" + usageHost
                + (s_dbHAEnabled ? "," + dbProps.getProperty("db.cloud.slaves") : "") + ":" + usagePort + "/"
                + usageDbName + "?autoReconnect=" + usageAutoReconnect
                + (usageUrl != null ? "&" + usageUrl : "")
                + (s_dbHAEnabled ? "&" + getDBHAParams("usage", dbProps) : "")
                + (s_dbHAEnabled ? "&loadBalanceStrategy=" + loadBalanceStrategy : "");
        DriverLoader.loadDriver(usageDriver);

        final ConnectionFactory usageConnectionFactory = new DriverManagerConnectionFactory(usageConnectionUri,
                usageUsername, usagePassword);

        final PoolableConnectionFactory usagePoolableConnectionFactory = new PoolableConnectionFactory(
                usageConnectionFactory, usageConnectionPool, new StackKeyedObjectPoolFactory(), null, false,
                false);

        // Data Source for usage server
        s_usageDS = new PoolingDataSource(usagePoolableConnectionFactory.getPool());

        try {
            // Configure the simulator db
            final int simulatorMaxActive = Integer.parseInt(dbProps.getProperty("db.simulator.maxActive"));
            final int simulatorMaxIdle = Integer.parseInt(dbProps.getProperty("db.simulator.maxIdle"));
            final long simulatorMaxWait = Long.parseLong(dbProps.getProperty("db.simulator.maxWait"));
            final String simulatorUsername = dbProps.getProperty("db.simulator.username");
            final String simulatorPassword = dbProps.getProperty("db.simulator.password");
            final String simulatorHost = dbProps.getProperty("db.simulator.host");
            final String simulatorDriver = dbProps.getProperty("db.simulator.driver");
            final int simulatorPort = Integer.parseInt(dbProps.getProperty("db.simulator.port"));
            final String simulatorDbName = dbProps.getProperty("db.simulator.name");
            final boolean simulatorAutoReconnect = Boolean
                    .parseBoolean(dbProps.getProperty("db.simulator.autoReconnect"));

            final GenericObjectPool simulatorConnectionPool = new GenericObjectPool(null, simulatorMaxActive,
                    GenericObjectPool.DEFAULT_WHEN_EXHAUSTED_ACTION, simulatorMaxWait, simulatorMaxIdle);

            final String simulatorConnectionUri = simulatorDriver + "://" + simulatorHost + ":" + simulatorPort
                    + "/" + simulatorDbName + "?autoReconnect=" + simulatorAutoReconnect;
            DriverLoader.loadDriver(simulatorDriver);

            final ConnectionFactory simulatorConnectionFactory = new DriverManagerConnectionFactory(
                    simulatorConnectionUri, simulatorUsername, simulatorPassword);

            final PoolableConnectionFactory simulatorPoolableConnectionFactory = new PoolableConnectionFactory(
                    simulatorConnectionFactory, simulatorConnectionPool, new StackKeyedObjectPoolFactory(),
                    null, false, false);
            s_simulatorDS = new PoolingDataSource(simulatorPoolableConnectionFactory.getPool());
        } catch (Exception e) {
            s_logger.debug("Simulator DB properties are not available. Not initializing simulator DS");
        }
    } catch (final Exception e) {
        s_ds = getDefaultDataSource("cloud");
        s_usageDS = getDefaultDataSource("cloud_usage");
        s_simulatorDS = getDefaultDataSource("cloud_simulator");
        s_logger.warn(
                "Unable to load db configuration, using defaults with 5 connections. Falling back on assumed datasource on localhost:3306 using username:password=cloud:cloud. Please check your configuration",
                e);
    }
}

From source file:org.wso2.carbon.registry.core.jdbc.dao.JDBCResourceDAO.java

public String[] getChildren(CollectionImpl collection, int start, int pageLen,
        DataAccessManager dataAccessManager) throws RegistryException {
    String[] childPaths = null;

    if (Transaction.isStarted()) {
        childPaths = getChildren(collection, start, pageLen, JDBCDatabaseTransaction.getConnection());
    } else {
        Connection conn = null;
        boolean transactionSucceeded = false;
        try {
            if (!(dataAccessManager instanceof JDBCDataAccessManager)) {
                String msg = "Failed to get children. Invalid data access manager.";
                log.error(msg);
                throw new RegistryException(msg);
            }
            conn = ((JDBCDataAccessManager) dataAccessManager).getDataSource().getConnection();

            // If a managed connection already exists, use that instead of a new
            // connection.
            JDBCDatabaseTransaction.ManagedRegistryConnection temp = JDBCDatabaseTransaction
                    .getManagedRegistryConnection(conn);
            if (temp != null) {
                conn.close();
                conn = temp;
            }
            if (conn.getTransactionIsolation() != Connection.TRANSACTION_READ_COMMITTED) {
                conn.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);
            }
            conn.setAutoCommit(false);

            childPaths = getChildren(collection, start, pageLen, conn);
            transactionSucceeded = true;
        } catch (SQLException e) {

            String msg = "Failed to get the child paths " + pageLen + " child paths from " + start
                    + " of resource " + collection.getPath() + ". " + e.getMessage();
            log.error(msg, e);
            throw new RegistryException(msg, e);

        } finally {
            if (transactionSucceeded) {
                try {
                    conn.commit();
                } catch (SQLException e) {
                    log.error("Failed to commit the database connection used in "
                            + "getting child paths of the collection " + collection.getPath());
                }
            } else if (conn != null) {
                try {
                    conn.rollback();
                } catch (SQLException e) {
                    log.error("Failed to rollback the database connection used in "
                            + "getting child paths of the collection " + collection.getPath());
                }
            }
            if (conn != null) {
                try {
                    conn.close();
                } catch (SQLException e) {
                    log.error("Failed to close the database connection used in "
                            + "getting child paths of the collection " + collection.getPath());
                }
            }
        }
    }
    return childPaths;
}

From source file:org.wso2.carbon.repository.core.jdbc.dao.JDBCResourceDAO.java

public String[] getChildren(CollectionImpl collection, int start, int pageLen,
        DataAccessManager dataAccessManager) throws RepositoryException {
    String[] childPaths = null;

    if (Transaction.isStarted()) {
        childPaths = getChildren(collection, start, pageLen, JDBCDatabaseTransaction.getConnection());
    } else {
        Connection conn = null;
        boolean transactionSucceeded = false;
        try {
            if (!(dataAccessManager instanceof JDBCDataAccessManager)) {
                String msg = "Failed to get children. Invalid data access manager.";
                log.error(msg);
                throw new RepositoryDBException(msg);
            }
            conn = ((JDBCDataAccessManager) dataAccessManager).getDataSource().getConnection();

            // If a managed connection already exists, use that instead of a new
            // connection.
            JDBCDatabaseTransaction.ManagedRegistryConnection temp = JDBCDatabaseTransaction
                    .getManagedRegistryConnection(conn);

            if (temp != null) {
                conn.close();
                conn = temp;
            }

            if (conn.getTransactionIsolation() != Connection.TRANSACTION_READ_COMMITTED) {
                conn.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);
            }

            conn.setAutoCommit(false);

            childPaths = getChildren(collection, start, pageLen, conn);
            transactionSucceeded = true;
        } catch (SQLException e) {

            String msg = "Failed to get the child paths " + pageLen + " child paths from " + start
                    + " of resource " + collection.getPath() + ". " + e.getMessage();
            log.error(msg, e);
            throw new RepositoryDBException(msg, e);

        } finally {
            if (transactionSucceeded) {
                try {
                    conn.commit();
                } catch (SQLException e) {
                    log.error("Failed to commit the database connection used in "
                            + "getting child paths of the collection " + collection.getPath());
                }
            } else if (conn != null) {
                try {
                    conn.rollback();
                } catch (SQLException e) {
                    log.error("Failed to rollback the database connection used in "
                            + "getting child paths of the collection " + collection.getPath());
                }
            }
            if (conn != null) {
                try {
                    conn.close();
                } catch (SQLException e) {
                    log.error("Failed to close the database connection used in "
                            + "getting child paths of the collection " + collection.getPath());
                }
            }
        }
    }
    return childPaths;
}

From source file:com.nextep.designer.dbgm.services.impl.DataService.java

private Connection getRepositoryConnection() throws SQLException {
    final IConnection repoConn = repositoryService.getRepositoryConnection();
    final Connection jdbcConn = connectionService.connect(repoConn);
    jdbcConn.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);
    return jdbcConn;
}

From source file:org.wso2.carbon.user.core.jdbc.JDBCUserStoreManager.java

/**
 * @return a database connection with auto-commit disabled and READ_COMMITTED isolation
 * @throws SQLException
 * @throws UserStoreException
 */
protected Connection getDBConnection() throws SQLException, UserStoreException {
    Connection dbConnection = getJDBCDataSource().getConnection();
    dbConnection.setAutoCommit(false);
    if (dbConnection.getTransactionIsolation() != Connection.TRANSACTION_READ_COMMITTED) {
        dbConnection.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);
    }
    return dbConnection;
}

From source file:org.apache.hadoop.hive.metastore.txn.TxnHandler.java

@Override
@RetrySemantics.SafeToRetry
public void performWriteSetGC() {
    Connection dbConn = null;
    Statement stmt = null;
    ResultSet rs = null;
    try {
        dbConn = getDbConn(Connection.TRANSACTION_READ_COMMITTED);
        stmt = dbConn.createStatement();
        rs = stmt.executeQuery("select ntxn_next - 1 from NEXT_TXN_ID");
        if (!rs.next()) {
            throw new IllegalStateException("NEXT_TXN_ID is empty: DB is corrupted");
        }
        long highestAllocatedTxnId = rs.getLong(1);
        close(rs);
        rs = stmt.executeQuery("select min(txn_id) from TXNS where txn_state=" + quoteChar(TXN_OPEN));
        if (!rs.next()) {
            throw new IllegalStateException("Scalar query returned no rows?!?!!");
        }
        long commitHighWaterMark; //all currently open txns (if any) have txnid >= commitHighWaterMark
        long lowestOpenTxnId = rs.getLong(1);
        if (rs.wasNull()) {
            //if here then there are no open txns, and highestAllocatedTxnId must be
            //resolved (i.e. committed or aborted); either way
            //there are no open txns with id <= highestAllocatedTxnId.
            //The +1 is there because the "delete ..." below uses < (which is correct for the case when
            //there is an open txn).
            //Concurrency: even if a new txn starts (or starts + commits) it is still true that
            //there are no currently open txns that overlap with any committed txn with
            //commitId <= commitHighWaterMark (as set below).  So plain READ_COMMITTED is enough.
            commitHighWaterMark = highestAllocatedTxnId + 1;
        } else {
            commitHighWaterMark = lowestOpenTxnId;
        }
        int delCnt = stmt.executeUpdate("delete from WRITE_SET where ws_commit_id < " + commitHighWaterMark);
        LOG.info("Deleted " + delCnt + " obsolete rows from WRTIE_SET");
        dbConn.commit();
    } catch (SQLException ex) {
        LOG.warn("WriteSet GC failed due to " + getMessage(ex), ex);
    } finally {
        close(rs, stmt, dbConn);
    }
}
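
The high-water-mark choice in the branch above reduces to one expression. An illustrative sketch with hypothetical names, not Hive code:

// If any txn is still open, nothing at or above the lowest open txnid may be GC'ed;
// otherwise everything up to highestAllocatedTxnId may be (the +1 compensates for the
// strict '<' in the delete statement).
static long commitHighWaterMark(Long lowestOpenTxnId, long highestAllocatedTxnId) {
    return (lowestOpenTxnId != null) ? lowestOpenTxnId : highestAllocatedTxnId + 1;
}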

From source file:org.apache.hadoop.hive.metastore.MyXid.java

@Override
public Database getDatabase(String name) throws NoSuchObjectException, MetaException {
    Database db = null;
    Connection con;
    name = name.toLowerCase();

    try {
        con = getSegmentConnection(name);
    } catch (MetaStoreConnectException e1) {
        LOG.error("get database error, db=" + name + ", msg=" + e1.getMessage());
        throw new NoSuchObjectException(e1.getMessage());
    } catch (SQLException e1) {
        LOG.error("get database error, db=" + name + ", msg=" + e1.getMessage());
        throw new MetaException(e1.getMessage());
    }

    Statement stmt = null;

    try {
        con.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);

        stmt = con.createStatement();
        String sql = "SELECT name, hdfs_schema, description, owner FROM DBS WHERE name='" + name + "'";

        ResultSet dbSet = stmt.executeQuery(sql);
        boolean isDBFind = false;

        while (dbSet.next()) {
            isDBFind = true;
            db = new Database();
            db.setName(dbSet.getString(1));
            db.setHdfsscheme(dbSet.getString(2));
            db.setDescription(dbSet.getString(3));
            db.setOwner(dbSet.getString(4));
            break;
        }

        dbSet.close();

        if (!isDBFind) {
            LOG.error("get database error, db=" + name);
            throw new NoSuchObjectException("database " + name + " does not exist!");
        }
    } catch (SQLException sqlex) {
        sqlex.printStackTrace();
        throw new MetaException(sqlex.getMessage());
    } finally {
        closeStatement(stmt);
        closeConnection(con);
    }

    return db;
}