Example usage for com.amazonaws.services.s3.model S3Object getObjectContent

Introduction

This page lists usage examples for the getObjectContent() method of com.amazonaws.services.s3.model.S3Object.

Prototype

public S3ObjectInputStream getObjectContent() 

Document

Gets the input stream containing the contents of this object.
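
The returned stream holds the underlying HTTP connection open until it is fully read or closed, so it should be consumed and closed promptly. Below is a minimal sketch of typical usage (not taken from the examples that follow); the bucket name, key, and output path are placeholders, and credentials are assumed to come from the default provider chain.

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.S3Object;
import com.amazonaws.services.s3.model.S3ObjectInputStream;

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class GetObjectContentSketch {
    public static void main(String[] args) throws IOException {
        // Assumes credentials and region come from the default provider chain.
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

        // "my-bucket", "my-key", and "my-key.copy" are placeholder values.
        // try-with-resources closes the stream (and the S3Object), which
        // releases the underlying HTTP connection.
        try (S3Object object = s3.getObject("my-bucket", "my-key");
             S3ObjectInputStream content = object.getObjectContent()) {
            Files.copy(content, Paths.get("my-key.copy"), StandardCopyOption.REPLACE_EXISTING);
        }
    }
}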

Usage

From source file:exemplos.S3Sample.java

License:Open Source License

public static void main(String[] args) throws IOException {
    /*
     * This credentials provider implementation loads your AWS credentials
     * from a properties file at the root of your classpath.
     *
     * Important: Be sure to fill in your AWS access credentials in the
     *            AwsCredentials.properties file before you try to run this
     *            sample.
     * http://aws.amazon.com/security-credentials
     */
    AmazonS3 s3 = new AmazonS3Client(new ClasspathPropertiesFileCredentialsProvider());
    Region usWest2 = Region.getRegion(Regions.US_WEST_2);
    s3.setRegion(usWest2);

    String bucketName = "my-first-s3-bucket-" + UUID.randomUUID();
    String key = "MyObjectKey";

    System.out.println("===========================================");
    System.out.println("Getting Started with Amazon S3");
    System.out.println("===========================================\n");

    try {
        /*
         * Create a new S3 bucket - Amazon S3 bucket names are globally unique,
         * so once a bucket name has been taken by any user, you can't create
         * another bucket with that same name.
         *
         * You can optionally specify a location for your bucket if you want to
         * keep your data closer to your applications or users.
         */
        System.out.println("Creating bucket " + bucketName + "\n");
        s3.createBucket(bucketName);

        /*
         * List the buckets in your account
         */
        System.out.println("Listing buckets");
        for (Bucket bucket : s3.listBuckets()) {
            System.out.println(" - " + bucket.getName());
        }
        System.out.println();

        /*
         * Upload an object to your bucket - You can easily upload a file to
         * S3, or upload an InputStream directly if you know the length of
         * the data in the stream. You can also specify your own metadata
         * when uploading to S3, which allows you to set a variety of options
         * like content-type and content-encoding, plus additional metadata
         * specific to your applications.
         */
        System.out.println("Uploading a new object to S3 from a file\n");
        s3.putObject(new PutObjectRequest(bucketName, key, createSampleFile()));

        /*
         * Download an object - When you download an object, you get all of
         * the object's metadata and a stream from which to read the contents.
         * It's important to read the contents of the stream as quickly as
         * possible since the data is streamed directly from Amazon S3 and your
         * network connection will remain open until you read all the data or
         * close the input stream.
         *
         * GetObjectRequest also supports several other options, including
         * conditional downloading of objects based on modification times,
         * ETags, and selectively downloading a range of an object.
         */
        System.out.println("Downloading an object");
        S3Object object = s3.getObject(new GetObjectRequest(bucketName, key));
        System.out.println("Content-Type: " + object.getObjectMetadata().getContentType());
        displayTextInputStream(object.getObjectContent());

        /*
         * List objects in your bucket by prefix - There are many options for
         * listing the objects in your bucket.  Keep in mind that buckets with
         * many objects might truncate their results when listing their objects,
         * so be sure to check if the returned object listing is truncated, and
         * use the AmazonS3.listNextBatchOfObjects(...) operation to retrieve
         * additional results.
         */
        System.out.println("Listing objects");
        ObjectListing objectListing = s3
                .listObjects(new ListObjectsRequest().withBucketName(bucketName).withPrefix("My"));
        for (S3ObjectSummary objectSummary : objectListing.getObjectSummaries()) {
            System.out.println(
                    " - " + objectSummary.getKey() + "  " + "(size = " + objectSummary.getSize() + ")");
        }
        System.out.println();

        /*
         * Delete an object - Unless versioning has been turned on for your bucket,
         * there is no way to undelete an object, so use caution when deleting objects.
         */
        System.out.println("Deleting an object\n");
        s3.deleteObject(bucketName, key);

        /*
         * Delete a bucket - A bucket must be completely empty before it can be
         * deleted, so remember to delete any objects from your buckets before
         * you try to delete them.
         */
        System.out.println("Deleting bucket " + bucketName + "\n");
        s3.deleteBucket(bucketName);
    } catch (AmazonServiceException ase) {
        System.out.println("Caught an AmazonServiceException, which means your request made it "
                + "to Amazon S3, but was rejected with an error response for some reason.");
        System.out.println("Error Message:    " + ase.getMessage());
        System.out.println("HTTP Status Code: " + ase.getStatusCode());
        System.out.println("AWS Error Code:   " + ase.getErrorCode());
        System.out.println("Error Type:       " + ase.getErrorType());
        System.out.println("Request ID:       " + ase.getRequestId());
    } catch (AmazonClientException ace) {
        System.out.println("Caught an AmazonClientException, which means the client encountered "
                + "a serious internal problem while trying to communicate with S3, "
                + "such as not being able to access the network.");
        System.out.println("Error Message: " + ace.getMessage());
    }
}
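
The comments in the sample above note that GetObjectRequest also supports conditional downloads and byte-range requests. The sketch below shows a range download; it is illustrative only, the bucket and key names are placeholder assumptions, and the client again relies on the default credential chain.

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.GetObjectRequest;
import com.amazonaws.services.s3.model.S3Object;
import com.amazonaws.util.IOUtils;

import java.io.IOException;

public class RangeGetSketch {
    public static void main(String[] args) throws IOException {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

        // Fetch only the first 1024 bytes of the object; "my-bucket" and
        // "MyObjectKey" are placeholders. withRange takes an inclusive byte range.
        GetObjectRequest request = new GetObjectRequest("my-bucket", "MyObjectKey")
                .withRange(0, 1023);

        try (S3Object object = s3.getObject(request)) {
            byte[] firstKilobyte = IOUtils.toByteArray(object.getObjectContent());
            System.out.println("Read " + firstKilobyte.length + " bytes");
        }
    }
}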

From source file:fi.yle.tools.aws.maven.SimpleStorageServiceWagon.java

License:Apache License

@Override
protected void getResource(String resourceName, File destination, TransferProgress transferProgress)
        throws TransferFailedException, ResourceDoesNotExistException {
    InputStream in = null;
    OutputStream out = null;
    try {
        S3Object s3Object = this.amazonS3.getObject(this.bucketName, getKey(resourceName));

        in = s3Object.getObjectContent();
        out = new TransferProgressFileOutputStream(destination, transferProgress);

        IoUtils.copy(in, out);
    } catch (AmazonServiceException e) {
        throw new ResourceDoesNotExistException(String.format("'%s' does not exist", resourceName), e);
    } catch (FileNotFoundException e) {
        throw new TransferFailedException(String.format("Cannot write file to '%s'", destination), e);
    } catch (IOException e) {
        throw new TransferFailedException(
                String.format("Cannot read from '%s' and write to '%s'", resourceName, destination), e);
    } finally {
        IoUtils.closeQuietly(in, out);
    }
}

From source file:fr.eurecom.hybris.kvs.drivers.AmazonKvs.java

License:Apache License

public byte[] get(String key) throws IOException {
    try {
        S3Object object = this.s3.getObject(new GetObjectRequest(this.rootContainer, key));
        return ByteStreams.toByteArray(object.getObjectContent());
    } catch (AmazonClientException e) {

        if (e instanceof AmazonS3Exception) {
            AmazonS3Exception as3e = (AmazonS3Exception) e;
            if (as3e.getStatusCode() == HttpStatus.SC_NOT_FOUND)
                return null;
        }

        throw new IOException(e);
    }
}

From source file:fsi_admin.JAwsS3Conn.java

License:Open Source License

private boolean descargarArchivo(HttpServletResponse response, StringBuffer msj, AmazonS3 s3, String S3BUKT,
        String nombre, String destino) {
    //System.out.println("AwsConn DescargarArchivo:" + nombre + ":nombre");

    try {
        System.out.println("DESCARGA BUCKET: " + S3BUKT + " OBJETO: " + nombre);
        S3Object object = s3.getObject(new GetObjectRequest(S3BUKT, nombre));
        //out.println("Content-Type: "  + object.getObjectMetadata().getContentType());
        //System.out.println("Content-Type: "  + object.getObjectMetadata().getContentType());
        byte[] byteArray = IOUtils.toByteArray(object.getObjectContent());
        ByteArrayInputStream bais = new ByteArrayInputStream(byteArray);
        JBajarArchivo fd = new JBajarArchivo();
        fd.doDownload(response, getServletConfig().getServletContext(), bais,
                object.getObjectMetadata().getContentType(), byteArray.length, destino);
        System.out.println("Content-Length: " + object.getObjectMetadata().getContentLength() + " BA: "
                + byteArray.length);
        return true;
    } catch (AmazonServiceException ase) {
        ase.printStackTrace();
        msj.append("Error de AmazonServiceException al descargar archivo de S3.<br>");
        msj.append("Mensaje: " + ase.getMessage() + "<br>");
        msj.append("Código de Estatus HTTP: " + ase.getStatusCode() + "<br>");
        msj.append("Código de Error AWS:   " + ase.getErrorCode() + "<br>");
        msj.append("Tipo de Error:       " + ase.getErrorType() + "<br>");
        msj.append("Request ID:       " + ase.getRequestId());
        return false;
    } catch (AmazonClientException ace) {
        ace.printStackTrace();
        msj.append("Error de AmazonClientException al descargar archivo de S3.<br>");
        msj.append("Mensaje: " + ace.getMessage());
        return false;
    } catch (IOException ace) {
        ace.printStackTrace();
        msj.append("Error de IOException al descargar archivo de S3.<br>");
        msj.append("Mensaje: " + ace.getMessage());
        return false;
    }

}

From source file:gobblin.aws.AWSSdkClient.java

License:Apache License

/***
 * Download an S3 object to a local directory
 *
 * @param s3ObjectSummary S3 object summary for the object to download
 * @param targetDirectory Local target directory to download the object to
 * @throws IOException If any errors were encountered in downloading the object
 */
public void downloadS3Object(S3ObjectSummary s3ObjectSummary, String targetDirectory) throws IOException {

    final AmazonS3 amazonS3 = getS3Client();

    final GetObjectRequest getObjectRequest = new GetObjectRequest(s3ObjectSummary.getBucketName(),
            s3ObjectSummary.getKey());

    final S3Object s3Object = amazonS3.getObject(getObjectRequest);

    final String targetFile = StringUtils.removeEnd(targetDirectory, File.separator) + File.separator
            + s3Object.getKey();
    FileUtils.copyInputStreamToFile(s3Object.getObjectContent(), new File(targetFile));

    LOGGER.info("S3 object downloaded to file: " + targetFile);
}

From source file:gov.usgs.cida.iplover.util.ImageStorage.java

public static byte[] get(String uuid) throws IOException {

    AmazonS3 s3 = prepS3Client();

    String imageKey = KEY_BASE + "/" + uuid + ".jpg";

    S3Object object = s3.getObject(new GetObjectRequest(BUCKET_NAME, imageKey));

    return IOUtils.toByteArray(object.getObjectContent());
}

From source file:hu.mta.sztaki.lpds.cloud.entice.imageoptimizer.iaashandler.amazontarget.Storage.java

License:Apache License

/**
 * @param endpoint S3 endpoint URL
 * @param accessKey Access key
 * @param secretKey Secret key
 * @param bucket Bucket name 
 * @param path Key name of the object to download (path + file name)
 * @param file Local file to download to 
 * @throws Exception On any error
 */
public static void download(String endpoint, String accessKey, String secretKey, String bucket, String path,
        File file) throws Exception {
    AmazonS3Client amazonS3Client = null;
    InputStream in = null;
    OutputStream out = null;
    try {
        AWSCredentials awsCredentials = new BasicAWSCredentials(accessKey, secretKey);
        ClientConfiguration clientConfiguration = new ClientConfiguration();
        clientConfiguration.setMaxConnections(MAX_CONNECTIONS);
        clientConfiguration.setMaxErrorRetry(PredefinedRetryPolicies.DEFAULT_MAX_ERROR_RETRY);
        clientConfiguration.setConnectionTimeout(ClientConfiguration.DEFAULT_CONNECTION_TIMEOUT);
        amazonS3Client = new AmazonS3Client(awsCredentials, clientConfiguration);
        S3ClientOptions clientOptions = new S3ClientOptions().withPathStyleAccess(true);
        amazonS3Client.setS3ClientOptions(clientOptions);
        amazonS3Client.setEndpoint(endpoint);
        S3Object object = amazonS3Client.getObject(new GetObjectRequest(bucket, path));
        in = object.getObjectContent();
        byte[] buf = new byte[BUFFER_SIZE];
        out = new FileOutputStream(file);
        int count;
        while ((count = in.read(buf)) != -1)
            out.write(buf, 0, count);
        out.close();
        in.close();
    } catch (AmazonServiceException x) {
        Shrinker.myLogger.info("download error: " + x.getMessage());
        throw new Exception("download exception", x);
    } catch (AmazonClientException x) {
        Shrinker.myLogger.info("download error: " + x.getMessage());
        throw new Exception("download exception", x);
    } catch (IOException x) {
        Shrinker.myLogger.info("download error: " + x.getMessage());
        throw new Exception("download exception", x);
    } finally {
        if (in != null) {
            try {
                in.close();
            } catch (Exception e) {
            }
        }
        if (out != null) {
            try {
                out.close();
            } catch (Exception e) {
            }
        }
        if (amazonS3Client != null) {
            try {
                amazonS3Client.shutdown();
            } catch (Exception e) {
            }
        }
    }
}

From source file:hydrograph.engine.spark.datasource.utils.AWSS3Util.java

License:Apache License

public void download(RunFileTransferEntity runFileTransferEntity) {
    log.debug("Start AWSS3Util download");

    File filecheck = new File(runFileTransferEntity.getLocalPath());
    if (runFileTransferEntity.getFailOnError())
        if (!(filecheck.exists() && filecheck.isDirectory())
                && !(runFileTransferEntity.getLocalPath().contains("hdfs://"))) {
            throw new AWSUtilException("Invalid local path");
        }
    boolean fail_if_exist = false;
    int retryAttempt = 0;
    int i;
    String amazonFileUploadLocationOriginal = null;
    String keyName = null;
    if (runFileTransferEntity.getRetryAttempt() == 0)
        retryAttempt = 1;
    else
        retryAttempt = runFileTransferEntity.getRetryAttempt();
    for (i = 0; i < retryAttempt; i++) {
        log.info("connection attempt: " + (i + 1));
        try {

            AmazonS3 s3Client = null;
            ClientConfiguration clientConf = new ClientConfiguration();
            clientConf.setProtocol(Protocol.HTTPS);
            if (runFileTransferEntity.getCrediationalPropertiesFile() == null) {
                BasicAWSCredentials creds = new BasicAWSCredentials(runFileTransferEntity.getAccessKeyID(),
                        runFileTransferEntity.getSecretAccessKey());
                s3Client = AmazonS3ClientBuilder.standard().withClientConfiguration(clientConf)
                        .withRegion(runFileTransferEntity.getRegion())
                        .withCredentials(new AWSStaticCredentialsProvider(creds)).build();
            } else {

                File securityFile = new File(runFileTransferEntity.getCrediationalPropertiesFile());

                PropertiesCredentials creds = new PropertiesCredentials(securityFile);

                s3Client = AmazonS3ClientBuilder.standard().withClientConfiguration(clientConf)
                        .withRegion(runFileTransferEntity.getRegion())
                        .withCredentials(new AWSStaticCredentialsProvider(creds)).build();
            }
            String s3folderName = null;
            String filepath = runFileTransferEntity.getFolder_name_in_bucket();
            if (filepath.lastIndexOf("/") != -1) {
                s3folderName = filepath.substring(0, filepath.lastIndexOf("/"));
                keyName = filepath.substring(filepath.lastIndexOf("/") + 1);

            } else {

                keyName = filepath;

            }
            log.debug("keyName is: " + keyName);
            log.debug("bucket name is:" + runFileTransferEntity.getBucketName());
            log.debug("Folder Name is" + runFileTransferEntity.getFolder_name_in_bucket());
            if (s3folderName != null) {
                amazonFileUploadLocationOriginal = runFileTransferEntity.getBucketName() + "/" + s3folderName;
            } else {
                amazonFileUploadLocationOriginal = runFileTransferEntity.getBucketName();
            }
            if (runFileTransferEntity.getLocalPath().contains("hdfs://")) {
                String outputPath = runFileTransferEntity.getLocalPath();
                String s1 = outputPath.substring(7, outputPath.length());
                String s2 = s1.substring(0, s1.indexOf("/"));
                File f = new File("/tmp");
                if (!f.exists())
                    f.mkdir();

                GetObjectRequest request = new GetObjectRequest(amazonFileUploadLocationOriginal, keyName);
                S3Object object = s3Client.getObject(request);
                if (runFileTransferEntity.getEncoding() != null)
                    object.getObjectMetadata().setContentEncoding(runFileTransferEntity.getEncoding());
                File fexist = new File(runFileTransferEntity.getLocalPath() + File.separatorChar + keyName);
                if (runFileTransferEntity.getOverwrite().trim().equalsIgnoreCase("Overwrite If Exists")) {
                    S3ObjectInputStream objectContent = object.getObjectContent();
                    IOUtils.copyLarge(objectContent, new FileOutputStream("/tmp/" + keyName));
                } else {
                    if (!(fexist.exists() && !fexist.isDirectory())) {
                        S3ObjectInputStream objectContent = object.getObjectContent();
                        IOUtils.copyLarge(objectContent, new FileOutputStream(
                                runFileTransferEntity.getLocalPath() + File.separatorChar + keyName));
                    } else {
                        fail_if_exist = true;
                        Log.error("File already exists");
                        throw new AWSUtilException("File already exists");
                    }
                }

                Configuration conf = new Configuration();
                conf.set("fs.defaultFS", "hdfs://" + s2);
                FileSystem hdfsFileSystem = FileSystem.get(conf);

                String s = outputPath.substring(7, outputPath.length());
                String hdfspath = s.substring(s.indexOf("/"), s.length());

                Path local = new Path("/tmp/" + keyName);
                Path hdfs = new Path(hdfspath);
                hdfsFileSystem.copyFromLocalFile(local, hdfs);

            } else {

                GetObjectRequest request = new GetObjectRequest(amazonFileUploadLocationOriginal, keyName);
                S3Object object = s3Client.getObject(request);
                if (runFileTransferEntity.getEncoding() != null)
                    object.getObjectMetadata().setContentEncoding(runFileTransferEntity.getEncoding());
                File fexist = new File(runFileTransferEntity.getLocalPath() + File.separatorChar + keyName);
                if (runFileTransferEntity.getOverwrite().trim().equalsIgnoreCase("Overwrite If Exists")) {
                    S3ObjectInputStream objectContent = object.getObjectContent();
                    IOUtils.copyLarge(objectContent, new FileOutputStream(
                            runFileTransferEntity.getLocalPath() + File.separatorChar + keyName));
                }

                else {
                    if (!(fexist.exists() && !fexist.isDirectory())) {
                        S3ObjectInputStream objectContent = object.getObjectContent();
                        IOUtils.copyLarge(objectContent, new FileOutputStream(
                                runFileTransferEntity.getLocalPath() + File.separatorChar + keyName));
                    } else {
                        fail_if_exist = true;
                        Log.error("File already exists");
                        throw new AWSUtilException("File already exists");
                    }
                }

            }
        }

        catch (AmazonServiceException e) {
            log.error("Amazon Service Exception", e);
            if (e.getStatusCode() == 403 || e.getStatusCode() == 404) {
                if (runFileTransferEntity.getFailOnError()) {
                    Log.error("Incorrect details provided. Please provide correct details", e);
                    throw new AWSUtilException("Incorrect details provided");
                } else {
                    Log.error("Unknown Amazon exception occurred", e);
                }

            }

            {
                try {
                    Thread.sleep(runFileTransferEntity.getRetryAfterDuration());
                } catch (Exception e1) {
                    Log.error("Exception occurred while sleeping the thread");
                }
                continue;
            }

        } catch (Error e) {
            Log.error("Error occurred while downloading");
            throw new AWSUtilException(e);
        } catch (Exception e) {
            log.error("error while transferring file", e);
            try {
                Thread.sleep(runFileTransferEntity.getRetryAfterDuration());
            } catch (Exception e1) {

            } catch (Error err) {
                Log.error("Error occurred while downloading");
                throw new AWSUtilException(err);
            }
            continue;
        }
        done = true;
        break;
    }

    if (runFileTransferEntity.getFailOnError() && !done) {
        log.error("File transfer failed");
        throw new AWSUtilException("File transfer failed");
    } else if (!done) {
        log.error("File transfer failed, but fail on error is set to false");
    }
    if (i == runFileTransferEntity.getRetryAttempt()) {
        if (runFileTransferEntity.getFailOnError()) {
            throw new AWSUtilException("File transfer failed");
        }
    }
    log.debug("Finished AWSS3Util download");
}

From source file:ics.uci.edu.amazons3.S3Sample.java

License:Open Source License

public static void main(String[] args) throws IOException {
    /*
     * This credentials provider implementation loads your AWS credentials
     * from a properties file at the root of your classpath.
     * 
     * Important: Be sure to fill in your AWS access credentials in the
     *            AwsCredentials.properties file before you try to run this
     *            sample.
     * http://aws.amazon.com/security-credentials
     */
    final AmazonS3 s3 = new AmazonS3Client(
            new BasicAWSCredentials("AKIAJTW5BOY6EXOGV2YQ", "PDcnFYIf9Hdo9GsKTEjLXretZ3yEg4mRCDQKjxu6"));

    String bucketName = "my-first-s3-bucket-" + UUID.randomUUID();
    String key = "MyObjectKey";

    System.out.println("===========================================");
    System.out.println("Getting Started with Amazon S3");
    System.out.println("===========================================\n");

    try {
        /*
         * Create a new S3 bucket - Amazon S3 bucket names are globally unique,
         * so once a bucket name has been taken by any user, you can't create
         * another bucket with that same name.
         *
         * You can optionally specify a location for your bucket if you want to
         * keep your data closer to your applications or users.
         */
        System.out.println("Creating bucket " + bucketName + "\n");
        s3.createBucket(bucketName);

        /*
         * List the buckets in your account
         */
        System.out.println("Listing buckets");
        for (Bucket bucket : s3.listBuckets()) {
            System.out.println(" - " + bucket.getName());
        }
        System.out.println();

        /*
         * Upload an object to your bucket - You can easily upload a file to
         * S3, or upload an InputStream directly if you know the length of
         * the data in the stream. You can also specify your own metadata
         * when uploading to S3, which allows you to set a variety of options
         * like content-type and content-encoding, plus additional metadata
         * specific to your applications.
         */
        System.out.println("Uploading a new object to S3 from a file\n");
        s3.putObject(new PutObjectRequest(bucketName, key, createSampleFile()));

        /*
         * Download an object - When you download an object, you get all of
         * the object's metadata and a stream from which to read the contents.
         * It's important to read the contents of the stream as quickly as
         * possible since the data is streamed directly from Amazon S3 and your
         * network connection will remain open until you read all the data or
         * close the input stream.
         *
         * GetObjectRequest also supports several other options, including
         * conditional downloading of objects based on modification times,
         * ETags, and selectively downloading a range of an object.
         */
        System.out.println("Downloading an object");
        S3Object object = s3.getObject(new GetObjectRequest(bucketName, key));
        System.out.println("Content-Type: " + object.getObjectMetadata().getContentType());
        displayTextInputStream(object.getObjectContent());

        /*
         * List objects in your bucket by prefix - There are many options for
         * listing the objects in your bucket.  Keep in mind that buckets with
         * many objects might truncate their results when listing their objects,
         * so be sure to check if the returned object listing is truncated, and
         * use the AmazonS3.listNextBatchOfObjects(...) operation to retrieve
         * additional results.
         */
        System.out.println("Listing objects");
        ObjectListing objectListing = s3
                .listObjects(new ListObjectsRequest().withBucketName(bucketName).withPrefix("My"));
        for (S3ObjectSummary objectSummary : objectListing.getObjectSummaries()) {
            System.out.println(
                    " - " + objectSummary.getKey() + "  " + "(size = " + objectSummary.getSize() + ")");
        }
        System.out.println();

        /*
         * Delete an object - Unless versioning has been turned on for your bucket,
         * there is no way to undelete an object, so use caution when deleting objects.
         */
        System.out.println("Deleting an object\n");
        s3.deleteObject(bucketName, key);

        /*
         * Delete a bucket - A bucket must be completely empty before it can be
         * deleted, so remember to delete any objects from your buckets before
         * you try to delete them.
         */
        System.out.println("Deleting bucket " + bucketName + "\n");
        s3.deleteBucket(bucketName);
    } catch (AmazonServiceException ase) {
        System.out.println("Caught an AmazonServiceException, which means your request made it "
                + "to Amazon S3, but was rejected with an error response for some reason.");
        System.out.println("Error Message:    " + ase.getMessage());
        System.out.println("HTTP Status Code: " + ase.getStatusCode());
        System.out.println("AWS Error Code:   " + ase.getErrorCode());
        System.out.println("Error Type:       " + ase.getErrorType());
        System.out.println("Request ID:       " + ase.getRequestId());
    } catch (AmazonClientException ace) {
        System.out.println("Caught an AmazonClientException, which means the client encountered "
                + "a serious internal problem while trying to communicate with S3, "
                + "such as not being able to access the network.");
        System.out.println("Error Message: " + ace.getMessage());
    }
}

From source file:io.crate.execution.engine.collect.files.S3FileInput.java

License:Apache License

@Override
public InputStream getStream(URI uri) throws IOException {
    if (client == null) {
        client = clientBuilder.client(uri);
    }
    S3Object object = client.getObject(uri.getHost(), uri.getPath().substring(1));

    if (object != null) {
        return object.getObjectContent();
    }
    return null;
}