  • Cloud Life Team

The best methods to move data from an on-premises data center to AWS S3

Amazon S3 is an object storage service that offers scalability, data availability, security, and performance. Use S3 to store and protect any amount of data for a range of use cases, such as data lakes, static websites, mobile applications, data backup, archives, enterprise applications, IoT devices, and big data analytics. S3 stores objects ranging in size from 0 bytes to 5 TB. The question, then, is how to get your data from on premises into S3.

Moving data from an on-premises data center to Amazon S3 (Simple Storage Service) involves planning and understanding the data to be moved. Here are the methods, practices, and data considerations you need to be informed about to achieve a successful migration.

1. Know Your Data: To choose the appropriate migration strategy and tools, it helps to know the data type, the file sizes (minimum and maximum bytes), the total number of files to be moved, and any time constraints. For one or two files, use the console or the CLI from your terminal; for multiple file uploads, choose one of the following methods.

2. Review the Different Migration Strategies: This is the center of this discussion. These are a few common migration strategies:

  • Online Data Transfer: Use a tool like AWS DataSync to transfer data over the internet. DataSync is a fully managed data transfer service designed to automate and accelerate moving large amounts of data between on-premises storage systems and Amazon S3, making it particularly useful for scenarios with many files or large datasets. It offers automatic parallelization, encryption, and scheduling, can copy data into any S3 storage class, and performs data integrity validation to help make sure that your data arrives securely, intact, and ready to use. DataSync can transfer from NFS and SMB file servers, HDFS, and object storage, as well as from other cloud storage systems such as Azure or Google Cloud.
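As a sketch, creating and starting a DataSync transfer from the CLI might look like the following. All ARNs, hostnames, bucket names, and role names here are placeholders; the location ARNs passed to create-task come from the output of the two create-location calls.

```shell
# 1) Register the on-premises NFS share as a source location
aws datasync create-location-nfs \
  --server-hostname nfs.example.internal \
  --subdirectory /exports/data \
  --on-prem-config AgentArns=arn:aws:datasync:us-east-1:123456789012:agent/agent-EXAMPLE

# 2) Register the S3 bucket as the destination
aws datasync create-location-s3 \
  --s3-bucket-arn arn:aws:s3:::my-migration-bucket \
  --s3-config BucketAccessRoleArn=arn:aws:iam::123456789012:role/DataSyncS3Role

# 3) Create the task from the two location ARNs, then start an execution
aws datasync create-task \
  --source-location-arn arn:aws:datasync:us-east-1:123456789012:location/loc-SRC \
  --destination-location-arn arn:aws:datasync:us-east-1:123456789012:location/loc-DST
aws datasync start-task-execution \
  --task-arn arn:aws:datasync:us-east-1:123456789012:task/task-EXAMPLE
```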

  • Offline Data Transfer, AWS Snowball: For large-scale transfers, you can use Snowball devices, which are rugged, secure physical appliances that you ship to AWS for offline transfer. Whether this is practical depends on how much time you have allocated to the transfer, since the device must be shipped in both directions. Use AWS Snowball or AWS Snowball Edge if the data volume is large (typically more than 70 TB), and Snowmobile for data in the petabyte range.
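A quick back-of-envelope calculation helps decide between online and offline transfer. This sketch assumes a decimal conversion of 1 TB = 8,000 gigabits and a fully utilized link, so real-world transfers will take longer:

```shell
# Days to move DATA_TB terabytes over a LINK_GBPS gigabit-per-second link,
# assuming 1 TB = 8,000 Gb (decimal) and 100% link utilization.
DATA_TB=70
LINK_GBPS=1
SECONDS_NEEDED=$(( DATA_TB * 8000 / LINK_GBPS ))
DAYS=$(( SECONDS_NEEDED / 86400 ))
echo "$DAYS"   # whole days at full utilization; 70 TB over 1 Gbps is ~6.5 days
```

If the result stretches into weeks, shipping a Snowball device usually wins.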

  • Storage Gateway: AWS Storage Gateway provides hybrid cloud storage that seamlessly integrates your on-premises environment with S3. It is installed on premises, and S3 can then serve as the storage for your entire system while a cache remains local. Your network bandwidth will be an important consideration if you choose this method.

  • Using the console for single or limited file transfers: This is straightforward: create your bucket, then select Upload. You will need IAM permissions to list the bucket (s3:ListBucket) and put objects (s3:PutObject).
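A minimal identity policy granting those two permissions might look like the following sketch; the bucket name is a placeholder, and note that ListBucket applies to the bucket ARN while PutObject applies to the objects under it:

```shell
# Write a minimal upload policy to a file; "my-migration-bucket" is a placeholder.
cat > /tmp/s3-upload-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::my-migration-bucket"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject"],
      "Resource": "arn:aws:s3:::my-migration-bucket/*"
    }
  ]
}
EOF
# Sanity-check that the document is valid JSON before attaching it
python3 -m json.tool /tmp/s3-upload-policy.json > /dev/null && echo "policy JSON is valid"
```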

  • AWS CLI: The AWS Command Line Interface lets you interact with AWS services, including S3. You can use commands like aws s3 sync or aws s3 cp to transfer files and directories from your on-premises environment to S3. While this approach is ideal for ad-hoc transfers, recurring jobs will require scripting and scheduling with automation tools.
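For example (the local path and bucket name are hypothetical; --dryrun previews the transfer without copying anything):

```shell
# Preview what sync would copy, without transferring anything
aws s3 sync /data/exports s3://my-migration-bucket/exports --dryrun

# Copy a single file
aws s3 cp /data/exports/report.csv s3://my-migration-bucket/exports/

# Run the sync for real once the preview looks right; sync only copies
# files that are new or changed, so it is safe to re-run
aws s3 sync /data/exports s3://my-migration-bucket/exports
```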

  • AWS Data Pipeline: A web service that helps you schedule, automate, and orchestrate the movement and transformation of data between AWS services and on-premises data sources. While it is not solely designed for S3 transfers, you can use it to create pipelines that move data from your on-premises location to S3.

  • AWS SDKs: AWS Software Development Kits (SDKs) are available for various programming languages. They provide APIs and libraries to interact with AWS services programmatically. You can use the SDKs to write custom scripts that automate the process of transferring many files to S3.

  • Custom Scripts and Automation Tools: If you have specific requirements or need fine-grained control over the transfer process, you can develop custom scripts or use automation tools like AWS Lambda, AWS Step Functions, or third-party tools to automate the transfer of many files to S3.

  • AWS Transfer Family: A fully managed, secure transfer service providing SFTP (Secure File Transfer Protocol), FTPS (FTP Secure), and FTP servers for transferring files into and out of Amazon S3. While this service is primarily focused on serving as an SFTP gateway, it can still be used to automate transfers to S3.
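Once a Transfer Family server is set up, any standard SFTP client can push files into the S3-backed directory. The endpoint, key path, and username below are hypothetical placeholders:

```shell
# Connect to a hypothetical Transfer Family SFTP endpoint
sftp -i ~/.ssh/transfer_key migration-user@s-1234567890abcdef0.server.transfer.us-east-1.amazonaws.com

# Once connected, ordinary SFTP commands land files in the S3-backed home directory:
#   put /data/exports/report.csv
#   mput /data/exports/*.csv
```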

  • S3 multipart upload: Multipart uploads can be scaled out beyond a single process on a single machine. For example, a single host can upload a 3.5 GB file to Amazon S3 at 119 MB/s, whereas a coordinated cluster of 10 machines uploaded a 25.8 GB file at an average rate of 715 MB/s (inclusive of the time to coordinate and manage the cluster). Finding ways to scale out and work with high-scale systems like Amazon S3 can significantly improve your ability to manage big data assets and processes.
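The AWS CLI performs multipart uploads automatically above a configurable threshold; the settings below are illustrative values, and the arithmetic sketch shows how the part count follows from the chunk size (S3 caps an upload at 10,000 parts):

```shell
# Illustrative CLI tuning for automatic multipart uploads:
#   aws configure set default.s3.multipart_threshold 64MB
#   aws configure set default.s3.multipart_chunksize 64MB
#   aws configure set default.s3.max_concurrent_requests 20

# Part count = ceil(file size / chunk size); must stay under 10,000 parts.
FILE_MB=25800    # roughly the 25.8 GB file from the example above
CHUNK_MB=64
PARTS=$(( (FILE_MB + CHUNK_MB - 1) / CHUNK_MB ))
echo "$PARTS"    # number of 64 MB parts for a ~25.8 GB object
```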

  • Batch Upload Files to S3 using the CLI: Using this method you can build your own scripts for backing up your files to the cloud and easily retrieve them. This will make automating your backup process faster, more reliable, and more programmatic. You can use this approach to build a scheduled task (or cron job) to handle your backup operations.
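A backup script along these lines could then be scheduled from cron; the bucket name and paths are placeholders:

```shell
# Build a date-stamped destination prefix, e.g. backups/2024-05-01
PREFIX="backups/$(date +%F)"
echo "$PREFIX"

# Sync a local directory into that prefix (uncomment to run against a real bucket):
#   aws s3 sync /data/exports "s3://my-backup-bucket/$PREFIX"

# Example crontab entry to run a wrapper script nightly at 02:00:
#   0 2 * * * /usr/local/bin/s3-backup.sh >> /var/log/s3-backup.log 2>&1
```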

3. Ensure Data Security: Use encryption to protect your data both in transit and at rest. S3 offers server-side encryption options, and you can also use SSL/TLS for data in transit or encrypt on the client side prior to transfer. You can also use bucket policies so that S3 rejects any uploads that are not encrypted.
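As a sketch, a bucket policy that denies unencrypted PutObject requests might look like this. The bucket name is a placeholder, and the condition value could be AES256 (SSE-S3) instead of aws:kms depending on which server-side encryption you require:

```shell
# Write a deny-unencrypted-uploads bucket policy; bucket name is a placeholder.
cat > /tmp/deny-unencrypted.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyUnencryptedUploads",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::my-migration-bucket/*",
      "Condition": {
        "StringNotEquals": { "s3:x-amz-server-side-encryption": "aws:kms" }
      }
    }
  ]
}
EOF
# Sanity-check the JSON before applying it with aws s3api put-bucket-policy
python3 -m json.tool /tmp/deny-unencrypted.json > /dev/null && echo "policy JSON is valid"
```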

4. Know Your Network Bandwidth: If you have a dedicated network connection between your on-premises data center and AWS, you can use AWS Direct Connect for faster and more reliable data transfer. Combined with a VPN, this can also provide the necessary encryption.

5. Data Validation and Testing: While it may seem obvious, test your chosen transfer method on a smaller scale before attempting a full-scale migration, then verify that the migrated test data is consistent and error-free.
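A simple spot-check after a trial sync is to compare object counts and a few checksums; the paths and bucket below are placeholders:

```shell
# Compare local file count with what landed in S3 after a small trial sync
find /data/exports -type f | wc -l
aws s3 ls s3://my-migration-bucket/exports/ --recursive | wc -l

# For per-file integrity, compare checksums. Note the S3 ETag matches the
# local MD5 only for single-part, non-KMS uploads:
#   md5sum /data/exports/report.csv
#   aws s3api head-object --bucket my-migration-bucket \
#     --key exports/report.csv --query ETag
```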

6. Monitor & Optimize: During the migration, monitor the progress and performance of your data transfer. AWS provides monitoring tools like CloudWatch to help you track transfer metrics and detect any issues.

Want to learn more? Here are some “How to” guides. These are provided courtesy of Amazon and can take anywhere from a few minutes to a few hours to follow along.

