Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency and high transfer speeds, all within a developer-friendly environment. CloudFront is integrated with AWS, both with the physical locations that are directly connected to the AWS global infrastructure and with other AWS services. CloudFront works seamlessly with services such as AWS Shield for DDoS mitigation; Amazon S3, Elastic Load Balancing, or Amazon EC2 as origins for your applications; and Lambda@Edge to run custom code closer to your users and customize the user experience. Lastly, if you use AWS origins such as Amazon S3, Amazon EC2, or Elastic Load Balancing, you don't pay for any data transferred between these services and CloudFront.
Amazon CloudFront is a web service that speeds up distribution of your static and dynamic web content, such as .html, .css, .js, and image files, to your users. CloudFront delivers your content through a worldwide network of data centers called edge locations. When a user requests content that you’re serving with CloudFront, the user is routed to the edge location that provides the lowest latency (time delay), so that content is delivered with the best possible performance.

Use cases:

When you deploy resources to an S3 bucket that CloudFront distributes, you need to invalidate those resources so that the edge locations reflect the new changes.
Use the CLI:
aws cloudfront create-invalidation --profile awsProfile --distribution-id distribution_id --paths "/*"
Example: aws cloudfront create-invalidation --profile folauk110 --distribution-id 123321test --paths "/*"
Amazon CloudFront Developer Guide
Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance. This means customers of all sizes and industries can use it to store and protect any amount of data for a range of use cases, such as websites, mobile applications, backup and restore, archive, enterprise applications, IoT devices, and big data analytics. Amazon S3 provides easy-to-use management features so you can organize your data and configure finely-tuned access controls to meet your specific business, organizational, and compliance requirements. Amazon S3 is designed for 99.999999999% (11 9’s) of durability, and stores data for millions of applications for companies all around the world.
Use cases:
Overview of S3
Amazon S3 has a simple web services interface that you can use to store and retrieve any amount of data, at any time, from anywhere on the web.
S3 Bucket
S3 Bucket Restrictions and Limitations
Rules for bucket naming
When you use server-side encryption, Amazon S3 encrypts an object before saving it to disk in its data centers and decrypts it when you download the objects.
Object key (or key name) uniquely identifies the object in a bucket. Object metadata is a set of name-value pairs. You can set object metadata at the time you upload it. After you upload the object, you cannot modify object metadata. The only way to modify object metadata is to make a copy of the object and set the metadata.
The Amazon S3 data model is a flat structure: you create a bucket, and the bucket stores objects. There is no hierarchy of sub buckets or subfolders. However, you can infer logical hierarchy using key name prefixes and delimiters as the Amazon S3 console does. The Amazon S3 console supports a concept of folders.
Amazon S3 supports buckets and objects, and there is no hierarchy in Amazon S3. However, the prefixes and delimiters in an object key name enable the Amazon S3 console and the AWS SDKs to infer hierarchy and introduce the concept of folders.
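The folder view the console builds is just string manipulation on key names. Here is a standalone sketch (the class and method names are my own, not part of the AWS SDK) that groups a flat key listing by its first-level prefix, using "/" as the delimiter:

```java
import java.util.*;

public class KeyHierarchy {
    // Group flat object keys by first-level "folder", mimicking the
    // prefix/delimiter listing that the S3 console performs.
    public static Map<String, List<String>> groupByTopLevelPrefix(List<String> keys) {
        Map<String, List<String>> folders = new TreeMap<>();
        for (String key : keys) {
            int slash = key.indexOf('/');
            // Keys without a delimiter live at the "root" (empty prefix).
            String folder = (slash == -1) ? "" : key.substring(0, slash + 1);
            folders.computeIfAbsent(folder, k -> new ArrayList<>()).add(key);
        }
        return folders;
    }

    public static void main(String[] args) {
        List<String> keys = Arrays.asList(
                "photos/2019/cat.jpg", "photos/2019/dog.jpg", "docs/readme.txt", "index.html");
        System.out.println(groupByTopLevelPrefix(keys));
    }
}
```

Nothing here is hierarchical on the S3 side; the "folders" exist only in how the keys are parsed.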
System Metadata – For each object stored in a bucket, Amazon S3 maintains a set of system metadata. Amazon S3 processes this system metadata as needed. For example, Amazon S3 maintains object creation date and size metadata and uses this information as part of object management.
User-defined Metadata – When uploading an object, you can also assign metadata to the object.

AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
        .withCredentials(new ProfileCredentialsProvider())
        .withRegion(clientRegion)
        .build();

if (!s3Client.doesBucketExistV2(bucketName)) {
    // Because the CreateBucketRequest object doesn't specify a region, the
    // bucket is created in the region specified in the client.
    s3Client.createBucket(new CreateBucketRequest(bucketName));

    // Verify that the bucket was created by retrieving it and checking its location.
    String bucketLocation = s3Client.getBucketLocation(new GetBucketLocationRequest(bucketName));
    System.out.println("Bucket location: " + bucketLocation);
}
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
        .withRegion(clientRegion)
        .build();

// Upload a text string as a new object.
s3Client.putObject(bucketName, stringObjKeyName, "Uploaded String Object");

// Upload a file as a new object with ContentType and title specified.
PutObjectRequest request = new PutObjectRequest(bucketName, fileObjKeyName, new File(fileName));
ObjectMetadata metadata = new ObjectMetadata();
metadata.setContentType("text/plain");
// The SDK prepends "x-amz-meta-" to user metadata keys, so use the bare key name.
metadata.addUserMetadata("title", "someTitle");
request.setMetadata(metadata);
s3Client.putObject(request);
You need to sanitize your key names because they must be presentable as URLs; see the S3 object key naming guidelines for details. So I have a method that strips invalid characters from a file name before creating a key from it.
public static String replaceInvalidCharacters(String fileName) {
    /**
     * Valid characters:
     * letters a-z, A-Z
     * digits 0-9
     * period .
     * underscore _
     * dash -
     */
    String alphaAndDigits = "[^a-zA-Z0-9._-]+";
    // remove invalid characters
    return fileName.replaceAll(alphaAndDigits, "");
}
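The method can be exercised standalone; the KeyNameSanitizer wrapper class below is hypothetical, added only so the snippet compiles on its own:

```java
public class KeyNameSanitizer {
    // Same regex as above: keep letters, digits, period, underscore, and dash.
    public static String replaceInvalidCharacters(String fileName) {
        return fileName.replaceAll("[^a-zA-Z0-9._-]+", "");
    }

    public static void main(String[] args) {
        // Spaces and parentheses are stripped before the name is used as a key.
        System.out.println(replaceInvalidCharacters("my file (1).txt")); // myfile1.txt
    }
}
```

Note that already-clean names pass through unchanged, so the method is safe to apply to every upload.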
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
        .withRegion(clientRegion)
        .withCredentials(new ProfileCredentialsProvider())
        .build();

// Get an object and print its contents.
System.out.println("Downloading an object");
fullObject = s3Client.getObject(new GetObjectRequest(bucketName, key));
System.out.println("Content-Type: " + fullObject.getObjectMetadata().getContentType());
System.out.println("Content: ");
displayTextInputStream(fullObject.getObjectContent());

// Get a range of bytes from an object and print the bytes.
GetObjectRequest rangeObjectRequest = new GetObjectRequest(bucketName, key)
        .withRange(0, 9);
objectPortion = s3Client.getObject(rangeObjectRequest);
System.out.println("Printing bytes retrieved.");
displayTextInputStream(objectPortion.getObjectContent());

// Get an entire object, overriding the specified response headers, and print the object's content.
ResponseHeaderOverrides headerOverrides = new ResponseHeaderOverrides()
        .withCacheControl("No-cache")
        .withContentDisposition("attachment; filename=example.txt");
GetObjectRequest getObjectRequestHeaderOverride = new GetObjectRequest(bucketName, key)
        .withResponseHeaders(headerOverrides);
headerOverrideObject = s3Client.getObject(getObjectRequestHeaderOverride);
displayTextInputStream(headerOverrideObject.getObjectContent());
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
        .withRegion(clientRegion)
        .withCredentials(new ProfileCredentialsProvider())
        .build();

// Set the presigned URL to expire after one hour.
java.util.Date expiration = new java.util.Date();
long expTimeMillis = expiration.getTime();
expTimeMillis += 1000 * 60 * 60;
expiration.setTime(expTimeMillis);

// Generate the presigned URL.
System.out.println("Generating pre-signed URL.");
GeneratePresignedUrlRequest generatePresignedUrlRequest =
        new GeneratePresignedUrlRequest(bucketName, objectKey)
                .withMethod(HttpMethod.GET)
                .withExpiration(expiration);
URL url = s3Client.generatePresignedUrl(generatePresignedUrlRequest);
System.out.println("Pre-Signed URL: " + url.toString());
How to create an S3 bucket from the AWS CLI
aws s3 mb s3://mybucket
Upload files to an S3 bucket
// copy a file to the bucket, setting an Expires header
aws s3 cp test.txt s3://mybucket/test2.txt --expires 2014-10-01T20:30:00Z
// move (upload and delete locally) a single file
aws s3 mv test.txt s3://mybucket/test2.txt
// move all contents of the current directory to mybucket
aws s3 mv . s3://mybucket --recursive
Download files from an S3 bucket
// move (download and delete from S3) a single file
aws s3 mv s3://mybucket/test.txt test2.txt
// download all files in mybucket to a local directory (local_mybucket)
aws s3 mv s3://mybucket local_mybucket --recursive
Sync files
// sync (upload) all files in the current directory to mybucket
aws s3 sync . s3://mybucket
// sync mybucket to mybucket2
aws s3 sync s3://mybucket s3://mybucket2
// download all content of mybucket to the current directory
aws s3 sync s3://mybucket .
// any files existing in the bucket but not in the local directory will be deleted from the bucket
aws s3 sync . s3://mybucket --delete
// all files matching the pattern will be excluded from the sync
aws s3 sync . s3://mybucket --exclude "*.jpg"
List S3 buckets within your account
aws s3api list-buckets
// the --query option filters the output of list-buckets down to only the bucket names
aws s3api list-buckets --query "Buckets[].Name"
List objects in a bucket
aws s3api list-objects --bucket bucketName
// get objects whose keys start with a given prefix (--prefix)
aws s3api list-objects --bucket sidecarhealth-dev-file-form --prefix prefixValue
Amazon ElastiCache is an in-memory data store. It also works as a cache to support the most demanding applications requiring sub-millisecond response times. You no longer need to perform management tasks such as hardware provisioning, software patching, setup, configuration, monitoring, failure recovery, and backups. ElastiCache continuously monitors your clusters to keep your workloads up and running so that you can focus on higher-value application development.
ElastiCache sits between your application and your database. When your application needs to query the database, it first checks ElastiCache; if the data is there, ElastiCache returns it. If the data is not in ElastiCache, the application queries the database. This increases the performance of your application significantly.

Amazon Elasticache Redis
Features to enhance reliability
Use cases for ElastiCache
What should I cache?
Consider caching your data if the following are true:
– It is slow or expensive to acquire when compared to cache retrieval.
– It is accessed with sufficient frequency.
– It is relatively static, or, if rapidly changing, staleness is not a significant issue.

Choose Memcached if:
Choose Redis if:

Caching Strategies
Lazy Loading – Lazy loading is a caching strategy that loads data into the cache only when necessary. ElastiCache is an in-memory key/value store that sits between your application and the data store (database) that it accesses. Whenever your application requests data, it first makes the request to the ElastiCache cache. If the data exists in the cache and is current, ElastiCache returns the data to your application. If the data does not exist in the cache, or the data in the cache has expired, your application requests the data from your data store, which returns the data to your application. Your application then writes the data received from the store to the cache so it can be retrieved more quickly the next time it is requested.

Advantages of Lazy Loading
Disadvantages of Lazy Loading
Write Through – The write-through strategy adds data or updates data in the cache whenever data is written to the database.
Advantages of Write Through
Disadvantages of Write Through
Adding a TTL Strategy – Lazy loading allows for stale data but won't fail with empty nodes. Write-through ensures that data is always fresh but may fail with empty nodes and may populate the cache with superfluous data. By adding a time to live (TTL) value to each write, we are able to enjoy the advantages of each strategy and largely avoid cluttering up the cache with superfluous data. Time to live (TTL) is an integer value that specifies the number of seconds (Redis can specify seconds or milliseconds) until the key expires. When an application attempts to read an expired key, it is treated as though the key is not found, meaning that the database is queried for the key and the cache is updated. This does not guarantee that a value is never stale, but it keeps data from getting too stale and requires that values in the cache are occasionally refreshed from the database.
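The combined lazy-loading-plus-TTL behavior can be illustrated without ElastiCache at all. In this in-process sketch (the TtlCache class is hypothetical, purely for illustration), a read checks the cache first, treats an expired key as a miss, and repopulates from the loader, which stands in for the database:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

public class TtlCache<K, V> {
    private static class Entry<V> {
        final V value;
        final long expiresAtMillis;
        Entry(V value, long expiresAtMillis) {
            this.value = value;
            this.expiresAtMillis = expiresAtMillis;
        }
    }

    private final Map<K, Entry<V>> store = new HashMap<>();
    private final long ttlMillis;

    public TtlCache(long ttlMillis) {
        this.ttlMillis = ttlMillis;
    }

    // Lazy loading: consult the cache first; on a miss or an expired key,
    // fall back to the loader (the "database") and repopulate the cache
    // with a fresh TTL.
    public V get(K key, Function<K, V> loader) {
        Entry<V> entry = store.get(key);
        if (entry != null && entry.expiresAtMillis > System.currentTimeMillis()) {
            return entry.value; // cache hit, still fresh
        }
        V value = loader.apply(key); // cache miss or stale: hit the database
        store.put(key, new Entry<>(value, System.currentTimeMillis() + ttlMillis));
        return value;
    }
}
```

A second read of the same key within the TTL never touches the loader, which is exactly the load reduction the strategy is after.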
Amazon ElastiCache for Memcached is a Memcached-compatible in-memory key-value store service that can be used as a cache or a data store. It delivers the performance, ease-of-use, and simplicity of Memcached. ElastiCache for Memcached is fully managed, scalable, and secure – making it an ideal candidate for use cases where frequently accessed data must be in-memory. It is a popular choice for use cases such as Web, Mobile Apps, Gaming, Ad-Tech, and E-Commerce.
Amazon ElastiCache for Memcached is a great choice for implementing an in-memory cache to decrease access latency, increase throughput, and ease the load off your relational or NoSQL database. Amazon ElastiCache can serve frequently requested items at sub-millisecond response times , and enables you to easily scale for higher loads without growing the costlier backend database layer. Database query results caching, persistent session caching, and full-page caching are all popular examples of caching with ElastiCache for Memcached.
Session stores are easy to create with Amazon ElastiCache for Memcached. Simply use the Memcached hash table, which can be distributed across multiple nodes. Scaling the session store is as easy as adding a node and updating the clients to take advantage of the new node.
Features to enhance reliability:
DynamoDB is a low-latency NoSQL database like MongoDB or Cassandra. It is fully managed by AWS, so you don't have to maintain any servers for your database. There are literally no servers to provision, patch, or manage and no software to install, maintain, or operate. DynamoDB automatically scales tables up and down to adjust for capacity and maintain performance. It supports both document and key-value data models. It uses SSD storage, so it is very fast, and it supports Multi-AZ. DynamoDB global tables replicate your data across multiple AWS Regions to give you fast, local access to data for your globally distributed applications. For use cases that require even faster access with microsecond latency, DynamoDB Accelerator (DAX) provides a fully managed in-memory cache.
As with other AWS services, DynamoDB requires a role or user to have the right privileges for access. Make sure you add DynamoDB full access to the role or user (access keys) your server is using. You can also use a special IAM condition to restrict users so that they can only access their own data.
DynamoDB supports ACID transactions to enable you to build business-critical applications at scale. DynamoDB encrypts all data by default and provides fine-grained identity and access control on all your tables.
DynamoDB provides both provisioned and on-demand capacity modes so that you can optimize costs by specifying capacity per workload or paying for only the resources you consume.
When to use:
a. Gaming
b. User, vehicle, and driver data stores
c. Ads technology
d. Player session history data stores
e. Inventory tracking and fulfillment
f. Shopping carts
g. User transactions
DynamoDB supports two kinds of reads:
1. Eventually consistent read – When data is written to DynamoDB, it takes about one second for the data to propagate across multiple Availability Zones. An eventually consistent read may therefore return slightly stale data; your users are not guaranteed to see a write made one second ago.
2. Strongly consistent read – This returns the most recent, up-to-date data, reflecting all writes that completed successfully before the read. The trade-off is a slower but more consistent read.
DynamoDB Tables
DynamoDB Primary Keys
1. Unique Partition Key – This key must be unique across items or rows, e.g. user_id. The value is fed into a hash function, which determines the physical location where the data is stored.
2. Composite Key (Partition Key + Sort Key) – This is useful for post or comment tables whose rows belong to another entity, e.g. user_id as the partition key and post_timestamp as the sort key. Two or more items can have the same partition key, but they must have different sort keys. These items are stored physically together and sorted by their sort key.
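The way a composite key organizes items can be pictured with plain Java collections. In this illustrative sketch (CompositeKeyTable is a made-up class, not a DynamoDB API), each partition key maps to a TreeMap that keeps items sorted by sort key, mirroring how a Query on user_id returns posts ordered by post_timestamp:

```java
import java.util.*;

public class CompositeKeyTable {
    // partition key -> (sort key -> item), items kept sorted by sort key
    private final Map<String, NavigableMap<String, String>> partitions = new HashMap<>();

    public void put(String partitionKey, String sortKey, String item) {
        partitions.computeIfAbsent(partitionKey, k -> new TreeMap<>()).put(sortKey, item);
    }

    // Return all items for one partition key, in sort-key order --
    // analogous to querying one user's posts ordered by timestamp.
    public Collection<String> query(String partitionKey) {
        return partitions.getOrDefault(partitionKey, new TreeMap<>()).values();
    }
}
```

Using ISO-8601 timestamps as sort keys works here because their lexicographic order matches their chronological order.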
DynamoDB Indexes
Local Secondary Index
– Can only be created when creating your table and you cannot change, remove, or add it later.
– Has the same partition as your original table.
– Has a different sort key from your original table.
– Queries based on the local secondary indexes are faster than regular queries.
Global Secondary Index
– Can add when creating your table or later.
– Has different partition and sort keys.
– Speeds up queries.
DynamoDB with Java. I am using the DynamoDBMapper wrapper.
For local development, I am using a Docker image. There are other ways to install DynamoDB locally.
Run DynamoDB locally on your computer
@Bean
public AmazonDynamoDB amazonDynamoDB() {
    AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard()
            .withCredentials(amazonAWSCredentialsProvider())
            .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(amazonDynamoDBEndpoint, Regions.US_WEST_2.getName()))
            .build();
    return client;
}

@Bean
public DynamoDBMapper dynamoDBMapper() {
    return new DynamoDBMapper(amazonDynamoDB());
}
@DynamoDBTable(tableName = "user")
public class User implements Serializable {

    private static final long serialVersionUID = 1L;

    @DynamoDBHashKey(attributeName = "uuid")
    @DynamoDBAttribute
    private String uuid;

    @DynamoDBAttribute
    private String email;

    @DynamoDBAttribute
    private String firstName;

    @DynamoDBAttribute
    private String lastName;

    @DynamoDBAttribute
    private String phoneNumber;

    @DynamoDBAttribute
    private Date createdAt;

    // setters and getters
}
@Repository
public class UserRepositoryImp implements UserRepository {

    private Logger log = LoggerFactory.getLogger(this.getClass());

    @Autowired
    private AmazonDynamoDB amazonDynamoDB;

    @Autowired
    private DynamoDBMapper dynamoDBMapper;

    @Override
    public User create(User user) {
        user.setUuid(RandomGeneratorUtils.getUserUuid());
        user.setCreatedAt(new Date());
        dynamoDBMapper.save(user);
        return getById(user.getUuid());
    }

    @Override
    public User getById(String id) {
        return dynamoDBMapper.load(User.class, id);
    }

    @Override
    public List<User> getAllUser() {
        PaginatedScanList<User> users = dynamoDBMapper.scan(User.class, new DynamoDBScanExpression());
        return (users != null) ? users.subList(0, users.size()) : null;
    }

    @Override
    public boolean createTable() {
        // check if the table has already been created
        try {
            DescribeTableResult describeTableResult = amazonDynamoDB.describeTable("user");
            if (describeTableResult.getTable() != null) {
                log.debug("user table has been created already!");
                return true;
            }
        } catch (Exception e) {
            // describeTable throws ResourceNotFoundException when the table
            // doesn't exist yet; fall through and create it
        }
        // table hasn't been created, so start a CreateTableRequest
        CreateTableRequest createTableRequest = dynamoDBMapper.generateCreateTableRequest(User.class);
        createTableRequest.withProvisionedThroughput(new ProvisionedThroughput(5L, 5L));
        // create table
        CreateTableResult createTableResult = amazonDynamoDB.createTable(createTableRequest);
        long count = createTableResult.getTableDescription().getItemCount();
        log.debug("item count={}", count);
        return false;
    }
}
Transactions
Here is an example of sending money (balance) from one user to another.
@Override
public boolean tranferBalance(double amount, User userA, User userB) {
    final String USER_TABLE_NAME = "user";
    final String USER_PARTITION_KEY = "userid";
    try {
        // user A: withdraw. The attribute_exists condition is attached to the
        // update itself, because a transaction cannot contain two operations
        // (e.g. a ConditionCheck and an Update) that target the same item.
        HashMap<String, AttributeValue> userAKey = new HashMap<>();
        userAKey.put(USER_PARTITION_KEY, new AttributeValue(userA.getUuid()));
        Map<String, AttributeValue> expressionAttributeValuesA = new HashMap<>();
        expressionAttributeValuesA.put(":balance", new AttributeValue().withN("" + (userA.getBalance() - amount)));
        Update withdrawFromA = new Update().withTableName(USER_TABLE_NAME).withKey(userAKey)
                .withConditionExpression("attribute_exists(" + USER_PARTITION_KEY + ")")
                .withUpdateExpression("SET balance = :balance")
                .withExpressionAttributeValues(expressionAttributeValuesA);
        log.debug("user A setup!");

        // user B: deposit
        HashMap<String, AttributeValue> userBKey = new HashMap<>();
        userBKey.put(USER_PARTITION_KEY, new AttributeValue(userB.getUuid()));
        Map<String, AttributeValue> expressionAttributeValuesB = new HashMap<>();
        expressionAttributeValuesB.put(":balance", new AttributeValue().withN("" + (userB.getBalance() + amount)));
        Update depositToB = new Update().withTableName(USER_TABLE_NAME).withKey(userBKey)
                .withConditionExpression("attribute_exists(" + USER_PARTITION_KEY + ")")
                .withUpdateExpression("SET balance = :balance")
                .withExpressionAttributeValues(expressionAttributeValuesB);
        log.debug("user B setup!");

        // actions: both updates succeed or neither does
        Collection<TransactWriteItem> actions = Arrays.asList(
                new TransactWriteItem().withUpdate(withdrawFromA),
                new TransactWriteItem().withUpdate(depositToB));
        log.debug("actions setup!");

        // transaction request
        TransactWriteItemsRequest withdrawTransaction = new TransactWriteItemsRequest()
                .withTransactItems(actions)
                .withReturnConsumedCapacity(ReturnConsumedCapacity.TOTAL);
        log.debug("transaction request setup!");

        // Execute the transaction and process the result.
        TransactWriteItemsResult transactWriteItemsResult = amazonDynamoDB.transactWriteItems(withdrawTransaction);
        log.debug("consumed capacity={}", ObjectUtils.toJson(transactWriteItemsResult.getConsumedCapacity()));
        return transactWriteItemsResult.getConsumedCapacity() != null;
    } catch (ResourceNotFoundException e) {
        log.error("One of the tables involved in the transaction was not found: " + e.getMessage());
    } catch (InternalServerErrorException e) {
        log.error("Internal server error: " + e.getMessage());
    } catch (TransactionCanceledException e) {
        log.error("Transaction canceled: " + e.getMessage());
    } catch (Exception e) {
        log.error("Exception, msg={}", e.getLocalizedMessage());
    }
    return false;
}
August 5, 2019
Amazon Relational Database Service (Amazon RDS) is a web service that makes it easier to set up, operate, and scale a relational database in the cloud. Amazon RDS takes over many of the difficult or tedious management tasks of a relational database. When you use Amazon RDS, you can choose to use on-demand DB instances or reserved DB instances.
Relational Database Types
What does AWS do with RDS?
DB Instances
A DB instance can contain multiple user-created databases, and you can access it by using the same tools and applications that you use with a stand-alone database instance. You can create and modify a DB instance by using the AWS Command Line Interface, the Amazon RDS API, or the AWS Management Console.
You can select the DB instance that best meets your needs. If your needs change over time, you can change DB instances. DB instance storage comes in three types: Magnetic, General Purpose (SSD), and Provisioned IOPS (PIOPS). They differ in performance characteristics and price, allowing you to tailor your storage performance and cost to the needs of your database.
Security
A security group controls access to a DB instance. It does so by allowing access to IP address ranges or Amazon EC2 instances that you specify.
There are several ways that you can track the performance and health of a DB instance. You can use the free Amazon CloudWatch service to monitor the performance and health of a DB instance; performance charts are shown in the Amazon RDS console. You can subscribe to Amazon RDS events to be notified when changes occur with a DB instance, DB Snapshot, DB parameter group, or DB security group.
Get a list of database instances
aws rds describe-db-instances
Start a database instance
aws rds start-db-instance --db-instance-identifier test-instance
Stop a database instance
aws rds stop-db-instance --db-instance-identifier test-instance
Reboot a database instance
aws rds reboot-db-instance --db-instance-identifier test-instance