Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. API Gateway handles all the tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls, including traffic management, authorization and access control, monitoring, and API version management. API Gateway has no minimum fees or startup costs. You pay only for the API calls you receive and the amount of data transferred out and, with the API Gateway tiered pricing model, you can reduce your cost as your API usage scales.

Amazon API Gateway Developer Guide
August 5, 2019

AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume – there is no charge when your code is not running. With Lambda, you can run code for virtually any type of application or backend service – all with zero administration. Just upload your code and Lambda takes care of everything required to run and scale your code with high availability. You can set up your code to automatically trigger from other AWS services or call it directly from any web or mobile app.
You can use AWS Lambda to run your code in response to events, such as changes to data in an Amazon S3 bucket or an Amazon DynamoDB table; to run your code in response to HTTP requests using Amazon API Gateway; or to invoke your code directly through API calls made with the AWS SDKs. With these capabilities, you can use Lambda to easily build data-processing triggers for AWS services like Amazon S3 and Amazon DynamoDB, process streaming data stored in Kinesis, or create your own back end that operates at AWS scale, performance, and security.
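As a minimal sketch of an event-triggered function, here is an S3-triggered handler; it assumes the aws-lambda-java-events library is on the classpath, and the class name is hypothetical:

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.S3Event;

// Invoked by S3 event notifications; logs the bucket and key of each new object.
public class S3TriggerHandler implements RequestHandler<S3Event, String> {
    public String handleRequest(S3Event event, Context context) {
        event.getRecords().forEach(record ->
                context.getLogger().log("New object: "
                        + record.getS3().getBucket().getName()
                        + "/" + record.getS3().getObject().getKey()));
        return "ok";
    }
}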
When to use Lambda?

You can map individual HTTP methods, such as GET and PUT, to specific Lambda functions. Alternatively, you can add a special method named ANY to map all supported methods (GET, POST, PATCH, DELETE, and so on) to a single Lambda function. When you send an HTTPS request to the API endpoint, the Amazon API Gateway service invokes the corresponding Lambda function. For example, you can create a resource (DynamoDBManager) and define one method (POST) on it, backed by a Lambda function (LambdaFunctionOverHttps); when you call the API through its HTTPS endpoint, Amazon API Gateway invokes the Lambda function. A proxy setup uses the AWS_PROXY integration type, an ANY catch-all method, and a greedy path variable ({proxy+}) to route every request to the function.

In Java, the general form of a Lambda handler is:

outputType handler-name(inputType input, Context context) {
…
}

For example:
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

// RequestClass and ResponseClass are simple POJOs: RequestClass carries the
// firstName and lastName fields read below, and ResponseClass wraps the greeting.
public class HelloPojo implements RequestHandler<RequestClass, ResponseClass> {
    public ResponseClass handleRequest(RequestClass request, Context context) {
        String greetingString = String.format("Hello %s, %s.", request.firstName, request.lastName);
        return new ResponseClass(greetingString);
    }
}
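Since the section above mentions the AWS_PROXY integration type, here is a minimal sketch of a proxy-integration handler; it assumes the aws-lambda-java-events library is on the classpath, and the class name is hypothetical:

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.APIGatewayProxyRequestEvent;
import com.amazonaws.services.lambda.runtime.events.APIGatewayProxyResponseEvent;

// With AWS_PROXY, API Gateway forwards the whole request (method, path, headers,
// body) to the function and expects a proxy-format response in return.
public class ProxyHandler implements RequestHandler<APIGatewayProxyRequestEvent, APIGatewayProxyResponseEvent> {
    public APIGatewayProxyResponseEvent handleRequest(APIGatewayProxyRequestEvent request, Context context) {
        String body = "You called " + request.getHttpMethod() + " " + request.getPath();
        return new APIGatewayProxyResponseEvent()
                .withStatusCode(200)
                .withBody(body);
    }
}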
Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency, high transfer speeds, all within a developer-friendly environment. CloudFront is integrated with AWS – both physical locations that are directly connected to the AWS global infrastructure, as well as other AWS services. CloudFront works seamlessly with services including AWS Shield for DDoS mitigation, Amazon S3, Elastic Load Balancing or Amazon EC2 as origins for your applications, and Lambda@Edge to run custom code closer to customers’ users and to customize the user experience. Lastly, if you use AWS origins such as Amazon S3, Amazon EC2 or Elastic Load Balancing, you don’t pay for any data transferred between these services and CloudFront.
Amazon CloudFront is a web service that speeds up distribution of your static and dynamic web content, such as .html, .css, .js, and image files, to your users. CloudFront delivers your content through a worldwide network of data centers called edge locations. When a user requests content that you’re serving with CloudFront, the user is routed to the edge location that provides the lowest latency (time delay), so that content is delivered with the best possible performance.

Use cases:

When you deploy new files to an S3 bucket that CloudFront distributes, you need to invalidate the cached copies so that CloudFront serves the updated versions instead of stale ones.
Use the CLI:
aws cloudfront --profile awsProfile create-invalidation --distribution-id distribution_id --paths "/*"
Example: aws cloudfront --profile folauk110 create-invalidation --distribution-id 123321test --paths "/*"
Amazon CloudFront Developer Guide
Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance. This means customers of all sizes and industries can use it to store and protect any amount of data for a range of use cases, such as websites, mobile applications, backup and restore, archive, enterprise applications, IoT devices, and big data analytics. Amazon S3 provides easy-to-use management features so you can organize your data and configure finely-tuned access controls to meet your specific business, organizational, and compliance requirements. Amazon S3 is designed for 99.999999999% (11 9’s) of durability, and stores data for millions of applications for companies all around the world.
Use cases:
Overview of S3
Amazon S3 has a simple web services interface that you can use to store and retrieve any amount of data, at any time, from anywhere on the web.
S3 Bucket
S3 Bucket Restrictions and Limitations
Rules for bucket naming
When you use server-side encryption, Amazon S3 encrypts an object before saving it to disk in its data centers and decrypts it when you download the object.
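As a minimal sketch with the SDK for Java, assuming the s3Client, bucketName, objectKey, and fileName variables from the surrounding examples, you can request SSE-S3 (AES-256) encryption on upload:

ObjectMetadata sseMetadata = new ObjectMetadata();
// ask S3 to encrypt the object at rest with S3-managed keys (SSE-S3)
sseMetadata.setSSEAlgorithm(ObjectMetadata.AES_256_SERVER_SIDE_ENCRYPTION);
PutObjectRequest sseRequest = new PutObjectRequest(bucketName, objectKey, new File(fileName))
        .withMetadata(sseMetadata);
s3Client.putObject(sseRequest); // S3 encrypts before writing to disk and decrypts on download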
Object key (or key name) uniquely identifies the object in a bucket. Object metadata is a set of name-value pairs. You can set object metadata at the time you upload it. After you upload the object, you cannot modify object metadata. The only way to modify object metadata is to make a copy of the object and set the metadata.
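For example, a copy-in-place sketch replaces the metadata, assuming the same s3Client, bucketName, and objectKey; the "reviewed" entry is a hypothetical user-defined value:

ObjectMetadata newMetadata = new ObjectMetadata();
newMetadata.setContentType("text/plain");
newMetadata.addUserMetadata("reviewed", "true"); // hypothetical user-defined entry

// Copy the object onto itself; supplying new metadata makes the copy replace it.
CopyObjectRequest copyRequest = new CopyObjectRequest(bucketName, objectKey, bucketName, objectKey)
        .withNewObjectMetadata(newMetadata);
s3Client.copyObject(copyRequest);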
The Amazon S3 data model is a flat structure: you create a bucket, and the bucket stores objects. There is no hierarchy of sub-buckets or subfolders. However, the prefixes and delimiters in an object key name let the Amazon S3 console and the AWS SDKs infer a logical hierarchy and present the concept of folders, as the sketch below shows.
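A minimal sketch, assuming s3Client and bucketName already exist and using a hypothetical "photos/2019/" prefix:

ListObjectsV2Request listRequest = new ListObjectsV2Request()
        .withBucketName(bucketName)
        .withPrefix("photos/2019/") // treat this key prefix as a folder
        .withDelimiter("/");        // roll deeper keys up into common prefixes
ListObjectsV2Result listing = s3Client.listObjectsV2(listRequest);
listing.getCommonPrefixes().forEach(p -> System.out.println("Folder: " + p));
listing.getObjectSummaries().forEach(o -> System.out.println("Object: " + o.getKey()));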
System Metadata – For each object stored in a bucket, Amazon S3 maintains a set of system metadata. Amazon S3 processes this system metadata as needed. For example, Amazon S3 maintains object creation date and size metadata and uses this information as part of object management.
User-defined Metadata – When uploading an object, you can also assign metadata to the object.

AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
        .withCredentials(new ProfileCredentialsProvider())
        .withRegion(clientRegion)
        .build();

if (!s3Client.doesBucketExistV2(bucketName)) {
    // Because the CreateBucketRequest object doesn't specify a region, the
    // bucket is created in the region specified in the client.
    s3Client.createBucket(new CreateBucketRequest(bucketName));

    // Verify that the bucket was created by retrieving it and checking its location.
    String bucketLocation = s3Client.getBucketLocation(new GetBucketLocationRequest(bucketName));
    System.out.println("Bucket location: " + bucketLocation);
}
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
        .withRegion(clientRegion)
        .build();

// Upload a text string as a new object.
s3Client.putObject(bucketName, stringObjKeyName, "Uploaded String Object");

// Upload a file as a new object with ContentType and title specified.
PutObjectRequest request = new PutObjectRequest(bucketName, fileObjKeyName, new File(fileName));
ObjectMetadata metadata = new ObjectMetadata();
metadata.setContentType("text/plain"); // the correct MIME type is text/plain, not plain/text
// The SDK adds the x-amz-meta- prefix automatically, so pass the bare key name.
metadata.addUserMetadata("title", "someTitle");
request.setMetadata(metadata);
s3Client.putObject(request);
You need to sanitize your key names because they must be presentable as URLs; read the Amazon S3 object key naming guidelines for details. The method below strips invalid characters from a file name before a key is created from it.
public static String replaceInvalidCharacters(String fileName) {
    /**
     * Characters kept:<br>
     * letters a-z, A-Z<br>
     * digits 0-9<br>
     * period . <br>
     * underscore _ <br>
     * dash - <br>
     */
    String alphaAndDigits = "[^a-zA-Z0-9._-]+";
    // remove everything outside the allowed set
    String newFileName = fileName.replaceAll(alphaAndDigits, "");
    return newFileName;
}
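For example, replaceInvalidCharacters("my report (v2).pdf") returns "myreportv2.pdf", which is safe to use as an object key.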
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
        .withRegion(clientRegion)
        .withCredentials(new ProfileCredentialsProvider())
        .build();

// Get an object and print its contents.
System.out.println("Downloading an object");
S3Object fullObject = s3Client.getObject(new GetObjectRequest(bucketName, key));
System.out.println("Content-Type: " + fullObject.getObjectMetadata().getContentType());
System.out.println("Content: ");
displayTextInputStream(fullObject.getObjectContent());

// Get a range of bytes from an object and print the bytes.
GetObjectRequest rangeObjectRequest = new GetObjectRequest(bucketName, key)
        .withRange(0, 9);
S3Object objectPortion = s3Client.getObject(rangeObjectRequest);
System.out.println("Printing bytes retrieved.");
displayTextInputStream(objectPortion.getObjectContent());

// Get an entire object, overriding the specified response headers, and print the object's content.
ResponseHeaderOverrides headerOverrides = new ResponseHeaderOverrides()
        .withCacheControl("No-cache")
        .withContentDisposition("attachment; filename=example.txt");
GetObjectRequest getObjectRequestHeaderOverride = new GetObjectRequest(bucketName, key)
        .withResponseHeaders(headerOverrides);
S3Object headerOverrideObject = s3Client.getObject(getObjectRequestHeaderOverride);
displayTextInputStream(headerOverrideObject.getObjectContent());
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
        .withRegion(clientRegion)
        .withCredentials(new ProfileCredentialsProvider())
        .build();

// Set the presigned URL to expire after one hour.
java.util.Date expiration = new java.util.Date();
long expTimeMillis = expiration.getTime();
expTimeMillis += 1000 * 60 * 60;
expiration.setTime(expTimeMillis);

// Generate the presigned URL.
System.out.println("Generating pre-signed URL.");
GeneratePresignedUrlRequest generatePresignedUrlRequest =
        new GeneratePresignedUrlRequest(bucketName, objectKey)
                .withMethod(HttpMethod.GET)
                .withExpiration(expiration);
URL url = s3Client.generatePresignedUrl(generatePresignedUrlRequest);
System.out.println("Pre-Signed URL: " + url.toString());
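Anyone who receives the pre-signed URL can then fetch the object with a plain HTTP GET, for example with curl or a browser, until the expiration time passes; no AWS credentials are required.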
How to create an S3 bucket from the AWS CLI
aws s3 mb s3://mybucket
Upload files to an S3 bucket
aws s3 cp test.txt s3://mybucket/test2.txt --expires 2014-10-01T20:30:00Z
aws s3 mv test.txt s3://mybucket/test2.txt
// upload all contents within the current directory to mybucket
aws s3 mv . s3://mybucket --recursive
Download files from an S3 bucket
aws s3 mv s3://mybucket/test.txt test2.txt
// download all files in mybucket to a local directory (local_mybucket)
aws s3 mv s3://mybucket local_mybucket --recursive
Sync files
// upload (sync) all files within the current directory to mybucket
aws s3 sync . s3://mybucket
// sync mybucket to mybucket2
aws s3 sync s3://mybucket s3://mybucket2
// download all content of mybucket to the current directory (sync is recursive by default)
aws s3 sync s3://mybucket .
// any files existing in the bucket but not in the local directory will be deleted
aws s3 sync . s3://mybucket --delete
// all files matching the pattern, whether in S3 or local, will be excluded from the sync
aws s3 sync . s3://mybucket --exclude "*.jpg"
List S3 buckets within your account
aws s3api list-buckets
// the --query option filters the output of list-buckets down to only the bucket names
aws s3api list-buckets --query "Buckets[].Name"
List objects in an S3 bucket
aws s3api list-objects --bucket bucketName
// get objects whose keys start with a given prefix (--prefix)
aws s3api list-objects --bucket sidecarhealth-dev-file-form --prefix prefixValue
Amazon ElastiCache is a fully managed in-memory data store. It also works as a cache to support the most demanding applications requiring sub-millisecond response times. You no longer need to perform management tasks such as hardware provisioning, software patching, setup, configuration, monitoring, failure recovery, and backups. ElastiCache continuously monitors your clusters to keep your workloads up and running so that you can focus on higher-value application development.
ElastiCache sits between your application and your database. When your application needs to query the database, it first checks ElastiCache; if the data is cached, ElastiCache returns it directly. If the data is not in ElastiCache, the application queries the database. Serving repeated reads from memory this way increases the performance of your application significantly.

Amazon ElastiCache for Redis
Features to enhance reliability
When to use ElastiCache
What should I cache?
Consider caching your data if the following is true: it is slow or expensive to acquire compared to cache retrieval; it is accessed with sufficient frequency; and it is relatively static, or, if it changes rapidly, staleness is not a significant issue.

Choose Memcached if:
Choose Redis if:

Caching Strategies
Lazy Loading – Lazy loading is a caching strategy that loads data into the cache only when necessary. ElastiCache is an in-memory key/value store that sits between your application and the data store (database) that it accesses. Whenever your application requests data, it first makes the request to the ElastiCache cache. If the data exists in the cache and is current, ElastiCache returns the data to your application. If the data does not exist in the cache, or the data in the cache has expired, your application requests the data from your data store, which returns the data to your application. Your application then writes the data received from the store to the cache so it can be retrieved more quickly the next time it is requested, as in the sketch below.
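A minimal lazy-loading sketch using the Jedis Redis client; the endpoint, key scheme, TTL, and the fetchUserFromDatabase helper are assumptions for illustration:

import redis.clients.jedis.Jedis;

public class LazyLoadingCache {
    private static final int TTL_SECONDS = 300;
    private final Jedis cache = new Jedis("my-cluster.example.cache.amazonaws.com", 6379);

    public String getUser(String userId) {
        String key = "user:" + userId;
        String cached = cache.get(key);
        if (cached != null) {
            return cached;                             // cache hit: served from memory
        }
        String fromDb = fetchUserFromDatabase(userId); // cache miss: query the database
        cache.setex(key, TTL_SECONDS, fromDb);         // populate the cache for next time
        return fromDb;
    }

    private String fetchUserFromDatabase(String userId) {
        return "{\"id\":\"" + userId + "\"}"; // placeholder for a real database query
    }
}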

Advantages of Lazy Loading
Disadvantages of Lazy Loading
Write Through – The write-through strategy adds data to, or updates data in, the cache whenever data is written to the database; see the sketch after this paragraph.
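A minimal write-through sketch in the same style as the lazy-loading example above, reusing its hypothetical cache client; writeUserToDatabase is likewise a hypothetical DAO call:

public void saveUser(String userId, String userJson) {
    writeUserToDatabase(userId, userJson);                // 1. write to the system of record
    cache.setex("user:" + userId, TTL_SECONDS, userJson); // 2. update the cache on every write
}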
Advantages of Write Through
Disadvantages of Write Through
Adding a TTL Strategy – Lazy loading allows for stale data but won't fail with empty nodes. Write-through ensures that data is always fresh but may fail with empty nodes and may populate the cache with superfluous data. By adding a time to live (TTL) value to each write, you can enjoy the advantages of each strategy and largely avoid cluttering the cache with superfluous data. Time to live is an integer value that specifies the number of seconds until the key expires (Redis can specify seconds or milliseconds). When an application attempts to read an expired key, the key is treated as not found: the database is queried for the key and the cache is updated. This does not guarantee that a value is never stale, but it keeps data from getting too stale and requires that values in the cache be occasionally refreshed from the database.
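As a sketch with the Jedis client used above (the key, TTL values, and jedis/userJson variables are arbitrary assumptions):

jedis.setex("user:42", 300, userJson);       // key expires after 300 seconds
jedis.psetex("user:42", 300_000L, userJson); // Redis can also express the TTL in milliseconds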
Amazon ElastiCache for Memcached is a Memcached-compatible in-memory key-value store service that can be used as a cache or a data store. It delivers the performance, ease-of-use, and simplicity of Memcached. ElastiCache for Memcached is fully managed, scalable, and secure – making it an ideal candidate for use cases where frequently accessed data must be in-memory. It is a popular choice for use cases such as Web, Mobile Apps, Gaming, Ad-Tech, and E-Commerce.
Amazon ElastiCache for Memcached is a great choice for implementing an in-memory cache to decrease access latency, increase throughput, and ease the load off your relational or NoSQL database. Amazon ElastiCache can serve frequently requested items at sub-millisecond response times, and enables you to easily scale for higher loads without growing the costlier backend database layer. Database query results caching, persistent session caching, and full-page caching are all popular examples of caching with ElastiCache for Memcached.
Session stores are easy to create with Amazon ElastiCache for Memcached. Simply use the Memcached hash table, which can be distributed across multiple nodes. Scaling the session store is as easy as adding a node and updating the clients to take advantage of the new node.
Features to enhance reliability: