AWS Re:Invent 2019 - Day 3
Four sessions in one day

Day #3

The fact that AWS Re:Invent is spread out over 5 separate casinos makes it really challenging to get to the content you want to see. The alternatives are to either book sessions in only one hotel, which is fine if all the sessions you want to see are there and you can get reserved seating, or wear yourself out trekking between casinos using the monorail, the conference shuttles, or walking… I chose the latter. First session at the Mirage, then off to the Aria for the second and third sessions (with a quick bite to eat in between), then the MGM Grand for the final session of the day. Unfortunately, my laptop ran out of battery in the middle of the third session, so my summary of the last 2 sessions will be from memory rather than well-structured notes.

Building AWS IoT applications using .Net (Chalk talk)

Knowing that our mission at Telstra Purple is to become a sought-after partner and leader in IoT services, I felt it relevant to check out what that means for a .Net developer. Chalk talks are a less formal, more interactive kind of session. The two speakers did a very quick run-through of the basics of IoT and of AWS IoT Core. They then discussed how many large enterprises that have invested heavily in .Net want to leverage that investment when moving into IoT. They also spoke about the two separate ways of authenticating IoT devices: X.509 certificates and SigV4-signed requests to AWS. They then discussed the various protocol options when using IoT on AWS: MQTT, HTTPS, and MQTT over WebSockets. They admitted that there is currently no device SDK for C#; they seemed to suggest they were working on one, but the code seemed simple enough even without a dedicated device SDK. This seems to be a common theme at AWS: Python and Node.js get a lot of love, and .Net… not so much.
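To give a feel for how simple that code can be, here is a minimal sketch of publishing a telemetry message from .Net without a device SDK, using the general-purpose AWS SDK for .NET (the AWSSDK.IotData package). Note that this path authenticates with SigV4-signed requests rather than X.509 certificates, and the endpoint URL, topic name, and payload shape are placeholders I've assumed for the example.

```csharp
// Minimal sketch: publishing a telemetry message to AWS IoT Core from .Net
// without a dedicated device SDK, using the AWSSDK.IotData package.
// The endpoint URL and topic name below are placeholders, not real values.
using System.IO;
using System.Text;
using System.Threading.Tasks;
using Amazon.IotData;
using Amazon.IotData.Model;

public class TelemetryPublisher
{
    // The data endpoint is specific to your account and region (see the IoT console).
    private static readonly AmazonIotDataClient Client = new AmazonIotDataClient(
        new AmazonIotDataConfig { ServiceURL = "https://example-ats.iot.ap-southeast-2.amazonaws.com" });

    public static async Task PublishReadingAsync(double temperature)
    {
        var payload = Encoding.UTF8.GetBytes($"{{\"temperature\": {temperature}}}");

        await Client.PublishAsync(new PublishRequest
        {
            Topic = "devices/sensor-01/telemetry", // placeholder topic
            Qos = 1,                               // at-least-once delivery
            Payload = new MemoryStream(payload)
        });
    }
}
```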

Building modern APIs with GraphQL (Session)

It’s hard to believe that it’s 2019 and, while I have heard of GraphQL, I had not, until now, really taken a good look at it or understood its value proposition. This session was pretty much just an introduction to GraphQL. Key takeaways:

Scalable serverless event-driven applications using Amazon SQS and Lambda

Since AWS announced the addition of SQS as an event source for Lambda, it has become the most popular way of invoking Lambda. This is not surprising, as there are numerous advantages to using an event-broker-style pattern in your applications. Add to this the ability for API Gateway to write directly to SQS, and the story is very compelling if you are able to make your API asynchronous. The talk discussed how pollers are provisioned to deliver messages to your Lambda, and how messages can be batched to improve the performance of your Lambda's logic (see the sketch below).
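To make the batching point concrete, here is a minimal sketch of an SQS-triggered Lambda handler in C#, using the Amazon.Lambda.SQSEvents package. The namespace, function name, and business logic are assumptions for illustration, and the event source mapping from the queue to the function is configured separately.

```csharp
// Minimal sketch of an SQS-triggered Lambda handler in C#.
// The poller can deliver up to a configured batch size of messages per
// invocation, so the handler receives a batch rather than a single message.
using System.Threading.Tasks;
using Amazon.Lambda.Core;
using Amazon.Lambda.SQSEvents;

[assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.Json.JsonSerializer))]

namespace OrderProcessor
{
    public class Function
    {
        public async Task HandleAsync(SQSEvent sqsEvent, ILambdaContext context)
        {
            foreach (var record in sqsEvent.Records)
            {
                context.Logger.LogLine($"Processing message {record.MessageId}");
                await ProcessAsync(record.Body);
            }
        }

        private Task ProcessAsync(string messageBody)
        {
            // Placeholder for the application's business logic.
            return Task.CompletedTask;
        }
    }
}
```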

Serverless APIs at Scale with AWS (Session)

The final session of the day was about running serverless at truly huge scale. Most people have swallowed the serverless mantra whole: you never have to worry about a scaling strategy because Lambda will keep scaling as your load increases. This is true… to a point, and it is interesting to know where that point is, why it exists, and how the issues manifest. For many serverless applications these limits may never be reached, especially not in Dev, where it is often hard to generate as much load as you might see in production. The moral of the story is that it is still important to load test your serverless applications.

The first point at which you may start to see issues is the Lambda concurrency limit. This is set to 1,000 concurrent invocations per account per region. It's important to note that this is across ALL Lambdas, not for each individual Lambda. If you are running 100 different Lambdas, it isn't hard to see how this limit might be hit unexpectedly. It is also important to keep the burst limits in mind, especially if you have spiky loads. These limits are what AWS like to call "soft" limits, and it is possible to request an increase.
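As a small aside (not from the talk), the account-level limit can also be inspected programmatically, which is handy for confirming that a requested increase has actually landed in a given region. A minimal sketch using the AWS SDK for .NET (AWSSDK.Lambda):

```csharp
// Minimal sketch: inspecting the account-level Lambda concurrency limits
// with the AWS SDK for .NET (AWSSDK.Lambda).
using System;
using System.Threading.Tasks;
using Amazon.Lambda;
using Amazon.Lambda.Model;

public static class ConcurrencyCheck
{
    public static async Task Main()
    {
        using (var client = new AmazonLambdaClient())
        {
            var settings = await client.GetAccountSettingsAsync(new GetAccountSettingsRequest());

            // ConcurrentExecutions is the per-region cap across ALL functions in the account;
            // UnreservedConcurrentExecutions is what remains after any reserved concurrency.
            Console.WriteLine($"Concurrent executions limit: {settings.AccountLimit.ConcurrentExecutions}");
            Console.WriteLine($"Unreserved concurrency:      {settings.AccountLimit.UnreservedConcurrentExecutions}");
        }
    }
}
```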

The next issue mentioned was that other AWS services have their own throttling limits. For example, if you are getting connection strings or passwords from AWS Secrets Manager, it is important to know that this service has a request limit of 1,000 requests per second. If you are calling it on every invocation of your Lambda, it will throttle you once you go beyond that limit. The interesting thing that happens here is that, because the AWS SDKs are designed to handle throttling errors, they will automatically retry with back-off. This means that rather than seeing errors, the problem will likely manifest as increased latency in your Lambda. The solution to this dilemma is to declare things like AWS service clients and database connections outside the handler code. They will then be created on Lambda cold start and remain available for subsequent requests whenever that instance is reused. In C# this is equivalent to declaring the objects as static (see the sketch below).
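A minimal sketch of that pattern in C#, assuming a hypothetical secret name and handler shape: the Secrets Manager client and the retrieved connection string live in static fields, so only the first invocation on each Lambda instance pays the cost of the GetSecretValue call.

```csharp
// Minimal sketch of caching an AWS client and a secret outside the handler,
// so they are created once per Lambda instance (on cold start) and reused on
// warm invocations instead of hitting Secrets Manager on every request.
// The secret name below is a placeholder.
using System.Threading.Tasks;
using Amazon.Lambda.Core;
using Amazon.SecretsManager;
using Amazon.SecretsManager.Model;

namespace ApiHandler
{
    public class Function
    {
        // Created once per container and shared across invocations.
        private static readonly IAmazonSecretsManager SecretsClient = new AmazonSecretsManagerClient();
        private static string _connectionString;

        public async Task<string> HandleAsync(object input, ILambdaContext context)
        {
            // Only the first invocation on this instance pays for the Secrets Manager call.
            if (_connectionString == null)
            {
                var response = await SecretsClient.GetSecretValueAsync(new GetSecretValueRequest
                {
                    SecretId = "prod/my-api/db-connection" // placeholder secret name
                });
                _connectionString = response.SecretString;
            }

            // ... use _connectionString to talk to the database ...
            return "ok";
        }
    }
}
```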

Another potential problem is downstream services (such as a relational database, or another HTTP-based service) that do not scale infinitely the way Lambda (sort of) does. At that point the downstream service becomes a bottleneck for your API. The main resolution is to manage the concurrency by making the API calls asynchronous. The talk discussed a few different options for doing this, including simply adding requests to an SQS queue for fire-and-forget APIs, or adding a polling mechanism if a response is required (see the sketch below).
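Here is a minimal sketch of the fire-and-forget option in C# using the AWSSDK.SQS package; the queue URL and request shape are assumptions for illustration. The handler enqueues the work and returns a message id the caller could later poll on, while a separate consumer drains the queue at a rate the downstream service can handle.

```csharp
// Minimal sketch of the fire-and-forget option: instead of calling the
// constrained downstream service synchronously, the API handler drops the
// request onto an SQS queue and returns immediately. The queue URL is a placeholder.
using System.Threading.Tasks;
using Amazon.SQS;
using Amazon.SQS.Model;

namespace AsyncApi
{
    public class OrderApi
    {
        private static readonly IAmazonSQS Sqs = new AmazonSQSClient();

        private const string QueueUrl =
            "https://sqs.ap-southeast-2.amazonaws.com/123456789012/orders"; // placeholder

        // Enqueue the work and hand back an id the caller could poll on later.
        public async Task<string> SubmitOrderAsync(string orderJson)
        {
            var response = await Sqs.SendMessageAsync(new SendMessageRequest
            {
                QueueUrl = QueueUrl,
                MessageBody = orderJson
            });

            return response.MessageId; // a separate consumer drains the queue at a controlled rate
        }
    }
}
```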

Telco and Media ANZ dinner

Much of the benefit of attending Re:Invent is in the connections you make and the conversations you have with AWS people, and other customers/partners. Dinners like this are a great way to meet the AWS people responsible for looking after your market segment, and to relate some challenges to them, and work through solutions. It was also a great opportunity to try some amazing Japanese cuisine at one of the Venetian’s restaurants.

*****
Written by Scott Baldwin on 04 December 2019