Day 1 - First Day of Conference Content
RDS Telstra dedicated session
My first session was a special dedicated session targeted at Telstra and our use of RDS. It consisted of me, two internal Telstra executives, and eight people from the RDS team. We did a round of introductions, and then they began to dive deep into some of the issues Telstra has. I had never been exposed to any of our internal applications, so it was a real revelation to learn the extent of our data footprint: the sheer number of databases, spanning many different database engines, and the enormous volume of data we store. In retrospect, it shouldn’t have been that surprising. A telco the size and age of Telstra would inevitably have many different applications and teams that maintain them. The real challenge as we move towards our goal of 90% of workloads running in the public cloud is inevitably the cost. The AWS team offered some suggestions and promised to do a deep dive with the relevant engineering groups to understand the various challenges.
They did have one really big announcement for us, made public literally minutes before the meeting: RDS will add Db2 to its list of supported database engines. This is great news for big enterprise clients that have data in Db2. As usual, there are some caveats; for instance, while Java-based stored procedures are supported, RDS won’t initially support COBOL stored procedures.
Meeting some of the Versent team
We were able to meet a few people from Telstra’s new acquisition, Versent. Talking to them confirmed my feeling that we are capable of doing some pretty awesome things together in the future, and I’m personally looking forward to getting to know them better.
Build production-ready serverless .Net apps with AWS Lambda
My first proper session covered a topic very dear to my heart, as can be seen from my 3-part series (1, 2, 3) on this very topic. I was keen to see if they had anything more to add. They covered many of the things I mention in my series, e.g. tracing, logging, deploying, etc. However, they did dive a bit into AWS Powertools for Lambda (.Net), something that has been on my radar to start making use of. The speaker also dove into running Native AOT (Ahead-of-Time compilation) for scenarios where performance is critical and cold starts need to be minimized. Of course, there are some challenges and drawbacks that make it harder to develop (e.g. reflection-based JSON serializers don’t work unless you use source-generated serializers).
One thing discussed that isn’t mentioned in my posts is Provisioned Concurrency, which can actually work out cheaper, so long as you’re utilizing more than 60% of the provisioned capacity.
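To get a feel for that 60% figure, here is a minimal back-of-the-envelope sketch of the break-even point. The per-GB-second prices below are placeholders I’ve assumed for illustration, not quoted from the session; check the Lambda pricing page for your region before relying on the numbers.

```python
# Illustrative break-even check for Lambda Provisioned Concurrency.
# All prices are assumed placeholders, not authoritative AWS rates.

ON_DEMAND_PER_GB_S = 0.0000166667             # on-demand duration price
PROVISIONED_PER_GB_S = 0.0000041667           # flat fee for warm capacity
PROVISIONED_DURATION_PER_GB_S = 0.0000097222  # duration price while provisioned

def cost_on_demand(gb, busy_seconds):
    """Cost if every invocation runs purely on-demand."""
    return gb * busy_seconds * ON_DEMAND_PER_GB_S

def cost_provisioned(gb, provisioned_seconds, busy_seconds):
    """Flat fee for keeping capacity warm, plus the cheaper duration rate."""
    return gb * (provisioned_seconds * PROVISIONED_PER_GB_S
                 + busy_seconds * PROVISIONED_DURATION_PER_GB_S)

# One 1 GB function, provisioned for a full 30-day month:
month_seconds = 30 * 24 * 3600
for utilization in (0.4, 0.6, 0.8):
    busy = month_seconds * utilization
    od = cost_on_demand(1, busy)
    pc = cost_provisioned(1, month_seconds, busy)
    print(f"{utilization:.0%} utilized: on-demand ${od:.2f} vs provisioned ${pc:.2f}")
```

With these assumed rates, the two curves cross right around 60% utilization: below it, on-demand is cheaper; above it, provisioned concurrency wins.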
Architecting resilient highly available .Net Workloads - Chalk Talk
Sticking with the .Net theme, next was a chalk talk. On the whole, the conclusion I came away with was that, aside from some tooling around migrating legacy .Net Framework apps to .Net Core and breaking big .Net monoliths into microservices, the patterns for high availability and resilience are very much language independent.
Concepts:
- Test your resilience posture (don’t assume anything)
- Automate everything
- Resiliency, like security, is a “Shared Responsibility” model between AWS and you.
- Use Managed Services where possible (e.g. RDS over SQL Server hosted on EC2)
- Retries, and graceful degradation
- Use Durable messages where possible (i.e. de-couple components using SQS)
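The retries and graceful degradation point above is indeed language independent; here is a minimal, hypothetical sketch of the pattern (the function names and parameters are my own, not from the talk): retry a flaky dependency with exponential backoff and jitter, and fall back to a default value rather than failing outright.

```python
import random
import time

def call_with_retries(fn, *, attempts=3, base_delay=0.1, fallback=None):
    """Retry a flaky call with exponential backoff and jitter;
    degrade gracefully by returning a fallback instead of raising."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                break
            # Full jitter: sleep a random amount up to the backoff ceiling.
            time.sleep(random.uniform(0, base_delay * 2 ** attempt))
    return fallback  # graceful degradation: serve a default or cached value

# Example: a dependency that fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

print(call_with_retries(flaky))  # succeeds on the third attempt
```

The jitter matters: without it, many clients retrying in lockstep can re-create the very load spike that caused the failure.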
Delivering low-latency applications at the edge
This session was essentially a re-cap on the capabilities and differences between AWS Local Zones and AWS Outposts.
Monday Night Live with Peter DeSantis
The Monday night keynote started off with a really good support band, and I felt really welcome when they played “I come from a Land Down Under”. A few key announcements from the keynote:
- Amazon Aurora Limitless Database: autoscaling beyond the 256GB database size of Aurora Serverless. To achieve this, AWS uses AI to automatically shard your database across instances. This was only made possible by significant improvements in server clock synchronization, allowing “wall clock” time to ensure time-ordered events across servers; AWS has created dedicated hardware to achieve this.
- Amazon Elasticache Serverless: I’ve been waiting for this for a long time: a serverless cache that monitors your workload and scales in and out with its requirements.
- Improvements to Redshift Serverless: the underlying virtualization engine (code-named “Caspian”) that enables Aurora Serverless, and now Elasticache Serverless, has been extended to fine-tune the autoscaling capabilities of Redshift. Coupled with some AI analysis of queries, this means a large query won’t slow down everyone else running simple (normally fast) queries.
I am a big fan of all things serverless, so this was a really exciting keynote for me. Of course, the devil is always in the detail. AWS has for a while now been contributing to the misuse of the word serverless, applying it to services that don’t scale to zero (as a true serverless offering should), for example Aurora Serverless V2. Aurora Serverless V1 can scale to zero, a feature I make extensive use of to save money, but it does mean you have a significant cold start time. Of course, “really good auto-scaling” doesn’t sound anywhere near as good as serverless.