.NET Technical Lead
As a Tech Lead, I ensure that .NET developers push quality
code to Production
ASP.NET Core
Project Templates
Blazor / Angular
Identity Server, OAuth2, OpenID Connect, JWT
Resilience / Health
Entity Framework, Dapper, Marten
Dependency Injection / configuration
Serilog / Fluentd / Kibana
OpenTelemetry
xUnit, NUnit, MSTest
Active Directory, LDAP Integration
Micro-services
Docker, Kubernetes, Helm
EKS / Rancher
Service Fabric
DevOps
CI/CD & GitOps
Azure Pipelines
GitHub Actions
Front-end Development
Modern Angular / TypeScript
Project Templates / Code Generators
Tailwind CSS / Material Design
End to End test automation
Database Development
Deep experience in SQL Server and PostgreSQL
Database design
Database source control
MongoDB high throughput
DR/HA configuration
Redis Caching Strategies
Database indexing and performance tuning
S3 Data Lake architecture
Event Driven Architecture
Apache Kafka / RabbitMQ
I also have Java skills
Handy when .NET and Java teams integrate on larger projects.
Spring Boot / MVC / IOC
SOAP / REST Web Services / Retrofit
JPA / Hibernate
JUnit
Log4J
LDAP / Active Directory Integration
Kafka Streams
Professional Profile
I started
programming at the age of 14, and received a
national award in the same year.
Today I am a customer-focused Technical Lead and
Solutions Architect at Absa CIB in Cape Town. I have a passion
for
scalable, maintainable and well-architected solutions
that deliver business value early on. The detail is
important to me.
The
developer productivity of my team is very important to
me. For any given project, I lay down the rails so that they can
follow the path to success in a natural and intuitive way. These
often include project templates and
code generators. Who wants to write front-end DTOs by
hand if a code generator on the back-end could do it in a few
seconds? And who starts from scratch if a project template could
generate a starter kit, including the back-end, front-end and
CI/CD pipeline yaml?
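As an illustration of the back-end-driven code generation described above, here is a minimal sketch in TypeScript. The schema shape and field names are hypothetical; a real generator would typically read reflection metadata or an OpenAPI document from the back-end rather than a hand-written schema.

```typescript
// Minimal sketch of a DTO code generator: turns a simple schema
// description into a TypeScript interface. A real generator would
// read reflection metadata or an OpenAPI spec from the back-end.
type FieldType = "string" | "number" | "boolean";

interface DtoSchema {
  name: string;
  fields: Record<string, FieldType>;
}

function generateDto(schema: DtoSchema): string {
  const lines = Object.entries(schema.fields).map(
    ([field, type]) => `  ${field}: ${type};`
  );
  return `export interface ${schema.name} {\n${lines.join("\n")}\n}`;
}

// Hypothetical example: generate a CustomerDto for the front-end
const customerDto = generateDto({
  name: "CustomerDto",
  fields: { id: "number", fullName: "string", isActive: "boolean" },
});
```

Even a toy version like this shows why hand-writing front-end DTOs is wasted effort: the back-end already knows the shapes.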
As a
full stack software engineer, I specialise in modern tech
stacks. My efforts are focused on .NET and Angular LTS versions.
I sometimes use Blazor Server and WebAssembly when a project has a
shorter deadline, and I find the developer productivity especially
rewarding for those projects. I believe in evergreen architectures
and prefer to upgrade early in the
release cycle. I have in-depth production experience in
micro-service architectures on Kubernetes, both on-premise and
in the Cloud. I love Kubernetes because it's one of the most
portable technologies on the planet. Migrating Kubernetes
applications between on-premise and Cloud data centers is
usually painless. I have extensive production experience in
Docker, Kubernetes, Helm, Rancher and AWS EKS. With 8 years of
experience with corporate bank-level security, I excel in
zero-trust architectures, end-to-end encryption and
cyber-security tooling for static and dynamic analysis of code
bases on a macro level. I prefer using SonarQube, Trivy and
AquaSec where suitable.
My solutions
are secure by default, with multiple layers of security
for defense in depth. I work with OAuth2 and OpenID Connect on a
daily basis and ensure that all API endpoints are secure,
whether they're exposed to the public internet or running in an
isolated environment. Attackers find it hard to move laterally in
such a zero-trust environment. Observability is key.
That's why OpenTelemetry and Serilog are always part of the
project template. I have a solid understanding of the .NET
ecosystem, such as dependency injection and the configuration
system. I enjoy authoring internal NuGet libraries that help
tenant teams consume our platform services, so they stay productive
and focus on their business features. xUnit is my framework of
choice for unit testing on the back-end. I constantly develop
and update guidelines and patterns to improve test coverage. In
a micro-service there are added benefits to testing the full API
surface area and to reducing brittle mock-based testing. These
techniques have led to some of the
most stable micro-services in the bank. My focus is
primarily on the latest versions of .NET Core, and I am always
up to date with the latest trends and best practices.
In a
microservice environment Event Driven Architecture (EDA)
decouples the services from one another. Some processes don't
require synchronous communication and can be processed later in
an asynchronous manner. This is where I apply my RabbitMQ and
Kafka experience. Since Kafka is a highly scalable and reliable
platform, my focus has shifted there over the last 5 years. My
Kafka Producers and Consumers deliver
high-throughput messages on critical data pipelines. I
sometimes cross over to Java when I need to develop bespoke
Kafka Streams applications to join data between existing topics.
I'm also learning how to build massively scalable data pipelines
with Apache Flink and Flink SQL.
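The decoupling idea behind Event Driven Architecture can be sketched with a toy in-memory event bus. This is purely illustrative, in TypeScript; in production the bus's role is played by Kafka topics or RabbitMQ exchanges, and consumers run in separate services.

```typescript
// Toy in-memory event bus illustrating Event Driven Architecture:
// producers publish events without knowing who consumes them, and
// consumers react independently. In production this role is played
// by Kafka topics or RabbitMQ exchanges.
type Handler<T> = (event: T) => void;

class EventBus {
  private handlers = new Map<string, Handler<any>[]>();

  subscribe<T>(topic: string, handler: Handler<T>): void {
    const existing = this.handlers.get(topic) ?? [];
    existing.push(handler);
    this.handlers.set(topic, existing);
  }

  publish<T>(topic: string, event: T): void {
    // Fire-and-forget: the producer does not wait for consumers.
    for (const handler of this.handlers.get(topic) ?? []) {
      handler(event);
    }
  }
}

// Hypothetical example: an order service emits an event; a separate
// notification handler reacts without any direct coupling.
interface OrderPlaced { orderId: string; amount: number; }

const bus = new EventBus();
const notified: string[] = [];
bus.subscribe<OrderPlaced>("orders.placed", (e) => notified.push(e.orderId));
bus.publish<OrderPlaced>("orders.placed", { orderId: "A-1", amount: 250 });
```

The producer here knows only the topic name, never the consumer, which is exactly what makes the services independently deployable.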
For
operational data stores I standardise on
PostgreSQL database technologies where possible. I find
its extensible ecosystem very useful. The PostgreSQL
community is at the forefront in many areas of the industry. I
use Entity Framework and Dapper for relational
data, and native JSONB or Marten for NoSQL use
cases. Database schemas and even test data are checked into
source control and deployed to the target environments, reducing
click-ops and scaling productivity. I also have production
experience with Microsoft SQL Server, MongoDB and
Redis caches, making performance improvements for
different types of workloads through indexing and adjusting
cluster-specific parameters. Each workload is different.
Performance tuning is part of delivering the solution to our
customers. I often fulfill the role of DBA for our team,
knowing that there are other specialists I can reach out to when
needed.
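The "schemas in source control" idea boils down to applying versioned migration scripts in order, exactly once. A minimal sketch in TypeScript follows; the execute step here just collects the SQL, where a real runner (EF Core migrations, or a tool like Flyway) would run it against the target database and record versions in a history table. The table and index names are hypothetical.

```typescript
// Minimal sketch of a migration runner: versioned scripts from
// source control are applied in order, exactly once, and applied
// versions are tracked. Here execution just collects the SQL; a
// real runner executes it against the database and persists the
// version history in a dedicated table.
interface Migration { version: number; sql: string; }

class MigrationRunner {
  private applied = new Set<number>();
  public executed: string[] = [];

  run(migrations: Migration[]): void {
    const pending = migrations
      .filter((m) => !this.applied.has(m.version))
      .sort((a, b) => a.version - b.version);
    for (const m of pending) {
      this.executed.push(m.sql); // placeholder for a real DB call
      this.applied.add(m.version);
    }
  }
}

// Hypothetical migrations checked into source control
const runner = new MigrationRunner();
const migrations: Migration[] = [
  { version: 2, sql: "CREATE INDEX ix_orders_customer ON orders (customer_id);" },
  { version: 1, sql: "CREATE TABLE orders (id serial PRIMARY KEY, customer_id int);" },
];
runner.run(migrations);
runner.run(migrations); // idempotent: nothing is applied twice
```

Because the runner sorts by version and tracks what has been applied, the same pipeline step can run safely against every environment.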
For data
analytics I prefer to standardise on the Iceberg format
in an S3-compatible data lake. When developing data
analytics solutions in AWS, the existing S3, Glue and Athena
services are the primary building blocks. When regulatory
requirements demand an on-premise solution, the same is achieved
with S3-compatible storage such as Dell ECS, Apache Hive
Metastore and Starburst Trino. I have extensive experience in
delivering high-value data pipelines, from raw sources,
to standardised layers and curated data assets, whether in
real-time or through batch processing. I dabble in Apache Spark
and Scala where needed, but prefer ANSI SQL transformations
where possible.
As a
well-rounded solutions architect I believe in the principle of
micro-service ownership. A team should own their
application from concept all the way to production. That is why
I spend a lot of time building out the Site Reliability
Engineering (SRE) principles in my teams.
Infrastructure-as-Code (IaC) and CI/CD automation
are key enablers to reduce click-ops and manual toil in software
engineering teams. I love reliable and repeatable outcomes.
That's why I invest time in setting up development, staging and
production environments with IaC technologies such as
Terraform. I train my top engineers to contribute to the
IaC code-base by writing reusable modules that are easily
configured for any environment. A solid understanding of
Terraform State is part of the training. Not only do we build
out our AWS infrastructure with Terraform, but we also use it to
build out our Cisco ThousandEyes observability dashboards.
My SRE
mindset enabled me to build up extensive production experience
in designing and developing
observability and alerting systems
for our application platform. My dashboards surface serious
issues and preventative health checks for applications, databases,
infrastructure and network throughput, including advance notice of
TLS certificate expiry. They also continuously monitor advanced
scenarios like WAF and mTLS traffic. I always use the tool
that's best for the use case, whether it's Cisco ThousandEyes,
IBM Instana, AWS CloudWatch Alarms, Grafana or a custom .NET
solution. Sometimes a combination of these toolsets delivers the
best coverage.
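The advance-notice check for certificate expiry mentioned above reduces to a simple threshold computation. A sketch in TypeScript follows; the 30-day threshold is an illustrative assumption, and a real monitor would read the `notAfter` timestamp from the live certificate chain.

```typescript
// Sketch of an advance-notice check for TLS certificate expiry.
// A real monitor reads `notAfter` from the live certificate chain;
// the 30-day warning threshold here is an illustrative assumption.
const MS_PER_DAY = 24 * 60 * 60 * 1000;

function daysUntilExpiry(notAfter: Date, now: Date): number {
  return Math.floor((notAfter.getTime() - now.getTime()) / MS_PER_DAY);
}

function shouldAlert(notAfter: Date, now: Date, thresholdDays = 30): boolean {
  return daysUntilExpiry(notAfter, now) <= thresholdDays;
}

const now = new Date("2025-01-01T00:00:00Z");
const soon = new Date("2025-01-20T00:00:00Z");  // 19 days away
const later = new Date("2025-06-01T00:00:00Z"); // well past the threshold
const alertSoon = shouldAlert(soon, now);
const alertLater = shouldAlert(later, now);
```

Hooking a check like this to a scheduled job and an alerting channel is what turns a surprise outage into a routine renewal ticket.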
I
particularly enjoy the principle of continuous improvement. Our
initial CI/CD processes were designed to migrate us from a
server architecture to a micro-service architecture. I designed
our Azure DevOps build pipelines to form the
rails for other teams in the organisation to seamlessly
and reliably deploy their applications to shared environments.
Over the years my team and I have built shared steps and
standardised deployment pipelines for others to use. This year
alone we rolled out hundreds of changes to production with
minimal downtime. We also developed custom build agents
with tooling pre-installed to accelerate the build and
deployment processes. Our CI/CD pipelines also include
automated DB schema migrations and indexing from source
control, as well as static security scanning with the
AquaSec tooling. Docker and AWS AMI patching happens
independently of application code changes. Our teams often patch
a given image at a moment's notice with zero downtime or
customer impact. For applications running on AWS EC2 instances,
we use AutoScaling Groups and AWS CodeDeploy to seamlessly
upgrade the OS or the applications on the instances. Our
developers don't have to know about all the moving parts. They
simply run the release process in Azure DevOps.
I am an
AWS Certified Solutions Architect who enjoys delivering
scalable and maintainable solutions for our customers. Financial
institutions demand a higher degree of security. Over the years
I partnered with security and Cloud architects in the bank to
implement secure ingress patterns into our application
environments, using AWS services like CloudFront, AWS WAF and
AWS Shield. Furthermore we enabled our high-value customers to
add an additional layer of security via mTLS client
certificates. Other AWS services I use on a daily basis include:
Route 53, IAM Roles and Permissions, IAM Roles Anywhere (for
on-premise workloads), RDS PostgreSQL, ElastiCache Redis /
Valkey, EC2 Amazon Linux 2023, EKS, ECR, SSM Parameter Store and
Secrets Manager. I also experiment with serverless patterns such
as Lambda, Fargate, API Gateway, S3, DynamoDB, SQS and RDS
Postgres Serverless services when a valid use case surfaces.
I believe in mentoring and
continuous training. For this reason I set up an AWS
Sandbox account where developers can freely experiment with new
technologies, including AWS Bedrock LLMs. After manually testing
out a pattern in the Sandbox account, I encourage and train them
to import the existing AWS resources into Terraform to
accelerate an IaC-driven infrastructure pipeline. My Terraform
skills also enable me to publish modular and reusable
pre-approved AWS Service Catalog Products that other teams in
the organisation can use for their scenarios.
I'm very
comfortable with Bash when I develop Amazon Linux
bootstrap scripts. Similarly, I use PowerShell when
bootstrapping Windows EC2 instances. My solutions include custom
instances for processing ingress traffic, terminating mTLS
traffic for advanced security use cases. I'm well versed in the
ModSecurity WAF solutions, both for Nginx and Apache web
servers. The Core RuleSets are often customised for bespoke
business scenarios.
In my role
as Technical Lead I often gravitate to the realm of
Platform Architecture, where micro-services are
categorised into tiers of criticality to reliably deliver a set
of business capabilities, even when there's a degree of service
degradation on the platform. Such platforms often contain
identity providers, permission systems, internationalization,
notifications, workflows, audit trails and more. I have
extensive experience in all of these business domains and excel
in evolving the code-bases in an isolated manner to reduce
impact on consuming services.
As a
Technical Lead I'm comfortable leading a team of strong software
engineers. I also enjoy mentoring and
upskilling junior and intermediate developers. I really
enjoy the career development aspect when I shape a team.
I'm actively involved in the recruitment process by interviewing
.NET developers, Angular developers and DevOps engineers. Leading
a team in the office or remotely makes no difference to me. I
have learned to manage technical teams of diverse nationalities,
genders, beliefs and orientation. I enjoy forming human
connections with my team members, and encourage them to bond as a
team. Beware! I am an elusive office prankster and I love to
make things fun for my team members. I use Sprint Retros as an
opportunity to remove any obstacles from the team and to forge
unity of vision for the next project.
As a
certified Kanban practitioner I periodically analyse the flow of
work items to see how we can improve as a team during the next
iteration. I prefer the Nave analytics tool for this.
I charge a
little more because I invest a lot of time in sharpening my
skills and staying up to date with the latest technologies. As a generalist, I have a wide view of the technology landscape, and as a specialist, I do a deep dive into specific areas where I can add the most value. I make sure that I'm ahead of the curve, so that I can guide my team and organisation in the right direction. Not everyone can do this. As the saying goes...
“If you think hiring a professional is expensive, wait 'til
you see what an amateur costs you”