Cloud-Native Infrastructure

AWS Architecture Overview

Aimazing is an AI-powered dermatology analysis platform built entirely on Amazon Web Services (AWS). The platform processes dermatology images using machine learning models and scalable cloud infrastructure.

AWS Is the Infrastructure Foundation of Aimazing

Building a clinical AI product imposes infrastructure requirements that most general-purpose cloud platforms cannot satisfy out of the box. Aimazing chose Amazon Web Services as its sole cloud provider from inception because AWS meets every one of these requirements.

AWS provides managed AI/ML services (SageMaker), globally replicated object storage (S3), managed relational databases (RDS), compliant network isolation (VPC), and the identity and access management (IAM) controls necessary for clinical data governance — all under one unified platform.

AWS is not optional for this product. Amazon SageMaker is used directly for AI model hosting and inference. Replacing it would require rebuilding significant parts of the ML serving pipeline. AWS credits are critical to funding compute and storage costs while the product reaches scale.

AWS Architecture — Aimazing Platform

  • Presentation: Browser Client → Amazon CloudFront → Application Load Balancer
  • Application: Amazon EC2 + AWS Lambda
  • AI Inference: Amazon SageMaker ⇄ Amazon S3
  • Data Layer: Amazon RDS + Amazon S3
  • Security: AWS IAM + AWS VPC + AWS KMS

Aimazing is built on Amazon Web Services (AWS) infrastructure to support scalable AI image analysis for dermatology clinics.

  • Amazon EC2 – Application compute infrastructure
  • Amazon S3 – Secure dermatology image storage
  • Amazon SageMaker – Machine learning inference
  • Amazon RDS – Clinical data and patient metadata
  • AWS IAM – Access control and security

Amazon SageMaker

ML Inference · Model Hosting · Auto Scaling

Amazon SageMaker hosts the core AI model responsible for skin condition classification and severity scoring. SageMaker endpoints provide managed deployment, automatic scaling, and low-latency inference. The model was trained on clinical image datasets and deployed as a real-time inference endpoint. SageMaker eliminates the need to manage ML serving infrastructure directly, which is essential for a small engineering team.
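
Calling a real-time SageMaker endpoint reduces to an `invoke_endpoint` request and a JSON response. The sketch below is illustrative only: the endpoint name and response fields (`condition`, `severity_score`) are assumptions, not Aimazing's actual values; the boto3 round trip is shown in comments so the block stays self-contained.

```python
import json

# Hypothetical endpoint name -- an assumption for illustration.
ENDPOINT_NAME = "skin-classifier-endpoint"

def build_inference_request(image_bytes: bytes) -> dict:
    """Assemble the arguments for a SageMaker runtime invoke_endpoint call."""
    return {
        "EndpointName": ENDPOINT_NAME,
        "ContentType": "application/x-image",
        "Body": image_bytes,
    }

def parse_inference_response(body: bytes) -> dict:
    """Decode the JSON payload the model endpoint returns (assumed shape)."""
    result = json.loads(body)
    return {
        "condition": result["condition"],
        "severity": result["severity_score"],
    }

# With boto3 the round trip would look like:
#   runtime = boto3.client("sagemaker-runtime")
#   resp = runtime.invoke_endpoint(**build_inference_request(img_bytes))
#   parsed = parse_inference_response(resp["Body"].read())
```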


Amazon S3

Image Storage · Object Storage · Encrypted

Amazon S3 stores all patient skin images and generated PDF reports. Each object is stored with AES-256 server-side encryption and access is restricted using S3 Bucket Policies and IAM role assignments. S3's 99.999999999% durability and lifecycle policies allow cost-efficient tiering of older patient records to Glacier without losing data availability.
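
The two controls described above map to two API shapes: a `put_object` call with SSE-S3 enforced, and a lifecycle rule that tiers old objects to Glacier. A minimal sketch, assuming hypothetical bucket and key names:

```python
def build_image_put(bucket: str, patient_id: str, image_bytes: bytes) -> dict:
    """Arguments for s3.put_object with AES-256 server-side encryption.
    The key layout is an assumption for illustration."""
    return {
        "Bucket": bucket,
        "Key": f"images/{patient_id}/original.jpg",
        "Body": image_bytes,
        "ServerSideEncryption": "AES256",  # SSE-S3
    }

def build_glacier_lifecycle(days: int = 365) -> dict:
    """A lifecycle configuration transitioning image objects to Glacier
    after `days` (the 365-day cutoff is an assumed policy)."""
    return {
        "Rules": [
            {
                "ID": "tier-old-images",
                "Status": "Enabled",
                "Filter": {"Prefix": "images/"},
                "Transitions": [{"Days": days, "StorageClass": "GLACIER"}],
            }
        ]
    }

# boto3 usage (not executed here):
#   s3 = boto3.client("s3")
#   s3.put_object(**build_image_put("bucket", "p1", data))
#   s3.put_bucket_lifecycle_configuration(
#       Bucket="bucket", LifecycleConfiguration=build_glacier_lifecycle())
```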


Amazon RDS (PostgreSQL)

Relational DB · Patient Records · Managed

Amazon RDS with PostgreSQL stores all structured application data — patient records, analysis results, clinical notes, user accounts, and clinic configurations. Multi-AZ deployment ensures high availability across the region. Automated daily backups with point-in-time recovery are enabled by default. RDS managed service removes the operational overhead of database patching, failover, and backups.
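
A structured analysis record in this design has a natural relational shape. The sketch below is a hedged illustration: the table and column names are assumptions, and stdlib `sqlite3` stands in for PostgreSQL so the DDL is runnable here; on RDS the same schema would use PostgreSQL types.

```python
import sqlite3

# Illustrative schema -- names and types are assumptions, not Aimazing's.
DDL = """
CREATE TABLE analysis_results (
    id          INTEGER PRIMARY KEY,
    patient_id  TEXT NOT NULL,
    image_key   TEXT NOT NULL,   -- S3 object key of the source image
    condition   TEXT NOT NULL,   -- model classification
    severity    REAL NOT NULL,   -- model severity score
    created_at  TEXT DEFAULT CURRENT_TIMESTAMP
);
"""

def store_result(conn, patient_id, image_key, condition, severity):
    """Persist one AI analysis result against a patient record."""
    conn.execute(
        "INSERT INTO analysis_results (patient_id, image_key, condition, severity) "
        "VALUES (?, ?, ?, ?)",
        (patient_id, image_key, condition, severity),
    )
```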


Amazon EC2

Web Application · API Server · Auto Scaling

The web application server and REST API backend run on Amazon EC2 instances within an Auto Scaling Group. During periods of high clinic traffic — typically morning clinic hours — additional EC2 instances are provisioned automatically. This ensures consistent response times without over-provisioning infrastructure during off-peak periods, which is directly relevant to startup cost management.
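
The scale-with-traffic behaviour above is typically expressed as a target-tracking scaling policy on the Auto Scaling Group. A sketch of the request parameters, assuming a hypothetical group name and a 60% average-CPU target:

```python
def build_scaling_policy(asg_name: str, cpu_target: float = 60.0) -> dict:
    """Arguments for autoscaling put_scaling_policy (boto3). The group
    name and CPU target are illustrative assumptions."""
    return {
        "AutoScalingGroupName": asg_name,
        "PolicyName": "api-cpu-target-tracking",
        "PolicyType": "TargetTrackingScaling",
        "TargetTrackingConfiguration": {
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ASGAverageCPUUtilization"
            },
            "TargetValue": cpu_target,
        },
    }

# boto3 usage (not executed here):
#   boto3.client("autoscaling").put_scaling_policy(**build_scaling_policy("api-asg"))
```

With this policy in place, the group adds instances when average CPU exceeds the target during morning clinic hours and drains them off-peak, which is the cost behaviour the paragraph describes.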


AWS Lambda

Event Processing · Serverless · PDF Generation

AWS Lambda handles event-driven tasks including image preprocessing before SageMaker inference, PDF report generation after analysis completion, and asynchronous notification delivery. Lambda functions are triggered by S3 upload events and API Gateway requests. Serverless execution means these functions only incur cost when invoked, which is optimal for startup-stage compute budgets.
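
An S3-triggered Lambda receives the uploaded object's location inside the event notification. A minimal handler sketch; the preprocessing call is a placeholder for the real resize/normalise step, not Aimazing's actual code:

```python
def extract_s3_objects(event: dict) -> list:
    """Pull (bucket, key) pairs out of a standard S3 event notification."""
    return [
        (r["s3"]["bucket"]["name"], r["s3"]["object"]["key"])
        for r in event.get("Records", [])
    ]

def handler(event, context):
    """Lambda entry point: locate each uploaded image, then (in the real
    function) download, preprocess, and submit it to SageMaker."""
    processed = []
    for bucket, key in extract_s3_objects(event):
        # Placeholder for: download from S3, resize/normalise, invoke endpoint.
        processed.append(f"s3://{bucket}/{key}")
    return {"processed": processed}
```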


AWS IAM & VPC

Access Control · Network Isolation · Compliance

AWS Identity and Access Management (IAM) enforces least-privilege access across all AWS resources. Application roles, developer roles, and CI/CD pipelines each have scoped IAM policies. The entire application runs within an Amazon VPC with private subnets for the database and AI inference layers. Public subnets host only the load balancer. VPC Security Groups restrict traffic to defined port and protocol rules.
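
A scoped role in this model grants only what its component needs. An illustrative least-privilege policy document for the preprocessing Lambda role, with hypothetical bucket and endpoint ARNs:

```python
import json

# Hypothetical ARNs -- assumptions for illustration only.
LAMBDA_ROLE_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {   # Read uploaded images, nothing else in S3.
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::example-images-bucket/*",
        },
        {   # Invoke only the classifier endpoints, no other SageMaker APIs.
            "Effect": "Allow",
            "Action": ["sagemaker:InvokeEndpoint"],
            "Resource": "arn:aws:sagemaker:*:*:endpoint/skin-classifier-*",
        },
    ],
}

# json.dumps(LAMBDA_ROLE_POLICY) is the document attached via IAM.
```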

End-to-End Request Flow on AWS

How a single patient image submission travels through the AWS infrastructure.

1. Clinician uploads image via HTTPS

The browser client sends a secure POST request to the API server on Amazon EC2 via an AWS Application Load Balancer. TLS terminates at the load balancer. Amazon CloudFront serves static assets globally.

2. Image stored to Amazon S3

The EC2 API server writes the raw image to a dedicated S3 bucket with server-side encryption (SSE-S3). The image's object key is recorded in RDS, and pre-signed URLs are generated from it whenever the image must be retrieved. The S3 upload event triggers an AWS Lambda preprocessing function.

3. Lambda preprocesses and calls SageMaker

The Lambda function resizes and normalises the image, then submits it to a real-time Amazon SageMaker endpoint. The endpoint runs the trained CNN inference model and returns structured JSON containing condition classification, severity score, and region coordinates.

4. Results written to Amazon RDS

The Lambda function writes the structured AI output to the patient record in Amazon RDS (PostgreSQL). All database connections are established from within the VPC private subnet. RDS is not exposed to the public internet.

5. Clinician reviews results in the dashboard

The EC2 API server retrieves the completed analysis from RDS and returns it to the browser client. The clinician reviews findings, adds notes, and optionally triggers PDF report generation via a second Lambda invocation. The final PDF is written to S3 and a download link is returned.

Why AWS Credits Are Critical to Aimazing's Scale

Aimazing is currently preparing pilot deployments with dermatology clinics that will use the platform for AI-assisted skin analysis and treatment tracking.

At the pre-revenue stage, AWS infrastructure costs — particularly for SageMaker endpoints and RDS instances — represent the primary operating expense for the product.

Aimazing's core product value depends entirely on AI inference running on Amazon SageMaker. A managed SageMaker endpoint capable of handling real-time clinical inference requires continuous uptime during clinic hours, incurring measurable cost before the company generates subscription revenue.

AWS credits directly offset these pre-revenue infrastructure costs, allowing Aimazing to maintain production-grade infrastructure, onboard pilot clinics at no cost to them, and iterate on the product without being constrained by infrastructure spend.

As clinic subscriptions grow, the platform's architecture scales naturally — more EC2 capacity via Auto Scaling, additional SageMaker endpoint variants for new model versions, and higher S3 throughput for image volume. AWS credits allow the team to build and validate this scaling capability during the pilot phase.

AWS is the only viable infrastructure provider for this product. Migrating the SageMaker-based inference pipeline to another provider would require significant engineering investment and the loss of the managed ML tooling the small founding team depends on. This is an AWS-native product by design.

Projected Monthly AWS Costs — Early Stage

AWS Service          | Use Case                             | Est. Monthly
---------------------|--------------------------------------|---------------
Amazon SageMaker     | ml.m5.large inference endpoint × 2   | ~$180
Amazon EC2           | t3.medium × 2, Auto Scaling Group    | ~$60
Amazon RDS           | db.t3.medium, Multi-AZ PostgreSQL    | ~$80
Amazon S3            | Image storage + PDF reports          | ~$25
AWS Lambda           | Image preprocessing + PDF gen        | ~$10
Amazon CloudFront    | Static asset delivery + SSL          | ~$12
AWS IAM / VPC / KMS  | Security, key management, networking | ~$8
Total Estimated      |                                      | ~$375 / month

Estimates based on AWS public pricing for ap-southeast-1 region. Actual costs depend on clinic traffic volume.
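
As a quick arithmetic cross-check of the table above, the line items do sum to the stated total:

```python
# Line items from the cost table above (USD per month, estimates).
LINE_ITEMS = {
    "SageMaker": 180,
    "EC2": 60,
    "RDS": 80,
    "S3": 25,
    "Lambda": 10,
    "CloudFront": 12,
    "IAM/VPC/KMS": 8,
}

total = sum(LINE_ITEMS.values())  # 375
```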

Security Controls Across the AWS Stack

Encryption at Rest

All S3 objects are encrypted with SSE-S3 (AES-256). RDS storage is encrypted using AWS KMS-managed keys. Encryption is enforced by bucket and RDS instance configuration, not application code.
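
Enforcing SSE-S3 at the bucket level (rather than per request) is a one-time configuration. A sketch of the `put_bucket_encryption` parameters, assuming a single default rule:

```python
def build_bucket_encryption(bucket: str) -> dict:
    """Arguments for s3.put_bucket_encryption setting AES-256 (SSE-S3)
    as the bucket-wide default, so encryption does not depend on
    application code supplying the header."""
    return {
        "Bucket": bucket,
        "ServerSideEncryptionConfiguration": {
            "Rules": [
                {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
            ]
        },
    }

# boto3.client("s3").put_bucket_encryption(**build_bucket_encryption("imgs"))
```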

Encryption in Transit

All traffic between clients and the platform traverses HTTPS/TLS 1.2+. Internal service-to-service communication within the VPC uses SSL-enforced connections to RDS and signed HTTPS requests to SageMaker endpoints.

Network Isolation

The RDS database and SageMaker endpoints are deployed in private subnets with no public internet access. VPC Security Groups allow only application-tier instances to communicate with the data layer.

Least-Privilege IAM

Each application component operates with a scoped IAM role that grants only the permissions required for its function. No component uses root credentials or overly broad IAM policies.

Audit Logging

AWS CloudTrail records all API calls across the AWS account. Application-level access logs are stored in CloudWatch Logs. Patient record access events are logged in the application database.

Backup & Recovery

Amazon RDS automated backups run daily with 35-day retention and point-in-time recovery enabled. S3 versioning is enabled on the images bucket. Multi-AZ RDS deployment provides automatic failover in the event of an instance failure.
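
Point-in-time recovery restores to a new instance from the continuous backup stream. A sketch of the request parameters, with hypothetical instance identifiers:

```python
from datetime import datetime, timezone

def build_pitr_request(source_id: str, target_id: str,
                       restore_time: datetime) -> dict:
    """Arguments for rds.restore_db_instance_to_point_in_time (boto3).
    Instance identifiers here are illustrative assumptions."""
    return {
        "SourceDBInstanceIdentifier": source_id,
        "TargetDBInstanceIdentifier": target_id,
        "RestoreTime": restore_time,  # must fall within the retention window
    }

# boto3.client("rds").restore_db_instance_to_point_in_time(
#     **build_pitr_request("prod-db", "prod-db-restored",
#                          datetime(2024, 1, 1, tzinfo=timezone.utc)))
```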

Interested in the Technical Details?

Our team is available for technical discussions with potential clinic partners, investors, and cloud programme reviewers.

Get in Touch