Building a clinical AI product imposes infrastructure requirements that most general-purpose cloud platforms cannot satisfy out of the box. Aimazing chose Amazon Web Services (AWS) as its sole cloud provider from inception because AWS meets all of these requirements.
AWS provides managed AI/ML services (SageMaker), globally replicated object storage (S3), managed relational databases (RDS), compliant network isolation (VPC), and the identity and access management (IAM) controls necessary for clinical data governance — all under one unified platform.
Aimazing is built on Amazon Web Services (AWS) infrastructure to support scalable AI image analysis for dermatology clinics.
Amazon SageMaker hosts the core AI model responsible for skin condition classification and severity scoring. SageMaker endpoints provide managed deployment, automatic scaling, and low-latency inference. The model was trained on clinical image datasets and deployed as a real-time inference endpoint. SageMaker eliminates the need to manage ML serving infrastructure directly, which is essential for a small engineering team.
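As a minimal sketch of how the API tier might call such a real-time endpoint, the snippet below invokes a SageMaker endpoint and parses its JSON response. The endpoint name `aimazing-skin-classifier` and the response fields (`condition`, `severity_score`) are illustrative assumptions, not the production schema.

```python
import json

def invoke_skin_classifier(image_bytes: bytes,
                           endpoint_name: str = "aimazing-skin-classifier") -> dict:
    """Send a preprocessed image to a real-time SageMaker endpoint.

    Endpoint name and response schema are illustrative assumptions.
    """
    import boto3  # imported lazily so the parsing helper stays testable offline
    client = boto3.client("sagemaker-runtime")
    response = client.invoke_endpoint(
        EndpointName=endpoint_name,
        ContentType="application/x-image",
        Body=image_bytes,
    )
    return json.loads(response["Body"].read())

def parse_inference(result: dict) -> tuple:
    """Extract the fields the application would store alongside the patient record."""
    return result["condition"], float(result["severity_score"])
```

The managed endpoint keeps model serving off the team's plate: the application only ever sees a synchronous HTTPS call and a JSON payload.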
Amazon S3 stores all patient skin images and generated PDF reports. Each object is stored with AES-256 server-side encryption and access is restricted using S3 Bucket Policies and IAM role assignments. S3's 99.999999999% durability and lifecycle policies allow cost-efficient tiering of older patient records to Glacier without losing data availability.
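A lifecycle rule of the kind described above might look like the following sketch. The bucket prefix, rule ID, and 180-day threshold are hypothetical values for illustration.

```python
# Hypothetical lifecycle rule: transition patient images to the Glacier
# storage class after 180 days, without deleting them.
LIFECYCLE_CONFIG = {
    "Rules": [
        {
            "ID": "archive-older-patient-images",
            "Filter": {"Prefix": "patient-images/"},
            "Status": "Enabled",
            "Transitions": [{"Days": 180, "StorageClass": "GLACIER"}],
        }
    ]
}

def apply_lifecycle(bucket: str) -> None:
    """Attach the lifecycle rule to the given bucket."""
    import boto3  # lazy import keeps the config dict testable without AWS credentials
    boto3.client("s3").put_bucket_lifecycle_configuration(
        Bucket=bucket, LifecycleConfiguration=LIFECYCLE_CONFIG
    )
```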
Amazon RDS with PostgreSQL stores all structured application data — patient records, analysis results, clinical notes, user accounts, and clinic configurations. Multi-AZ deployment ensures high availability across the region. Automated daily backups with point-in-time recovery are enabled by default. RDS managed service removes the operational overhead of database patching, failover, and backups.
The web application server and REST API backend run on Amazon EC2 instances within an Auto Scaling Group. During periods of high clinic traffic — typically morning clinic hours — additional EC2 instances are provisioned automatically. This ensures consistent response times without over-provisioning infrastructure during off-peak periods, which is directly relevant to startup cost management.
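One way to express "scale up during morning clinic hours" is a target-tracking policy that holds average CPU near a setpoint; the sketch below assumes a 60% target and a hypothetical policy name.

```python
# Hypothetical target-tracking policy: keep average CPU utilisation
# across the Auto Scaling Group near 60%.
TARGET_TRACKING = {
    "PredefinedMetricSpecification": {
        "PredefinedMetricType": "ASGAverageCPUUtilization"
    },
    "TargetValue": 60.0,
}

def attach_policy(asg_name: str) -> None:
    """Attach the target-tracking policy to the named Auto Scaling Group."""
    import boto3  # lazy import so the policy dict is testable offline
    boto3.client("autoscaling").put_scaling_policy(
        AutoScalingGroupName=asg_name,
        PolicyName="clinic-hours-cpu-tracking",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration=TARGET_TRACKING,
    )
```

Target tracking removes the need for hand-tuned step alarms: instances are added as morning traffic pushes CPU up and removed as it falls off-peak.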
AWS Lambda handles event-driven tasks including image preprocessing before SageMaker inference, PDF report generation after analysis completion, and asynchronous notification delivery. Lambda functions are triggered by S3 upload events and API Gateway requests. Serverless execution means these functions only incur cost when invoked, which is optimal for startup-stage compute budgets.
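The shape of an S3-triggered preprocessing function might look like the sketch below. The bucket layout and endpoint name are assumptions, and the actual resize/normalise step is elided.

```python
import json

def iter_uploads(event: dict):
    """Yield (bucket, key) pairs from an S3 event notification payload."""
    for record in event.get("Records", []):
        yield record["s3"]["bucket"]["name"], record["s3"]["object"]["key"]

def handler(event, context):
    """Entry point for S3 ObjectCreated events on the raw-images bucket.

    Bucket layout and the endpoint name are illustrative assumptions.
    """
    import boto3  # lazy import so iter_uploads can be tested offline
    s3 = boto3.client("s3")
    runtime = boto3.client("sagemaker-runtime")
    for bucket, key in iter_uploads(event):
        image = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        # Resize/normalise would happen here before inference.
        result = runtime.invoke_endpoint(
            EndpointName="aimazing-skin-classifier",
            ContentType="application/x-image",
            Body=image,
        )
        print(json.loads(result["Body"].read()))
```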
AWS Identity and Access Management (IAM) enforces least-privilege access across all AWS resources. Application roles, developer roles, and CI/CD pipelines each have scoped IAM policies. The entire application runs within an Amazon VPC with private subnets for the database and AI inference layers. Public subnets host only the load balancer. VPC Security Groups restrict traffic to defined port and protocol rules.
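As an example of what "scoped" means in practice, a least-privilege policy for the preprocessing role might allow only reading raw uploads and invoking one endpoint. The ARNs, bucket name, and policy name below are hypothetical.

```python
import json

# Hypothetical least-privilege policy for the Lambda preprocessing role:
# read raw uploads from one bucket, invoke one endpoint, nothing else.
PREPROCESS_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::aimazing-raw-images/*",
        },
        {
            "Effect": "Allow",
            "Action": ["sagemaker:InvokeEndpoint"],
            "Resource": "arn:aws:sagemaker:*:*:endpoint/aimazing-skin-classifier",
        },
    ],
}

def attach(role_name: str) -> None:
    """Attach the inline policy to the given IAM role."""
    import boto3  # lazy import so the policy document is testable offline
    boto3.client("iam").put_role_policy(
        RoleName=role_name,
        PolicyName="aimazing-preprocess-scope",
        PolicyDocument=json.dumps(PREPROCESS_POLICY),
    )
```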
The following traces how a single patient image submission travels through the AWS infrastructure.
The browser client sends a secure POST request to the API server on Amazon EC2 via an AWS Application Load Balancer. TLS terminates at the load balancer. CloudFront CDN serves static assets globally.
The EC2 API server writes the raw image to a dedicated S3 bucket with server-side encryption (SSE-S3). The image's object key is recorded in RDS, and pre-signed URLs referencing it are generated on demand. An S3 event triggers an AWS Lambda preprocessing function.
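This step can be sketched as follows; the object-key layout and the 15-minute expiry are illustrative assumptions.

```python
def image_key(patient_id: str, image_id: str) -> str:
    """Deterministic object-key layout for raw uploads (illustrative)."""
    return f"patient-images/{patient_id}/{image_id}.jpg"

def presign_image(bucket: str, key: str, expires: int = 900) -> str:
    """Mint a short-lived download URL for a stored image object.

    URLs are generated on demand from the object key rather than
    persisted, since pre-signed URLs expire.
    """
    import boto3  # lazy import so image_key stays testable without credentials
    return boto3.client("s3").generate_presigned_url(
        "get_object",
        Params={"Bucket": bucket, "Key": key},
        ExpiresIn=expires,
    )
```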
The Lambda function resizes and normalises the image, then submits it to a real-time Amazon SageMaker endpoint. The endpoint runs the trained CNN inference model and returns structured JSON containing condition classification, severity score, and region coordinates.
The Lambda function writes the structured AI output to the patient record in Amazon RDS (PostgreSQL). All database connections are established from within the VPC private subnet. RDS is not exposed to the public internet.
The EC2 API server retrieves the completed analysis from RDS and returns it to the browser client. The clinician reviews findings, adds notes, and optionally triggers PDF report generation via a second Lambda invocation. The final PDF is written to S3 and a download link returned.
Aimazing is currently preparing pilot deployments with dermatology clinics that will use the platform for AI-assisted skin analysis and treatment tracking.
At the pre-revenue stage, AWS infrastructure costs — particularly for SageMaker endpoints and RDS instances — represent the primary operating expense for the product.
Aimazing's core product value depends entirely on AI inference running on Amazon SageMaker. A managed SageMaker endpoint capable of handling real-time clinical inference requires continuous uptime during clinic hours, incurring measurable cost before the company generates subscription revenue.
AWS credits directly offset these pre-revenue infrastructure costs, allowing Aimazing to maintain production-grade infrastructure, onboard pilot clinics at no cost to them, and iterate on the product without being constrained by infrastructure spend.
As clinic subscriptions grow, the platform's architecture scales naturally — more EC2 capacity via Auto Scaling, additional SageMaker endpoint variants for new model versions, and higher S3 throughput for image volume. AWS credits allow the team to build and validate this scaling capability during the pilot phase.
| AWS Service | Use Case | Est. Monthly |
|---|---|---|
| Amazon SageMaker | ml.m5.large inference endpoint × 2 | ~$180 |
| Amazon EC2 | t3.medium × 2, Auto Scaling Group | ~$60 |
| Amazon RDS | db.t3.medium, Multi-AZ PostgreSQL | ~$80 |
| Amazon S3 | Image storage + PDF reports | ~$25 |
| AWS Lambda | Image preprocessing + PDF gen | ~$10 |
| AWS CloudFront | Static asset delivery + SSL | ~$12 |
| AWS IAM / VPC / KMS | Security, key management, networking | ~$8 |
| Total Estimated | | ~$375 / month |
Estimates are based on AWS public pricing for the ap-southeast-1 (Singapore) Region. Actual costs depend on clinic traffic volume.
All S3 objects are encrypted with SSE-S3 (AES-256). RDS storage is encrypted using AWS KMS-managed keys. Encryption is enforced by bucket and RDS instance configuration, not application code.
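One common way to enforce encryption at the bucket rather than in application code is a policy statement that denies any upload not requesting AES-256 server-side encryption. The bucket name below is a hypothetical placeholder.

```python
# Hypothetical bucket policy statement: reject any PutObject request that
# does not specify AES-256 server-side encryption, so enforcement lives in
# bucket configuration rather than application code.
DENY_UNENCRYPTED_PUTS = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyUnencryptedUploads",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::aimazing-raw-images/*",
            "Condition": {
                "StringNotEquals": {
                    "s3:x-amz-server-side-encryption": "AES256"
                }
            },
        }
    ],
}
```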
All traffic between clients and the platform traverses HTTPS/TLS 1.2+. Internal service-to-service communication within the VPC uses SSL-enforced connections to RDS and signed HTTPS requests to SageMaker endpoints.
The RDS database and SageMaker endpoints are deployed in private subnets with no public internet access. VPC Security Groups allow only application-tier instances to communicate with the data layer.
Each application component operates with a scoped IAM role that grants only the permissions required for its function. No component uses root credentials or overly broad IAM policies.
AWS CloudTrail records all API calls across the AWS account. Application-level access logs are stored in CloudWatch Logs. Patient record access events are logged in the application database.
Amazon RDS automated backups run daily with 35-day retention and point-in-time recovery enabled. S3 versioning is enabled on the images bucket. Multi-AZ RDS deployment provides automatic failover in the event of an instance failure.
Our team is available for technical discussions with potential clinic partners, investors, and cloud programme reviewers.
Get in Touch