I am Mukul, a seasoned full-stack developer specializing in backend development with Golang and AWS. My expertise lies in building high-performance backends and cloud-native microservices, leveraging technologies like AWS Lambda, Docker, Kafka, and SQS to deliver efficient, scalable solutions.
With a strong focus on containerization and distributed architectures, I've successfully delivered projects that follow CNCF standards. On the frontend, I have two years of experience with React and Svelte, complemented by proficiency in JavaScript libraries such as jQuery and D3.js.
Additionally, I have taken on leadership roles, managing end-to-end project development, facilitating scrum meetings, and overseeing resource management as a Team Lead.
Professional Skills
Golang (5 years)
AWS (5 years)
Databases (7 years)
Python (4 years)
GCP (3 years)
Kubernetes (4 years)
Terraform (2 years)
Jenkins (3 years)
Git (6 years)
PHP (5 years)
CodeIgniter (2 years)
Symfony (1 year)
React (1 year)
jQuery (3 years)
Recent Projects
Scalable Video Streaming on GCP Golang | FFmpeg | GCP | HLS
Architected and deployed a multi-tenant adaptive bitrate (ABR) video streaming pipeline on GCP, capable of handling live and VOD content at scale. This initiative reduced our third-party streaming costs by 45%.
Developed a Golang-based orchestration service that dynamically managed FFmpeg encoding jobs, optimizing video delivery profiles based on source quality and audience network conditions.
Designed a resilient, cost-effective architecture utilizing GCP Transcoder API for automated processing, ensuring 99.99% uptime and minimal manual intervention for new content.
Implemented a custom ABR logic that dynamically adjusted video quality, reducing average stream latency by 30% and improving playback performance on lower-bandwidth networks.
Integrated with Cloud Storage for immutable asset storage and a global CDN for low-latency distribution, building a highly performant and scalable media delivery system.
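A minimal sketch of the rendition-selection idea behind the orchestration service: the ladder is capped at the source height so content is never upscaled. The bitrate ladder here is illustrative, not the production encoding profiles.

```go
package main

import "fmt"

// Rendition describes one output profile in the ABR ladder.
type Rendition struct {
	Height  int // vertical resolution in pixels
	Bitrate int // target bitrate in kbps
}

// fullLadder is a hypothetical default ladder; real profiles would be
// tuned per codec and audience network conditions.
var fullLadder = []Rendition{
	{2160, 14000}, {1080, 5000}, {720, 2800}, {480, 1400}, {360, 800},
}

// ladderFor caps the ladder at the source height so we never upscale.
func ladderFor(srcHeight int) []Rendition {
	var out []Rendition
	for _, r := range fullLadder {
		if r.Height <= srcHeight {
			out = append(out, r)
		}
	}
	return out
}

func main() {
	for _, r := range ladderFor(1080) {
		fmt.Printf("%dp @ %d kbps\n", r.Height, r.Bitrate)
	}
}
```

The selected renditions would then be handed to FFmpeg (or the GCP Transcoder API) as individual encoding jobs.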
Telecom Carrier Lookup Microservices Golang | Redis | GKE
Engineered a low-latency, high-throughput microservices system for telecom carrier lookups, critical for a platform processing millions of daily transactions. My work improved lookup speed by 80% and reduced external API costs by 90%.
Architected a multi-layered caching strategy with Redis, achieving a 95% cache hit rate by intelligently pre-fetching and invalidating data, which eliminated most external lookups.
Implemented the services in Golang, leveraging goroutines and channels to handle concurrent requests efficiently, increasing the system's throughput from 500 to over 2000 requests per second.
Deployed the architecture on Google Kubernetes Engine (GKE) with Horizontal Pod Autoscaling (HPA), configuring custom metrics to scale the services proactively based on traffic patterns.
Designed a robust failover mechanism to ensure seamless operation and minimal latency even in the event of cache misses, routing requests to external APIs without service degradation.
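The caching layer follows the cache-aside pattern: serve from cache on a hit, fall back to the external API on a miss, and populate the cache on the way back. This sketch uses an in-memory map in place of Redis so it stays self-contained; the upstream lookup function is a stand-in for the real carrier API.

```go
package main

import (
	"fmt"
	"sync"
)

// CarrierCache sketches the cache-aside pattern; an in-memory map
// stands in for Redis so the example stays self-contained.
type CarrierCache struct {
	mu      sync.Mutex
	entries map[string]string
	hits    int
	misses  int
	// lookup is the slow external carrier API, called only on a miss.
	lookup func(number string) string
}

// Get returns the carrier for a number, consulting the cache first.
func (c *CarrierCache) Get(number string) string {
	c.mu.Lock()
	defer c.mu.Unlock()
	if carrier, ok := c.entries[number]; ok {
		c.hits++
		return carrier
	}
	// Cache miss: fall back to the external API and populate the cache.
	carrier := c.lookup(number)
	c.misses++
	c.entries[number] = carrier
	return carrier
}

func main() {
	cache := &CarrierCache{
		entries: map[string]string{},
		lookup:  func(string) string { return "carrier-x" }, // hypothetical upstream
	}
	cache.Get("+15550100") // miss: populates the cache
	cache.Get("+15550100") // hit
	fmt.Printf("hits=%d misses=%d\n", cache.hits, cache.misses)
}
```

In production the pre-fetch and invalidation logic described above keeps the hit rate high; the failover path is simply the miss branch routed to the external API.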
On-Premise to AWS Cloud Migration Lead Terraform | AWS ECS | SQS | RDS | Jenkins | CloudWatch
Spearheaded the migration of a legacy monolithic application from on-premise data centers to a modern, containerized AWS architecture, resulting in a 35% reduction in operational costs and a 60% improvement in deployment velocity.
Authored the entire cloud infrastructure as code using Terraform, establishing a repeatable, version-controlled process for provisioning and managing resources.
Containerized the legacy .NET application into discrete services and deployed them on AWS ECS with Fargate, simplifying infrastructure management and enabling fine-grained scaling.
Decoupled tightly-coupled components by introducing Amazon SQS for asynchronous messaging, which increased system resilience and prevented cascading failures under load.
Implemented a full CI/CD pipeline using Jenkins, automating builds, tests, and blue/green deployments, which dramatically shortened our release cycles.
Real-time NPC Orchestration for Game Servers Golang | gRPC | Redis | AWS
Designed and built a low-latency NPC (Non-Player Character) orchestration system for a real-time multiplayer game, supporting up to 100,000 concurrent players with sub-50ms interaction latency.
Developed a microservices architecture in Golang using gRPC and Protocol Buffers for high-speed, binary communication, which was crucial for minimizing network overhead.
Engineered an event-driven state synchronization system with Redis Streams, which efficiently broadcasts NPC state changes to thousands of connected game clients.
Deployed services on AWS EKS, configuring Kubernetes HPA with custom metrics to dynamically scale based on in-game events and player density, optimizing resource allocation.
Implemented a fault-tolerant sharding logic that ensured NPC workloads were evenly distributed across game server instances, guaranteeing a seamless player experience even during instance restarts.
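The sharding idea can be reduced to a deterministic mapping from NPC ID to shard. A plain hash-mod scheme is shown for brevity; the production system would more likely use consistent hashing so that instance restarts reshuffle as few NPCs as possible.

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// shardFor maps an NPC ID to one of n game-server shards. FNV-1a is a
// cheap, stable hash, so the same NPC always lands on the same shard.
func shardFor(npcID string, n uint32) uint32 {
	h := fnv.New32a()
	h.Write([]byte(npcID))
	return h.Sum32() % n
}

func main() {
	for _, id := range []string{"npc-1", "npc-2", "npc-3"} {
		fmt.Printf("%s -> shard %d\n", id, shardFor(id, 8))
	}
}
```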
Game Microservices on AWS (Pickleball) Golang | AWS Lambda | ECS | API Gateway | DynamoDB
Developed a high-performance, scalable microservices backend on AWS for a popular pickleball game, handling millions of daily requests with low latency and high availability.
Architected a hybrid serverless/containerized platform, using AWS Lambda for event-driven logic (e.g., user registration) and AWS ECS for scalable, long-running game services.
Configured API Gateway as the single entry point, implementing request throttling, JWT authentication, and efficient routing to secure and manage all API traffic.
Built a fully automated CI/CD pipeline with AWS CodePipeline and CodeDeploy, which reduced our deployment time from several hours to under 10 minutes.
Utilized Amazon DynamoDB with an optimized schema for player data and leaderboards, leveraging its consistent, single-digit millisecond latency to support real-time interactions.
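One way to sketch the leaderboard schema is a single-table key design (attribute names here are illustrative, not the production schema): the partition key groups items per player or per leaderboard, and the sort key zero-pads an inverted score so a descending leaderboard can be read with a plain ascending Query.

```go
package main

import "fmt"

// Key models a DynamoDB composite primary key.
type Key struct{ PK, SK string }

// playerProfileKey addresses a player's profile item.
func playerProfileKey(playerID string) Key {
	return Key{PK: "PLAYER#" + playerID, SK: "PROFILE"}
}

// leaderboardKey inverts and zero-pads the score (assumed < 10^10) so
// higher scores sort first under an ascending sort-key scan.
func leaderboardKey(board string, score int64, playerID string) Key {
	return Key{
		PK: "BOARD#" + board,
		SK: fmt.Sprintf("SCORE#%010d#%s", 9999999999-score, playerID),
	}
}

func main() {
	fmt.Println(playerProfileKey("p1"))
	fmt.Println(leaderboardKey("weekly", 1200, "p1"))
}
```

Reading the top N then becomes a single Query on `PK = BOARD#weekly` with a `Limit`, served at DynamoDB's single-digit-millisecond latency.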
S3-Compatible Object Storage with MinIO Golang | MinIO | PostgreSQL
Designed and built a scalable, S3-compatible object storage solution using MinIO for on-premise infrastructure, providing a cost-effective and highly available alternative to public cloud storage.
Deployed a multi-node MinIO cluster with erasure coding to ensure data durability and 99.99% availability, meeting internal compliance standards.
Developed a high-performance Golang API layer (Gin framework) that managed file uploads, downloads, and metadata, achieving a sustained throughput of 2GB/s.
Designed an optimized PostgreSQL database schema for storing metadata, enabling rapid, indexed lookups for millions of files with sub-100ms query times.
Integrated with a secure token service and IAM policies to provide granular, role-based access control (RBAC) over all objects and buckets.
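The capacity trade-off behind erasure coding is simple arithmetic: with d data and p parity shards per erasure set, usable capacity is the d/(d+p) fraction of raw capacity, and the set tolerates p simultaneous drive failures. This is illustrative math, not MinIO's internal API.

```go
package main

import "fmt"

// usableTiB estimates usable capacity for an erasure set with d data
// and p parity shards. With p parity shards the set survives p
// simultaneous drive failures.
func usableTiB(rawTiB float64, d, p int) float64 {
	return rawTiB * float64(d) / float64(d+p)
}

func main() {
	// e.g. 16 drives of 4 TiB each in an EC 12+4 layout
	fmt.Printf("usable: %.0f TiB\n", usableTiB(64, 12, 4))
}
```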
.NET Monolith to CNCF-Compliant Microservices Python | Flask | AWS
Led the strategic migration of a large .NET monolithic application to a cloud-native, CNCF-compliant microservices architecture, significantly improving system agility and developer productivity.
Refactored legacy business logic into independent, containerized Python-based Flask microservices, reducing service coupling by 70% and enabling parallel development streams.
Architected an event-driven system to manage communication between services, ensuring high availability and data consistency for a global financial institution.
Implemented a robust CI/CD pipeline on AWS, automating builds, tests, and blue/green deployments, which decreased release cycles from bi-weekly to daily.
Real-Time Metrics Data Pipeline Golang | Kinesis | Lambda | DynamoDB
Designed and implemented a resilient, horizontally scalable data pipeline for continuous ingestion and processing of high-volume, real-time metrics from globally distributed sources.
Built ingestion services in Golang optimized for high throughput and backpressure handling, capable of sustaining millions of events per minute.
Integrated AWS Kinesis Data Streams for durable, ordered, and fault-tolerant event streaming across multiple regions.
Leveraged AWS Lambda for stateless transformations and enrichment, with parallel batch processing for improved efficiency.
Persisted time-series and metadata in DynamoDB with adaptive capacity scaling to handle traffic bursts without throttling.
Implemented S3-based cold storage for historical data with lifecycle policies for cost optimization.
Deployed CloudWatch Metrics, Alarms, and Dashboards to monitor throughput, error rates, and latency in real time.
Achieved zero data loss during regional failovers via cross-region Kinesis replication and checkpointing mechanisms.
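The backpressure handling in the ingestion services comes down to bounding the in-flight queue: producers block on a full channel instead of overrunning the workers. A minimal sketch with in-memory events rather than Kinesis records:

```go
package main

import (
	"fmt"
	"sync"
)

// process fans events out to a fixed worker pool through a bounded
// channel; a full channel blocks the producer, which is the natural
// backpressure mechanism in Go.
func process(events []string, workers, buffer int) int {
	queue := make(chan string, buffer)
	var wg sync.WaitGroup
	var mu sync.Mutex
	processed := 0

	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for range queue {
				mu.Lock()
				processed++ // real workers would transform and persist here
				mu.Unlock()
			}
		}()
	}
	for _, e := range events {
		queue <- e // blocks when the buffer is full
	}
	close(queue)
	wg.Wait()
	return processed
}

func main() {
	fmt.Println(process([]string{"a", "b", "c", "d"}, 2, 2))
}
```

In the real pipeline the channel sits between the Kinesis consumer and the transformation stage, so a slow downstream throttles the reader instead of dropping events.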
Centralized Security Monitoring with Wazuh Linux | SIEM | Wazuh
Engineered and deployed a centralized, real-time security information and event management (SIEM) solution using Wazuh, which improved the company’s security posture and reduced the mean time to detect threats by 70%.
Designed and managed a large-scale deployment of Wazuh agents across a fleet of servers to collect and normalize security event data into a central log management system.
Developed custom detection rules to proactively identify sophisticated threats such as rootkits and zero-day exploits, with an accuracy rate of 99%.
Integrated Wazuh with existing infrastructure logging and alerting tools, creating automated reports and real-time alerts for the security operations center.
Automated Email System with Supabase & Postmark Supabase | Postmark | Edge Functions
Architected a highly reliable, event-driven email automation system for deal tracking and client communication, scaling to process thousands of transactional emails daily with 99.99% deliverability.
Designed a database-driven workflow using Supabase PostgreSQL with Row-Level Security (RLS) to manage deal data and ensure secure, real-time updates.
Utilized Supabase Edge Functions and pg_cron for a scalable, webhook-triggered system that scheduled and dispatched emails based on deal state changes.
Implemented a Redis-backed worker queue to handle high-volume email throughput, ensuring the system could manage peak loads without hitting API rate limits.
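The pg_cron side of the scheduling can be sketched in SQL. The `dispatch_due_emails()` function is hypothetical, standing in for the routine that selects deals whose state changed and hands the corresponding emails to the Edge Function that calls Postmark:

```sql
-- Run every minute; 'dispatch_due_emails' is a hypothetical function
-- that enqueues emails for deals whose state has changed.
select cron.schedule(
  'dispatch-due-emails',       -- job name
  '* * * * *',                 -- standard cron syntax: every minute
  $$select dispatch_due_emails()$$
);
```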
Cross-Cloud Kubernetes Operator Test Suite Golang | Ginkgo | EKS | AKS
Developed a comprehensive, cross-platform test suite to validate the behavior of a Kubernetes custom operator, ensuring its reliability and portability across various managed Kubernetes services (EKS, AKS, ARO, ROSA).
Built a BDD-style test framework in Golang using Ginkgo, simulating complex scenarios to validate the operator's reconciliation logic under stress and failure conditions.
Wrote integration tests that directly manipulated the operator's Custom Resources via the Kubernetes API to verify its state-change and lifecycle management.
Configured the suite to run against live cloud clusters instead of local mock environments, guaranteeing true compatibility and functionality across different cloud providers.
Automated test execution within a CI/CD pipeline to prevent regressions and validate operator behavior during upgrades before each release.
Postoffice.co.uk Microservices Development Golang | AWS | DynamoDB | EKS
Engineered and deployed more than a dozen high-performance microservices, including a critical Forex module, for a major UK-based postal services and financial provider, supporting a multi-million user base.
Developed a scalable, Golang-based microservices architecture using the Gin framework, designed for high throughput and low-latency API responses.
Managed the services on AWS EKS for container orchestration, ensuring high availability and seamless scaling to meet the demands of a large user base.
Utilized Amazon DynamoDB for a NoSQL data store, leveraging its flexibility and performance for rapidly evolving data models in the financial domain.
Implemented a robust CI/CD pipeline to automate the build, test, and deployment of each microservice, streamlining development and reducing deployment risk.
Secure Multi-Tenant AI Query Pipeline TypeScript | OpenAI | Vectorize.io
Architected and built a secure, multi-tenant AI-native pipeline that provides isolated, intelligent semantic search and chat over user-specific data, leveraging vector embeddings and row-level security.
Developed a TypeScript-based query engine that dynamically injects Row-Level Security (RLS) policies into vector search queries, ensuring strict data isolation between tenants.
Integrated OpenAI embedding models to convert user documents into vector representations, enabling highly accurate and context-aware semantic search.
Designed the system to use Vectorize.io pipelines for efficient ingestion, storage, and querying of embeddings at scale, supporting millions of documents.
Stored Procedure Migration to Serverless AWS Lambda | EventBridge | API Gateway
Led the migration of complex legacy database stored procedures to a modern, serverless AWS architecture, improving execution time by 70% and significantly reducing maintenance overhead.
Re-engineered a suite of database stored procedures into modular AWS Lambda functions, improving maintainability, testability, and scalability.
Used AWS EventBridge to orchestrate event-driven workflows, enabling real-time execution and validation of data updates without direct database calls.
Integrated with API Gateway to provide a secure and scalable interface for the new serverless functions, enabling seamless interaction with the frontend and other services.
Kong Gateway for VoD Access Control & Filtering Kong Gateway | Go | OTT
Designed and deployed a scalable Kong API gateway for a global VoD data API, enabling dynamic, fine-grained access control and metadata filtering to secure data for hundreds of partners.
Built a custom Go-based gRPC plugin for Kong to enforce unique access policies per API key, controlling which VoD platforms a consumer could query.
Engineered dynamic response-filtering logic in Go to expose only the specific metadata fields a consumer was permitted to see, ensuring data privacy and compliance.
Architected the system to handle all traffic for the main search API, centralizing security, rate-limiting, and governance for all API consumers.
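The core of the per-consumer filtering is an allow-list projection over the response document. Field names here are illustrative, not the real VoD metadata schema:

```go
package main

import "fmt"

// filterFields keeps only the metadata fields a consumer's API key is
// entitled to see, dropping everything else from the response.
func filterFields(doc map[string]any, allowed []string) map[string]any {
	out := make(map[string]any, len(allowed))
	for _, f := range allowed {
		if v, ok := doc[f]; ok {
			out[f] = v
		}
	}
	return out
}

func main() {
	doc := map[string]any{
		"title":        "Some Film",
		"platform":     "svc-a",
		"internalCost": 12.5, // must never reach external consumers
	}
	fmt.Println(filterFields(doc, []string{"title", "platform"}))
}
```

In the Kong plugin, the allow-list is resolved from the API key's access policy before the response body is rewritten.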
Digital Document & E-Signature Platform Python | Flask | OpenCV | Celery
Developed a high-trust digital document platform for legally binding e-signatures and multi-stage verification with biometric validation, streamlining the signing process and ensuring compliance.
Built a Flask-based API gateway to manage the entire document lifecycle, from secure uploads to multi-party signature workflows with comprehensive audit trails.
Integrated OpenCV to perform real-time biometric validation by matching a user’s signature against their government ID, adding a critical layer of security.
Utilized Celery workers and Redis to offload CPU-intensive verification tasks (e.g., watermarking, hash validation) to background queues, ensuring the API remained responsive.
AI-Driven Enterprise Automation Platform AWS EKS | Wazuh | Grafana
Designed and implemented a scalable, AI-driven microservices architecture on AWS, enhancing enterprise automation and security.
Architected a self-service platform on Kubernetes (EKS), creating custom controllers and operators to manage resources, user onboarding, and RBAC policies.
Integrated Wazuh as a core component for real-time security monitoring, ensuring the platform remained compliant and secure against threats.
Leveraged Grafana to build custom dashboards, providing real-time visibility into system performance, security events, and AI model health.
Enterprise API Gateway for Legacy System Modernization PHP | Symfony | RabbitMQ | Redis | Kubernetes
Architected and developed a high-performance API Gateway using Symfony, which was instrumental in decoupling over 20 modern applications from a monolithic legacy system.
Implemented fine-grained access control and dynamic routing policies, securing API interactions and ensuring that each consumer application had only the permissions it needed.
Orchestrated asynchronous data synchronization workflows via RabbitMQ, handling complex event-driven updates and ensuring eventual consistency across distributed services.
Integrated Redis for robust caching, dynamic rate limiting, and API key revocation, which significantly improved response times and system resilience under heavy load.
Real-Time IoT Telemetry Backend PHP | Slim | Kafka | PostgreSQL
Engineered a scalable, low-latency backend service using the Slim framework to ingest and process millions of real-time IoT telemetry events daily, achieving 99.99% data ingestion reliability.
Integrated directly with Apache Kafka to consume high-volume data streams, implementing consumer groups for parallel processing and fault tolerance.
Developed a custom event processing pipeline to enrich, filter, and route telemetry data to various downstream analytical and alerting systems.
Designed an optimized PostgreSQL database schema for efficient storage and querying of time-series event data, supporting high-speed inserts and complex aggregations.
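The routing stage of the pipeline can be reduced to a predicate per downstream sink. A minimal sketch (in Go rather than the project's PHP, with illustrative field names): readings over an alert threshold go to the alerting system, everything else to analytics.

```go
package main

import "fmt"

// Event is a simplified telemetry reading.
type Event struct {
	Device string
	Metric string
	Value  float64
}

// route splits enriched events between the analytics and alerting
// sinks based on a per-metric threshold.
func route(events []Event, threshold float64) (analytics, alerts []Event) {
	for _, e := range events {
		if e.Value >= threshold {
			alerts = append(alerts, e)
		} else {
			analytics = append(analytics, e)
		}
	}
	return analytics, alerts
}

func main() {
	a, al := route([]Event{{"d1", "temp", 20}, {"d2", "temp", 95}}, 80)
	fmt.Println(len(a), len(al))
}
```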