Home

🚀 Backend Developer | Building Fast, Stable & Secure Systems

Hello! I’m Rizqi Mulki, a Backend Developer who thrives on crafting high-performance software that doesn’t just function—it excels. My mission is to engineer solutions where speed, stability, and security are non-negotiable, because in today’s digital landscape, cutting corners isn’t an option.

With deep expertise in PHP, Python, and Node.js, I design scalable backends that handle heavy traffic seamlessly. My database skills span MariaDB, MySQL, and PostgreSQL, ensuring data is not just stored but optimized for peak efficiency. On the infrastructure side, I’m at home in Linux environments and on cloud platforms like AWS and GCP, deploying resilient systems with precision.

Beyond code, I believe in sharing knowledge and growing together. I hope this website serves as both a showcase of my work and a resource for fellow developers. Whether it’s refining APIs, optimizing queries, or hardening systems against threats, I’m passionate about writing clean, maintainable code that stands the test of time.

Let’s build something remarkable—fast, stable, and secure.

rizqimulkisrc@gmail.com | +628526865056


Latest Posts

Creating Your Own LLM Training Pipeline: End-to-End Implementation

Introduction Building a complete LLM training pipeline from scratch represents one of the most challenging…
Read More

Building Production-Ready LLM Systems: Scaling, Monitoring, and Deployment

Introduction The transition from experimental Large Language Model (LLM) prototypes to production-ready systems represents one…
Read More

Advanced Inference Optimization: KV-Caching, Speculative Decoding, and Parallelism

Introduction The deployment of Large Language Models (LLMs) in production environments presents significant computational challenges…
Read More

LLM Security: Jailbreaking, Adversarial Attacks, and Defense Strategies

Introduction As Large Language Models (LLMs) become increasingly integrated into critical applications—from healthcare diagnostics to…
Read More

Distributed Training: Multi-GPU and Multi-Node LLM Training

The exponential growth in Large Language Model (LLM) size has made distributed training not just…
Read More

LLM Optimization: Quantization, Pruning, and Distillation Techniques

As Large Language Models (LLMs) continue to grow in size and capability, the need for…
Read More

Building Custom LLM Architectures: Design Principles and Trade-offs

Building custom Large Language Model (LLM) architectures requires a deep understanding of fundamental design principles…
Read More

Advanced RAG Techniques: Hybrid Search, Reranking, and Graph RAG

Retrieval-Augmented Generation (RAG) has transformed how we build knowledge-intensive AI applications, enabling language models to…
Read More

Scaling Laws and Emergent Abilities in Large Language Models

The development of large language models has revealed one of the most fascinating phenomena in…
Read More

LLM Architectures Beyond Transformers: Mamba, RetNet, and Alternatives

The transformer architecture has dominated the landscape of large language models since its introduction in…
Read More