

Tutorials, Errors and Exceptions
It's a journey to understand things better. This blog collects tutorials, the errors and exceptions encountered along the way, their resolutions, and lots of learning.



Building DevOps Intelligence using MCP Server with Spring AI: Tools, Challenges & Solutions
Devops Intelligence Today, I successfully built and deployed a Model Context Protocol (MCP) server using Spring AI that exposes real DevOps infrastructure through intelligent tools. But the journey? Let's just say it involved more debugging than coding. In this post, I'll walk you through: What we built (the DevOps Intelligence Platform) The tools we created (K8s, Prometheus, Logs, Deployments) Every challenge we faced (and how we solved them) Why Spring AI 2.0.0-M2 is the s…
Ankit Agrahari
Mar 14 · 7 min read


Production Monitoring: You Can't Fix What You Can't See
Previous parts: Part 1: Kafka Producer | Part 2: Consumer + DLQ | Part 3: Real-Time Aggregations | Part 4: Docker + Kubernetes Infographics - NotebookLM You know that feeling when your app is running in production and someone asks "Is everything okay?" and you respond with "...I think so?" Yeah, that's not good enough. After deploying StreamMetrics to Kubernetes with a 3-node KRaft Kafka cluster, dockerized microservices, and validated 10K events/sec throughput, I realized…
Ankit Agrahari
Mar 7 · 8 min read


From Localhost to Kubernetes: Deploying StreamMetrics at Scale
Part 4 of the StreamMetrics Series Previous parts: Part 1: Kafka Producer | Part 2: Consumer + DLQ | Part 3: Real-Time Aggregations Streammetric Bottleneck Explainer by NotebookLM Love how after building something locally, you think "okay, it works on my machine!" and then reality hits when you try to deploy it. Docker says "works on my machine" is not an excuse anymore, and Kubernetes says "hold my beer, let's make it production-ready." This is the tale of taking Str…
Ankit Agrahari
Mar 2 · 8 min read


Real-Time Aggregations with Kafka Streams at 10K Events/Sec
Part 3 of the StreamMetrics Series Previous parts: Part 1: Kafka Producer | Part 2: Consumer + DLQ | Part 4: From Localhost to Kubernetes Building production-grade streaming analytics: windows, state stores, and performance validation Overview In Parts 1 and 2, we built a Kafka producer and consumer that process individual events reliably. But processing 10,000 raw events per second creates a new problem: How do you extract insights from that fire hose of data? Enter Kafka…
Ankit Agrahari
Feb 21 · 7 min read


Building Production-Grade Apache Kafka Consumer Patterns
Part 2 of the StreamMetrics Series Previous parts: Part 1: Kafka Producer | Part 3: Real-Time Aggregations | Part 4: From Localhost to Kubernetes A deep dive into manual offset management, dead letter queues, and observability with Spring Boot + Apache Kafka. This is Part 2 of the StreamMetrics series: in Part 1 we built the producer, and today we add the consumer. Why This Matters Most tutorials show you how to build a Kafka consumer. They show you @KafkaListener…
Ankit Agrahari
Feb 18 · 6 min read


Building a Production Kafka Producer
Part 1 of the StreamMetrics Series Previous parts: Part 2: Consumer + DLQ | Part 3: Real-Time Aggregations | Part 4: From Localhost to Kubernetes Love how it rhymes, "Production Kafka Producer". Here's to coding in an era of AI Agents. They suggest, and are confident that this time it will work, but then this cluster has its own plan and starts misbehaving: sometimes it listens, and at other times it feels cornered by its siblings. This is the tale of making the brothers…
Ankit Agrahari
Feb 16 · 6 min read


Building an AI Tutor with Spring AI, Ollama, and Vaadin
AI Tutor is a web application that uses Spring Boot and Spring AI on the backend, an Ollama-hosted LLM (e.g. Google's Gemma3) for natural language understanding, and Vaadin for the rich web UI. Its core innovation is a Retrieval-Augmented Generation (RAG) pipeline: when the user uploads course materials (PDFs, text, etc.), the app splits them into chunks, creates vector embeddings, and stores these in a PGVector-enabled PostgreSQL database. At chat time, similar chunk…
Ankit Agrahari
Jan 11 · 5 min read










You think HashMap is always O(1). It isn't. Here's what actually happens. 🧵

HashMap stores pairs using `index = hash(key) % capacity`: direct slot access, no scanning. Pure O(1). Until two keys land on the same slot. That's a collision, and it's not a bug but a mathematical inevitability.

Two ways to fix it 👇

🔗 Chaining: each bucket holds a linked list, and collisions append to the list. Simple, handles high load, easy deletion. Downside: pointer overhead, poor cache performance, and chains degrade to O(n) at high load. Java's fix? Once a bucket reaches 8 nodes (and the table capacity is at least 64), the list converts to a Red-Black Tree, giving O(log n) worst case; below that capacity, the table resizes instead of treeifying.

📦 Open Addressing: no linked lists. Collision at slot X? Probe X+1, X+2, ... until an empty slot is found. Cache-friendly, with no per-entry pointer overhead. Downside: deletion needs tombstone markers, and keys cluster together, making future collisions worse. Used by C++, Go, and Redis.

⚖️ Load Factor = entries ÷ capacity
🟢 Below 0.5 → rare collisions, but wasted memory
🟠 0.75 → Java's default; exceeding it triggers a resize + rehash
🔴 Above 0.9 → collision cascade, O(n) territory

Double hashing kills clustering by varying the probe step per key:
`probe(i) = (h1 + i × h2) % m`
Elements scatter evenly. No bunching. O(1) preserved.

The truth: HashMap is O(1) until a bad hash function, the wrong load factor, or the wrong collision strategy turns it into O(n).

Three things protect you:
→ A well-distributed hash function
→ A load factor under 0.75
→ The right collision strategy for your use case

💬 Java interview question: what happens when a chain hits 8 nodes?
#java #hashmap #datastructures #dsa #algorithms
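The chaining mechanics above can be sketched in a few lines of Java. This is a minimal illustrative toy, not `java.util.HashMap` internals; the class and method names (`ChainedMap`, `bucketIndex`) are made up for the sketch, and it skips resizing and treeification entirely:

```java
import java.util.LinkedList;

// Minimal separate-chaining hash table: index = hash(key) % capacity,
// and colliding keys are appended to a per-bucket linked list.
class ChainedMap<K, V> {
    private static class Entry<K, V> {
        final K key;
        V value;
        Entry(K k, V v) { key = k; value = v; }
    }

    private final LinkedList<Entry<K, V>>[] buckets;

    @SuppressWarnings("unchecked")
    ChainedMap(int capacity) {
        buckets = new LinkedList[capacity];
        for (int i = 0; i < capacity; i++) buckets[i] = new LinkedList<>();
    }

    // Mask off the sign bit so the modulo result is a valid array index.
    private int bucketIndex(K key) {
        return (key.hashCode() & 0x7fffffff) % buckets.length;
    }

    void put(K key, V value) {
        LinkedList<Entry<K, V>> chain = buckets[bucketIndex(key)];
        for (Entry<K, V> e : chain) {
            if (e.key.equals(key)) { e.value = value; return; } // update in place
        }
        chain.add(new Entry<>(key, value)); // collision → append to the chain
    }

    V get(K key) {
        for (Entry<K, V> e : buckets[bucketIndex(key)]) {
            if (e.key.equals(key)) return e.value;
        }
        return null; // key absent
    }

    public static void main(String[] args) {
        // Tiny capacity forces collisions, so lookups walk a chain: the O(n) path.
        ChainedMap<String, Integer> map = new ChainedMap<>(4);
        map.put("alpha", 1);
        map.put("beta", 2);
        map.put("gamma", 3);
        System.out.println(map.get("beta"));  // 2
        System.out.println(map.get("delta")); // null
    }
}
```

With only 4 buckets, several of these keys share a slot, which is exactly the situation where real HashMap's load-factor-driven resize (and, past 8 nodes, treeification) steps in.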


A Bloom filter is a space-efficient probabilistic data structure used to test whether an element is a member of a set. The key trade-off: it can tell you
- with certainty that an element is not in the set,
- but only that an element might be in the set, never with 100% certainty.

How it works:
- A bit array of size m, initialized to all zeros
- k independent hash functions, each mapping an element to a position in [0, m-1]
- To insert element x:
  - Run x through all k hash functions → get k positions
  - Set the bit at each of those positions to 1
- To check whether element x exists:
  - Run x through all k hash functions → get k positions
  - If all bits at those positions are 1 → probably in the set
  - If any bit is 0 → definitely not in the set

The accompanying visual shows how the bits are set during insertion. (Note that a standard Bloom filter does not support deletion: clearing a bit could break membership checks for other elements, which is why counting Bloom filters exist.)

If you want the tool for the visual representations, comment on this post, and I'll share it.
#bloomfilter #datastructure #design #softwareengineering #backenddeveloper
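The insert/check steps above can be sketched in Java with a `BitSet`. This is a minimal sketch, not a production filter: `SimpleBloomFilter` is a made-up name, and deriving the k positions from two base hashes (the Kirsch-Mitzenmacher trick) is one common implementation choice, not the only one:

```java
import java.util.BitSet;

// Minimal Bloom filter: m bits, k hash positions per element.
// No false negatives; false positives possible.
class SimpleBloomFilter {
    private final BitSet bits;
    private final int m; // bit array size
    private final int k; // number of hash functions

    SimpleBloomFilter(int m, int k) {
        this.bits = new BitSet(m);
        this.m = m;
        this.k = k;
    }

    // i-th position derived from two base hashes: (h1 + i * h2) mod m.
    // Forcing h2 odd is a crude way to avoid a degenerate zero step.
    private int position(String item, int i) {
        int h1 = item.hashCode();
        int h2 = (h1 >>> 16) | 1;
        return Math.floorMod(h1 + i * h2, m);
    }

    void add(String item) {
        for (int i = 0; i < k; i++) bits.set(position(item, i)); // set all k bits
    }

    boolean mightContain(String item) {
        for (int i = 0; i < k; i++) {
            if (!bits.get(position(item, i))) return false; // definitely not in set
        }
        return true; // probably in set
    }

    public static void main(String[] args) {
        SimpleBloomFilter filter = new SimpleBloomFilter(1024, 3);
        filter.add("kafka");
        filter.add("spring");
        System.out.println(filter.mightContain("kafka")); // true (never a false negative)
        System.out.println(filter.mightContain("vaadin")); // most likely false
    }
}
```

Note the asymmetry: an inserted element is always reported present, while a query for an absent element returns false unless its k positions happen to collide with bits set by other insertions.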
















