Blog #47: The Long-Term Tab Crash – What Happens When Users Never Close Their Tabs?
Analyzing the decision to handle memory leaks in an operations monitoring dashboard running 24/7.
I once worked on a Smart City project. The team had 8 members, and the data scale was large: the system received signals from thousands of traffic sensors in real time. A specific characteristic of the users (switchboard operators) was that they opened a monitoring dashboard on a large screen and never closed the tab. Some tabs were kept open continuously for 5-7 days.
The Problem: Caterpillars Eating Through Memory
After about 24 hours of continuous operation, the website started to slow down. After 48 hours, the browser would typically crash with "Aw, Snap!" (out of memory).
We had fallen into a typical Single Page Application (SPA) trap: we often optimize for page transitions, but forget to clean up a single page that lives for too long. Every time new data arrived via WebSocket, old objects were not fully released; they accumulated like dust until they filled the machine's entire RAM.
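The leak pattern itself is mundane. A minimal sketch (names are hypothetical, not from our codebase): each incoming message is appended to a long-lived array that nothing ever trims, so memory grows for as long as the tab stays open.

```javascript
// Hypothetical sketch of the leak: a module-level array that only grows.
const allLogs = [];

function handleMessage(msg) {
  allLogs.push(msg); // nothing ever removes old entries
}

// Simulate a burst of sensor messages arriving over WebSocket
for (let i = 0; i < 100000; i++) {
  handleMessage({ sensorId: i % 1000, ts: Date.now() });
}
console.log(allLogs.length); // 100000, and still climbing in production
```

Multiply this by several data streams and several days of uptime, and the "Aw, Snap!" crash follows.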
Options Considered
We discussed 2 solutions:
Option 1: Automatic F5 (The Brutal Way)
- Solution: Set a countdown timer; every 12 hours, the page automatically reloads entirely.
- Pros: Extremely effective, releasing 100% of memory. Easy to implement (essentially one line of code: window.location.reload()).
- Cons: Very poor UX. If the page suddenly reloads just when an operator is handling an emergency, it would be a disaster. Current filter states would also be lost.
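Had we chosen this option, the implementation really is that short. A hedged sketch (function name hypothetical; the reload action is injected so the scheduling logic can be exercised outside a browser):

```javascript
// Hypothetical Option 1 helper: schedule a full page reload after a
// fixed interval. The reload callback is injected for testability.
function scheduleReload(reloadFn, intervalMs = 12 * 60 * 60 * 1000) {
  return setTimeout(reloadFn, intervalMs);
}

// Browser usage (not runnable under Node):
// scheduleReload(() => window.location.reload());
```

The brutality is visible in the sketch: the timer fires regardless of what the operator is doing at that moment.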
Option 2: Data Lifecycle Management with Queue & Cleanup (The Elegant Way)
- Solution: Limit the number of records displayed in the UI. When the 1001st record arrives, the 1st record is removed from memory and the DOM. Also perform "manual garbage collection" by assigning null to unused variables so the garbage collector can reclaim them.
- Pros: Smooth and professional. The app can run for months without RAM growth.
- Cons: Requires auditing the entire source code for lingering closures and forgotten event listeners. A single small mistake can undermine the whole strategy.
Final Decision and Analysis
I requested the team to implement Option 2.
// Example of limiting the data set in the store (Zustand)
const useDashboardStore = create((set) => ({
  logs: [],
  addLog: (newLog) =>
    set((state) => {
      const updatedLogs = [...state.logs, newLog];
      // Keep at most the 500 newest records to cap RAM usage
      if (updatedLogs.length > 500) {
        updatedLogs.shift();
      }
      return { logs: updatedLogs };
    }),
}));
Impact on Performance: The key lies in reducing DOM load. Instead of rendering 10,000 log lines, the browser only has to manage 500. The CPU load also drops significantly because layout no longer has to be computed for elements that have been removed.
Impact on Maintainability: The code becomes more complex, because in every useEffect we must ensure the cleanup function (return () => ...) runs correctly to remove socket listeners and clear timers.
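That discipline can be sketched without React (all names here are hypothetical): every effect may return a teardown, and unmount must run every teardown, which mirrors what React does with a useEffect cleanup function.

```javascript
// React-free simulation of the effect/cleanup contract.
function createEffectScope() {
  const teardowns = [];
  return {
    effect(setup) {
      const teardown = setup();
      if (typeof teardown === 'function') teardowns.push(teardown);
    },
    unmount() {
      teardowns.forEach((t) => t());
      teardowns.length = 0; // drop references so closures can be collected
    },
  };
}

// Simulated mount: a timer effect, mirroring useEffect with cleanup.
const scope = createEffectScope();
let ticks = 0;
scope.effect(() => {
  const id = setInterval(() => { ticks++; }, 1000);
  return () => clearInterval(id); // the cleanup React would call on unmount
});
scope.unmount(); // timer cleared; nothing left running
```

Skip one `return () => ...` in one effect and that timer or listener outlives the component, which is exactly the "single small mistake" that can undo the whole strategy.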
Impact on Team: This was the biggest lesson for the juniors about how memory actually works in JavaScript. They stopped writing code with the "let it be, the garbage collector will take care of it" mentality.
Self-Reflection: Was it Over-engineering?
At one point I almost chose Option 1 (auto-reload) for speed, to meet the deadline. But I realized that avoiding memory leaks doesn't make us better engineers; it just papers over a lack of rigor.
If I were starting over, I would still choose manual memory management. Thanks to this incident, our system achieved extremely high reliability, the soul of public infrastructure applications.
Notes on the silent battle against resource exhaustion.
Series • Part 47 of 50