Developed an intelligent adaptation framework to solve the problem of "Data Quality Drift" in real-time computer vision. In many real-world scenarios, object detection accuracy plummets due to environmental factors like sensor noise, motion blur, or poor lighting.
By building image-quality-aware switching logic, I created a system that dynamically selects the most suitable YOLOv5 model variant at runtime. It falls back to heavier, more robust variants when conditions degrade and runs lightweight variants when the input is clear, keeping accuracy high without sacrificing speed.
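The switching logic can be sketched as a simple threshold table mapping an image-quality score to a model variant. This is an illustrative sketch, not the project's actual API: the variant names follow the standard YOLOv5 family (nano through extra-large), while the score range, thresholds, and function name are assumptions.

```python
# Hypothetical quality-aware model selection (illustrative names and thresholds).
# Higher quality scores allow lighter, faster models; degraded input falls back
# to heavier, more robust variants.

YOLO_VARIANTS = [
    (0.8, "yolov5n"),  # clean input: nano model is fast and accurate enough
    (0.5, "yolov5s"),  # mild degradation: small model
    (0.0, "yolov5x"),  # heavy noise/blur: largest model for robustness
]

def select_variant(quality_score: float) -> str:
    """Pick the lightest YOLOv5 variant whose quality threshold is met."""
    for threshold, variant in YOLO_VARIANTS:
        if quality_score >= threshold:
            return variant
    return YOLO_VARIANTS[-1][1]  # fallback: most robust model
```

In practice the quality score would come from a cheap per-frame metric (e.g. a blur or noise estimate) so the selection itself adds negligible latency.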
The system is built on a self-adaptive MAPE-K loop (Monitor, Analyze, Plan, Execute over a shared Knowledge base). It leverages the UPISAS framework to manage model lifecycles and synchronizes data processing across distributed Docker containers.
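One cycle of the loop above can be sketched as a single function over a shared knowledge store. This is a minimal illustration under assumed names (the real system delegates this lifecycle to UPISAS): the `knowledge` dict, threshold, and variant names are all hypothetical.

```python
# Minimal MAPE-K cycle sketch (illustrative; not the project's actual code).
# The knowledge base is modeled as a dict shared across cycles.

def mape_k_step(knowledge: dict, quality_score: float) -> dict:
    # Monitor: record the latest observed image quality
    knowledge["quality"] = quality_score
    # Analyze: decide whether conditions have degraded
    degraded = knowledge["quality"] < knowledge["quality_threshold"]
    # Plan: choose the target model variant for current conditions
    target = "yolov5x" if degraded else "yolov5n"
    # Execute: switch models only when the plan differs from what is running
    if target != knowledge["active_model"]:
        knowledge["active_model"] = target
    return knowledge

knowledge = {"quality_threshold": 0.5, "active_model": "yolov5n"}
knowledge = mape_k_step(knowledge, 0.3)  # degraded input triggers a switch
```

Keeping the decision state in the knowledge base is what lets the Execute step avoid redundant model reloads between cycles.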
The framework proved resilient in unstable environments: when evaluated against standard rate-based adaptation strategies, the quality-driven approach achieved higher detection confidence while reducing computational overhead.
By reserving resource-heavy models for inputs whose quality demanded them, the system stayed competitive in both accuracy and speed. This project demonstrates that building environmental awareness directly into AI pipelines is key to deploying robust, real-world machine learning systems.