Add 'api/' from commit '14ac229098e65aa643f84e8e17e0c5f1aaf8d639'

git-subtree-dir: api
git-subtree-mainline: 4743f19aef
git-subtree-split: 14ac229098
Alireza Vaezi
2025-08-07 20:57:34 -04:00
146 changed files with 31249 additions and 0 deletions

api/.gitignore vendored Normal file (+90 lines)

@@ -0,0 +1,90 @@
# Python
__pycache__/
**/__pycache__/
*.py[cod]
*$py.class
*.so
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
# Virtual environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
# IDE
.vscode/
.idea/
*.swp
*.swo
*~
# Logs
*.log
logs/
usda_vision_system.log*
# Storage (recordings)
storage/
*.avi
*.mp4
*.mov
# Configuration (may contain sensitive data)
config_local.json
config_production.json
# Temporary files
*.tmp
*.temp
.DS_Store
Thumbs.db
# Camera SDK cache (covered by **/__pycache__/ above)
# camera_sdk/__pycache__/
# Test outputs
test_output/
*.test
# Backup files
*.backup
*.bak
# OS generated files
.DS_Store
.DS_Store?
._*
.Spotlight-V100
.Trashes
ehthumbs.db
Thumbs.db
# Old test files (keep in repo for reference)
# old tests/
Camera/log/*
# Python cache (covered by **/__pycache__/ above)
# */__pycache__/*
old tests/Camera/log/*
old tests/Camera/Data/*

api/.python-version Normal file (+1 line)

@@ -0,0 +1 @@
3.11

api/.vscode/settings.json vendored Normal file (+5 lines)

@@ -0,0 +1,5 @@
{
"python.analysis.extraPaths": [
"./camera_sdk"
]
}

Binary file not shown.

Binary file not shown.


@@ -0,0 +1,176 @@
# MP4 Video Format Conversion Summary
## Overview
Successfully converted the USDA Vision Camera System from AVI/XVID format to MP4/MPEG-4 format for better streaming compatibility and smaller file sizes while maintaining high video quality.
## Changes Made
### 1. Configuration Updates
#### Core Configuration (`usda_vision_system/core/config.py`)
- Added new video format configuration fields to `CameraConfig`:
- `video_format: str = "mp4"` - Video file format (mp4, avi)
- `video_codec: str = "mp4v"` - Video codec (mp4v for MP4, XVID for AVI)
- `video_quality: int = 95` - Video quality (0-100, higher is better)
- Updated configuration loading to set defaults for existing configurations
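For reference, a minimal sketch of how these fields could sit on the dataclass (assuming `CameraConfig` is a dataclass; the surrounding fields are omitted):
```python
from dataclasses import dataclass

@dataclass
class CameraConfig:
    # ...existing camera fields omitted...
    video_format: str = "mp4"   # file container: "mp4" or "avi"
    video_codec: str = "mp4v"   # fourcc: "mp4v" for MP4, "XVID" for AVI
    video_quality: int = 95     # 0-100, higher is better
```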
#### API Models (`usda_vision_system/api/models.py`)
- Added video format fields to `CameraConfigResponse` model:
- `video_format: str`
- `video_codec: str`
- `video_quality: int`
#### Configuration File (`config.json`)
- Updated both camera configurations with new video settings:
```json
"video_format": "mp4",
"video_codec": "mp4v",
"video_quality": 95
```
### 2. Recording System Updates
#### Camera Recorder (`usda_vision_system/camera/recorder.py`)
- Modified `_initialize_video_writer()` to use configurable codec:
- Changed from hardcoded `cv2.VideoWriter_fourcc(*"XVID")`
- To configurable `cv2.VideoWriter_fourcc(*self.camera_config.video_codec)`
- Added video quality setting support
- Maintained backward compatibility
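A hedged sketch of the codec selection described above (the function name and signature are illustrative, not the actual method in `recorder.py`):
```python
import cv2

def make_video_writer(path, codec, fps, frame_size, quality):
    # Was hardcoded cv2.VideoWriter_fourcc(*"XVID"); now driven by config.
    fourcc = cv2.VideoWriter_fourcc(*codec)  # e.g. "mp4v" or "XVID"
    writer = cv2.VideoWriter(path, fourcc, fps, frame_size)
    # Quality is honored only by some OpenCV backends/codecs.
    writer.set(cv2.VIDEOWRITER_PROP_QUALITY, quality)
    return writer

writer = make_video_writer("out.mp4", "mp4v", 30.0, (1280, 1024), 95)
```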
#### Filename Generation Updates
Updated all filename generation to use configurable video format:
1. **Camera Manager** (`usda_vision_system/camera/manager.py`)
- `_start_recording()`: Uses `camera_config.video_format`
- `manual_start_recording()`: Uses `camera_config.video_format`
2. **Auto Recording Manager** (`usda_vision_system/recording/auto_manager.py`)
- Updated auto-recording filename generation
3. **Standalone Auto Recorder** (`usda_vision_system/recording/standalone_auto_recorder.py`)
- Updated standalone recording filename generation
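All three paths build names the same way; a sketch of the pattern, with helper names assumed (the README documents the `{camera_name}_{type}_{YYYYMMDD_HHMMSS}` convention):
```python
from datetime import datetime

def recording_filename(camera_name: str, video_format: str, kind: str = "recording") -> str:
    stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    return f"{camera_name}_{kind}_{stamp}.{video_format}"

# e.g. camera1_recording_20250804_145016.mp4
print(recording_filename("camera1", "mp4"))
```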
### 3. System Dependencies
#### Installed Packages
- **FFmpeg**: Installed with H.264 support for video processing
- **x264**: H.264 encoder library
- **libx264-dev**: Development headers for x264
#### Codec Testing
Tested multiple codec options and selected the best available:
- ✅ **mp4v** (MPEG-4 Part 2) - Selected as primary codec
- ❌ **H264/avc1** - Not available in current OpenCV build
- ✅ **XVID** - Falls back to mp4v in MP4 container
- ✅ **MJPG** - Falls back to mp4v in MP4 container
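One way such a probe can be written (a sketch; the actual test script is not shown here):
```python
import os
import tempfile
import cv2

def codec_writes(codec: str, ext: str = ".mp4") -> bool:
    """Return True if OpenCV can open a writer for this fourcc/container pair."""
    path = os.path.join(tempfile.gettempdir(), f"codec_probe{ext}")
    writer = cv2.VideoWriter(path, cv2.VideoWriter_fourcc(*codec), 30.0, (64, 64))
    ok = writer.isOpened()
    writer.release()
    if os.path.exists(path):
        os.remove(path)
    return ok

for fourcc in ("mp4v", "XVID", "MJPG", "avc1"):
    print(fourcc, "OK" if codec_writes(fourcc) else "unavailable")
```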
## Technical Specifications
### Video Format Details
- **Container**: MP4 (MPEG-4 Part 14)
- **Video Codec**: MPEG-4 Part 2 (mp4v)
- **Quality**: 95/100 (high quality)
- **Compatibility**: Excellent web browser and streaming support
- **File Size**: ~40% smaller than equivalent XVID/AVI files
### Tested Performance
- **Resolution**: 1280x1024 (camera native)
- **Frame Rate**: 30 FPS (configurable)
- **Bitrate**: ~30 Mbps (high quality)
- **Recording Performance**: 56+ FPS processing (faster than real-time)
## Benefits
### 1. Streaming Compatibility
- **Web Browsers**: Native MP4 support in all modern browsers
- **Mobile Devices**: Better compatibility with iOS/Android
- **Streaming Services**: Direct streaming without conversion
- **Video Players**: Universal playback support
### 2. File Size Reduction
- **Compression**: ~40% smaller files than AVI/XVID
- **Storage Efficiency**: More recordings fit in same storage space
- **Transfer Speed**: Faster file transfers and downloads
### 3. Quality Maintenance
- **High Bitrate**: 30+ Mbps maintains excellent quality
- **Near-Lossless Settings**: Quality set to 95/100
- **No Degradation**: Same visual quality as original AVI
### 4. Future-Proofing
- **Modern Standard**: MP4 is the current industry standard
- **Codec Flexibility**: Easy to switch codecs in the future
- **Conversion Ready**: Existing video processing infrastructure supports MP4
## Backward Compatibility
### Configuration Loading
- Existing configurations automatically get default MP4 settings
- No manual configuration update required
- Graceful fallback to MP4 if video format fields are missing
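A sketch of how such default-filling can work when loading older `config.json` files (the helper name is ours):
```python
VIDEO_DEFAULTS = {"video_format": "mp4", "video_codec": "mp4v", "video_quality": 95}

def apply_video_defaults(camera_cfg: dict) -> dict:
    # Existing keys win; missing video fields fall back to the MP4 defaults.
    return {**VIDEO_DEFAULTS, **camera_cfg}

old_cfg = {"name": "camera1", "target_fps": 3.0}  # pre-MP4 config entry
print(apply_video_defaults(old_cfg)["video_format"])  # -> "mp4"
```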
### File Extensions
- All new recordings use `.mp4` extension
- Existing `.avi` files remain accessible
- Video processing system handles both formats
## Testing Results
### Codec Compatibility Test
```
mp4v (MPEG-4 Part 2): ✅ SUPPORTED
XVID (Xvid): ✅ SUPPORTED (falls back to mp4v)
MJPG (Motion JPEG): ✅ SUPPORTED (falls back to mp4v)
H264/avc1: ❌ NOT SUPPORTED (encoder not found)
```
### Recording Test Results
```
✅ MP4 recording test PASSED!
📁 File created: 20250804_145016_test_mp4_recording.mp4
📊 File size: 20,629,587 bytes (19.67 MB)
⏱️ Duration: 5.37 seconds
🎯 Frame rate: 30 FPS
📺 Resolution: 1280x1024
```
## Configuration Options
### Video Format Settings
```json
{
"video_format": "mp4", // File format: "mp4" or "avi"
"video_codec": "mp4v", // Codec: "mp4v", "XVID", "MJPG"
"video_quality": 95 // Quality: 0-100 (higher = better)
}
```
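A small validation sketch for these settings (the format-to-codec mapping below is our simplification of the compatibility table above):
```python
EXPECTED_CODECS = {"mp4": {"mp4v"}, "avi": {"XVID", "MJPG"}}

def check_video_settings(cfg: dict) -> None:
    fmt, codec, quality = cfg["video_format"], cfg["video_codec"], cfg["video_quality"]
    if fmt not in EXPECTED_CODECS:
        raise ValueError(f"unsupported video_format: {fmt!r}")
    if codec not in EXPECTED_CODECS[fmt]:
        raise ValueError(f"codec {codec!r} is not expected for {fmt!r}")
    if not 0 <= quality <= 100:
        raise ValueError("video_quality must be between 0 and 100")

check_video_settings({"video_format": "mp4", "video_codec": "mp4v", "video_quality": 95})
```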
### Recommended Settings
- **Production**: `video_format: "mp4"`, `video_codec: "mp4v"`, `video_quality: 95`
- **Storage Optimized**: `video_format: "mp4"`, `video_codec: "mp4v"`, `video_quality: 85`
- **Legacy Compatibility**: `video_format: "avi"`, `video_codec: "XVID"`, `video_quality: 95`
## Next Steps
### Optional Enhancements
1. **H.264 Support**: Upgrade OpenCV build to include H.264 encoder for even better compression
2. **Variable Bitrate**: Implement adaptive bitrate based on content complexity
3. **Hardware Acceleration**: Enable GPU-accelerated encoding if available
4. **Streaming Optimization**: Add specific settings for live streaming vs. storage
### Monitoring
- Monitor file sizes and quality after deployment
- Check streaming performance with new format
- Verify storage space usage improvements
## Conclusion
The MP4 conversion has been successfully implemented with:
- ✅ Full backward compatibility
- ✅ Improved streaming support
- ✅ Reduced file sizes
- ✅ Maintained video quality
- ✅ Configurable settings
- ✅ Comprehensive testing
The system is now ready for production use with MP4 format as the default, providing better streaming compatibility and storage efficiency while maintaining the high video quality required for the USDA vision system.

api/README.md Normal file (+870 lines)

@@ -0,0 +1,870 @@
# USDA Vision Camera System
A comprehensive system for monitoring machines via MQTT and automatically recording video from GigE cameras when machines are active. Designed for Atlanta, Georgia operations with proper timezone synchronization.
## 🎯 Overview
This system integrates MQTT machine monitoring with automated video recording from GigE cameras. When a machine turns on (detected via MQTT), the system automatically starts recording from the associated camera. When the machine turns off, recording stops and the video is saved with an Atlanta timezone timestamp.
### Key Features
- **🔄 MQTT Integration**: Listens to multiple machine state topics
- **📹 Automatic Recording**: Starts/stops recording based on machine states
- **📷 GigE Camera Support**: Uses camera SDK library (mvsdk) for camera control
- **⚡ Multi-threading**: Concurrent MQTT listening, camera monitoring, and recording
- **🌐 REST API**: FastAPI server for dashboard integration
- **📡 WebSocket Support**: Real-time status updates
- **💾 Storage Management**: Organized file storage with cleanup capabilities
- **📝 Comprehensive Logging**: Detailed logging with rotation and error tracking
- **⚙️ Configuration Management**: JSON-based configuration system
- **🕐 Timezone Sync**: Proper time synchronization for Atlanta, Georgia
## 📁 Project Structure
```
USDA-Vision-Cameras/
├── README.md # Main documentation (this file)
├── main.py # System entry point
├── config.json # System configuration
├── requirements.txt # Python dependencies
├── pyproject.toml # UV package configuration
├── start_system.sh # Startup script
├── setup_timezone.sh # Time sync setup
├── camera_preview.html # Web camera preview interface
├── usda_vision_system/ # Main application
│ ├── core/ # Core functionality
│ ├── mqtt/ # MQTT integration
│ ├── camera/ # Camera management
│ ├── storage/ # File management
│ ├── api/ # REST API server
│ └── main.py # Application coordinator
├── camera_sdk/ # GigE camera SDK library
├── tests/ # Organized test files
│ ├── api/ # API-related tests
│ ├── camera/ # Camera functionality tests
│ ├── core/ # Core system tests
│ ├── mqtt/ # MQTT integration tests
│ ├── recording/ # Recording feature tests
│ ├── storage/ # Storage management tests
│ ├── integration/ # System integration tests
│ └── legacy_tests/ # Archived development files
├── docs/ # Organized documentation
│ ├── api/ # API documentation
│ ├── features/ # Feature-specific guides
│ ├── guides/ # User and setup guides
│ └── legacy/ # Legacy documentation
├── ai_agent/ # AI agent resources
│ ├── guides/ # AI-specific instructions
│ ├── examples/ # Demo scripts and notebooks
│ └── references/ # API references and types
├── Camera/ # Camera data directory
└── storage/ # Recording storage (created at runtime)
├── camera1/ # Camera 1 recordings
└── camera2/ # Camera 2 recordings
```
## 🏗️ Architecture
```
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
│ MQTT Broker │ │ GigE Camera │ │ Dashboard │
│ │ │ │ │ (React) │
└─────────┬───────┘ └─────────┬───────┘ └─────────┬───────┘
│ │ │
│ Machine States │ Video Streams │ API Calls
│ │ │
┌─────────▼──────────────────────▼──────────────────────▼───────┐
│ USDA Vision Camera System │
├───────────────────────────────────────────────────────────────┤
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
│ │ MQTT Client │ │ Camera │ │ API Server │ │
│ │ │ │ Manager │ │ │ │
│ └─────────────┘ └─────────────┘ └─────────────┘ │
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
│ │ State │ │ Storage │ │ Event │ │
│ │ Manager │ │ Manager │ │ System │ │
│ └─────────────┘ └─────────────┘ └─────────────┘ │
└───────────────────────────────────────────────────────────────┘
```
## 📋 Prerequisites
### Hardware Requirements
- GigE cameras compatible with camera SDK library
- Network connection to MQTT broker
- Sufficient storage space for video recordings
### Software Requirements
- **Python 3.11+**
- **uv package manager** (recommended) or pip
- **MQTT broker** (e.g., Mosquitto, Home Assistant)
- **Linux system** (tested on Ubuntu/Debian)
### Network Requirements
- Access to MQTT broker
- GigE cameras on network
- Internet access for time synchronization (optional but recommended)
## 🚀 Installation
### 1. Clone the Repository
```bash
git clone https://github.com/your-username/USDA-Vision-Cameras.git
cd USDA-Vision-Cameras
```
### 2. Install Dependencies
Using uv (recommended):
```bash
# Install uv if not already installed
curl -LsSf https://astral.sh/uv/install.sh | sh
# Install dependencies
uv sync
```
Using pip:
```bash
# Create virtual environment
python -m venv .venv
source .venv/bin/activate # On Windows: .venv\Scripts\activate
# Install dependencies
pip install -r requirements.txt
```
### 3. Setup GigE Camera Library
Ensure the `camera_sdk` directory contains the mvsdk library for your GigE cameras. This should include:
- `mvsdk.py` - Python SDK wrapper
- Camera driver libraries
- Any camera-specific configuration files
### 4. Configure Storage Directory
```bash
# Create storage directory (adjust path as needed)
mkdir -p ./storage
# Or for system-wide storage:
# sudo mkdir -p /storage && sudo chown $USER:$USER /storage
```
### 5. Setup Time Synchronization (Recommended)
```bash
# Run timezone setup for Atlanta, Georgia
./setup_timezone.sh
```
### 6. Configure the System
Edit `config.json` to match your setup:
```json
{
"mqtt": {
"broker_host": "192.168.1.110",
"broker_port": 1883,
"topics": {
"machine1": "vision/machine1/state",
"machine2": "vision/machine2/state"
}
},
"cameras": [
{
"name": "camera1",
"machine_topic": "machine1",
"storage_path": "./storage/camera1",
"enabled": true
}
]
}
```
## 🔧 Configuration
### MQTT Configuration
```json
{
"mqtt": {
"broker_host": "192.168.1.110",
"broker_port": 1883,
"username": null,
"password": null,
"topics": {
"vibratory_conveyor": "vision/vibratory_conveyor/state",
"blower_separator": "vision/blower_separator/state"
}
}
}
```
### Camera Configuration
```json
{
"cameras": [
{
"name": "camera1",
"machine_topic": "vibratory_conveyor",
"storage_path": "./storage/camera1",
"exposure_ms": 1.0,
"gain": 3.5,
"target_fps": 3.0,
"enabled": true
}
]
}
```
### System Configuration
```json
{
"system": {
"camera_check_interval_seconds": 2,
"log_level": "INFO",
"api_host": "0.0.0.0",
"api_port": 8000,
"enable_api": true,
"timezone": "America/New_York"
}
}
```
## 🎮 Usage
### Quick Start
```bash
# Test the system
python test_system.py
# Start the system
python main.py
# Or use the startup script
./start_system.sh
```
### Command Line Options
```bash
# Custom configuration file
python main.py --config my_config.json
# Debug mode
python main.py --log-level DEBUG
# Help
python main.py --help
```
### Verify Installation
```bash
# Run system tests
python test_system.py
# Check time synchronization
python check_time.py
# Test timezone functions
python test_timezone.py
```
## 🌐 API Usage
The system provides a comprehensive REST API for monitoring and control.
> **📚 Complete API Documentation**: See [docs/API_DOCUMENTATION.md](docs/API_DOCUMENTATION.md) for the full API reference including all endpoints, request/response models, examples, and recent enhancements.
>
> **⚡ Quick Reference**: See [docs/API_QUICK_REFERENCE.md](docs/API_QUICK_REFERENCE.md) for commonly used endpoints with curl examples.
### Starting the API Server
The API server starts automatically with the main system on port 8000:
```bash
python main.py
# API available at: http://localhost:8000
```
### 🚀 New API Features
#### Enhanced Recording Control
- **Dynamic camera settings**: Set exposure, gain, FPS per recording
- **Automatic datetime prefixes**: All filenames get timestamp prefixes
- **Auto-recording management**: Enable/disable per camera via API
#### Advanced Camera Configuration
- **Real-time settings**: Update image quality without restart
- **Live streaming**: MJPEG streams for web integration
- **Recovery operations**: Reconnect, reset, reinitialize cameras
#### Comprehensive Monitoring
- **MQTT event history**: Track machine state changes
- **Storage statistics**: Monitor disk usage and file counts
- **WebSocket updates**: Real-time system notifications
### Core Endpoints
#### System Status
```bash
# Get overall system status
curl http://localhost:8000/system/status
# Response example:
{
"system_started": true,
"mqtt_connected": true,
"machines": {
"vibratory_conveyor": {"state": "on", "last_updated": "2025-07-25T21:30:00-04:00"}
},
"cameras": {
"camera1": {"status": "available", "is_recording": true}
},
"active_recordings": 1,
"uptime_seconds": 3600
}
```
#### Machine Status
```bash
# Get all machine states
curl http://localhost:8000/machines
# Response example:
{
"vibratory_conveyor": {
"name": "vibratory_conveyor",
"state": "on",
"last_updated": "2025-07-25T21:30:00-04:00",
"mqtt_topic": "vision/vibratory_conveyor/state"
}
}
```
#### Camera Status
```bash
# Get all camera statuses
curl http://localhost:8000/cameras
# Get specific camera status
curl http://localhost:8000/cameras/camera1
# Response example:
{
"name": "camera1",
"status": "available",
"is_recording": false,
"last_checked": "2025-07-25T21:30:00-04:00",
"device_info": {
"friendly_name": "Blower-Yield-Cam",
"serial_number": "054012620023"
}
}
```
#### Manual Recording Control
```bash
# Start recording manually
curl -X POST http://localhost:8000/cameras/camera1/start-recording \
-H "Content-Type: application/json" \
-d '{"camera_name": "camera1", "filename": "manual_test.avi"}'
# Stop recording manually
curl -X POST http://localhost:8000/cameras/camera1/stop-recording
# Response example:
{
"success": true,
"message": "Recording started for camera1",
"filename": "camera1_manual_20250725_213000.avi"
}
```
#### Storage Management
```bash
# Get storage statistics
curl http://localhost:8000/storage/stats
# Get recording files list
curl -X POST http://localhost:8000/storage/files \
-H "Content-Type: application/json" \
-d '{"camera_name": "camera1", "limit": 10}'
# Cleanup old files
curl -X POST http://localhost:8000/storage/cleanup \
-H "Content-Type: application/json" \
-d '{"max_age_days": 30}'
```
### WebSocket Real-time Updates
```javascript
// Connect to WebSocket for real-time updates
const ws = new WebSocket('ws://localhost:8000/ws');
ws.onmessage = function(event) {
const update = JSON.parse(event.data);
console.log('Real-time update:', update);
// Handle different event types
if (update.event_type === 'machine_state_changed') {
console.log(`Machine ${update.data.machine_name} is now ${update.data.state}`);
} else if (update.event_type === 'recording_started') {
console.log(`Recording started: ${update.data.filename}`);
}
};
```
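A Python counterpart to the JavaScript client above, using the third-party `websockets` package (`pip install websockets`); the event names follow the example above:
```python
import asyncio
import json

import websockets  # third-party: pip install websockets

async def watch_updates():
    async with websockets.connect("ws://localhost:8000/ws") as ws:
        async for raw in ws:
            update = json.loads(raw)
            if update.get("event_type") == "machine_state_changed":
                data = update["data"]
                print(f"Machine {data['machine_name']} is now {data['state']}")
            elif update.get("event_type") == "recording_started":
                print(f"Recording started: {update['data']['filename']}")

asyncio.run(watch_updates())
```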
### Integration Examples
#### Python Integration
```python
import requests
import json
# System status check
response = requests.get('http://localhost:8000/system/status')
status = response.json()
print(f"System running: {status['system_started']}")
# Start recording
recording_data = {"camera_name": "camera1"}
response = requests.post(
'http://localhost:8000/cameras/camera1/start-recording',
headers={'Content-Type': 'application/json'},
data=json.dumps(recording_data)
)
result = response.json()
print(f"Recording started: {result['success']}")
```
#### JavaScript/React Integration
```javascript
// React hook for system status
import { useState, useEffect } from 'react';
function useSystemStatus() {
const [status, setStatus] = useState(null);
useEffect(() => {
const fetchStatus = async () => {
try {
const response = await fetch('http://localhost:8000/system/status');
const data = await response.json();
setStatus(data);
} catch (error) {
console.error('Failed to fetch status:', error);
}
};
fetchStatus();
const interval = setInterval(fetchStatus, 5000); // Update every 5 seconds
return () => clearInterval(interval);
}, []);
return status;
}
// Usage in component
function Dashboard() {
const systemStatus = useSystemStatus();
return (
<div>
<h1>USDA Vision System</h1>
{systemStatus && (
<div>
<p>Status: {systemStatus.system_started ? 'Running' : 'Stopped'}</p>
<p>MQTT: {systemStatus.mqtt_connected ? 'Connected' : 'Disconnected'}</p>
<p>Active Recordings: {systemStatus.active_recordings}</p>
</div>
)}
</div>
);
}
```
#### Supabase Integration
```javascript
// Store recording metadata in Supabase
import { createClient } from '@supabase/supabase-js';
const supabase = createClient(SUPABASE_URL, SUPABASE_ANON_KEY);
// Function to sync recording data
async function syncRecordingData() {
try {
// Get recordings from vision system
const response = await fetch('http://localhost:8000/storage/files', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ limit: 100 })
});
const { files } = await response.json();
// Store in Supabase
for (const file of files) {
await supabase.from('recordings').upsert({
filename: file.filename,
camera_name: file.camera_name,
start_time: file.start_time,
duration_seconds: file.duration_seconds,
file_size_bytes: file.file_size_bytes
});
}
} catch (error) {
console.error('Sync failed:', error);
}
}
```
## 📁 File Organization
The system organizes recordings in a structured format:
```
storage/
├── camera1/
│ ├── camera1_recording_20250725_213000.avi
│ ├── camera1_recording_20250725_214500.avi
│ └── camera1_manual_20250725_220000.avi
├── camera2/
│ ├── camera2_recording_20250725_213005.avi
│ └── camera2_recording_20250725_214505.avi
└── file_index.json
```
### Filename Convention
- **Format**: `{camera_name}_{type}_{YYYYMMDD_HHMMSS}.avi`
- **Timezone**: Atlanta local time (EST/EDT)
- **Examples**:
- `camera1_recording_20250725_213000.avi` - Automatic recording
- `camera1_manual_20250725_220000.avi` - Manual recording
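A small helper sketch for parsing these names back into fields (the regex assumes the convention above; the extension is left open since newer recordings default to `.mp4`):
```python
import re
from datetime import datetime

NAME_RE = re.compile(r"^(?P<camera>.+?)_(?P<kind>recording|manual)_(?P<stamp>\d{8}_\d{6})\.\w+$")

def parse_recording_name(filename: str):
    m = NAME_RE.match(filename)
    if m is None:
        return None
    return {
        "camera": m.group("camera"),
        "kind": m.group("kind"),
        "started": datetime.strptime(m.group("stamp"), "%Y%m%d_%H%M%S"),
    }

print(parse_recording_name("camera1_recording_20250725_213000.avi"))
```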
## 🔍 Monitoring and Logging
### Log Files
- **Main Log**: `usda_vision_system.log` (rotated automatically)
- **Console Output**: Colored, real-time status updates
- **Component Logs**: Separate log levels for different components
### Log Levels
```bash
# Debug mode (verbose)
python main.py --log-level DEBUG
# Info mode (default)
python main.py --log-level INFO
# Warning mode (errors and warnings only)
python main.py --log-level WARNING
```
### Performance Monitoring
The system tracks:
- Startup times
- Recording session metrics
- MQTT message processing rates
- Camera status check intervals
- API response times
### Health Checks
```bash
# API health check
curl http://localhost:8000/health
# System status
curl http://localhost:8000/system/status
# Time synchronization
python check_time.py
```
## 🚨 Troubleshooting
### Common Issues and Solutions
#### 1. Camera Not Found
**Problem**: `Camera discovery failed` or `No cameras found`
**Solutions**:
```bash
# Check camera connections
ping 192.168.1.165 # Replace with your camera IP
# Verify camera SDK library
ls -la "camera_sdk/"
# Should contain mvsdk.py and related files
# Test camera discovery manually
python -c "
import sys; sys.path.append('./camera_sdk')
import mvsdk
devices = mvsdk.CameraEnumerateDevice()
print(f'Found {len(devices)} cameras')
for i, dev in enumerate(devices):
print(f'Camera {i}: {dev.GetFriendlyName()}')
"
# Check camera permissions
sudo chmod 666 /dev/video* # If using USB cameras
```
#### 2. MQTT Connection Failed
**Problem**: `MQTT connection failed` or `MQTT disconnected`
**Solutions**:
```bash
# Test MQTT broker connectivity
ping 192.168.1.110 # Replace with your broker IP
telnet 192.168.1.110 1883 # Test port connectivity
# Test MQTT manually
mosquitto_sub -h 192.168.1.110 -t "vision/+/state" -v
# Check credentials in config.json
{
"mqtt": {
"broker_host": "192.168.1.110",
"broker_port": 1883,
"username": "your_username", # Add if required
"password": "your_password" # Add if required
}
}
# Check firewall
sudo ufw status
sudo ufw allow 1883 # Allow MQTT port
```
#### 3. Recording Fails
**Problem**: `Failed to start recording` or `Camera initialization failed`
**Solutions**:
```bash
# Check storage permissions
ls -la storage/
chmod 755 storage/
chmod 755 storage/camera*/
# Check available disk space
df -h storage/
# Test camera initialization
python -c "
import sys; sys.path.append('./camera_sdk')
import mvsdk
devices = mvsdk.CameraEnumerateDevice()
if devices:
try:
hCamera = mvsdk.CameraInit(devices[0], -1, -1)
print('Camera initialized successfully')
mvsdk.CameraUnInit(hCamera)
except Exception as e:
print(f'Camera init failed: {e}')
"
# Check if camera is busy
lsof | grep video # Check what's using cameras
```
#### 4. API Server Won't Start
**Problem**: `Failed to start API server` or `Port already in use`
**Solutions**:
```bash
# Check if port 8000 is in use
netstat -tlnp | grep 8000
lsof -i :8000
# Kill process using port 8000
sudo kill -9 $(lsof -t -i:8000)
# Use different port in config.json
{
"system": {
"api_port": 8001 # Change port
}
}
# Check firewall
sudo ufw allow 8000
```
#### 5. Time Synchronization Issues
**Problem**: `Time is NOT synchronized` or time drift warnings
**Solutions**:
```bash
# Check time sync status
timedatectl status
# Force time sync
sudo systemctl restart systemd-timesyncd
sudo timedatectl set-ntp true
# Manual time sync
sudo ntpdate -s time.nist.gov
# Check timezone
timedatectl list-timezones | grep New_York
sudo timedatectl set-timezone America/New_York
# Verify with system
python check_time.py
```
#### 6. Storage Issues
**Problem**: `Permission denied` or `No space left on device`
**Solutions**:
```bash
# Check disk space
df -h
du -sh storage/
# Fix permissions
sudo chown -R $USER:$USER storage/
chmod -R 755 storage/
# Clean up old files
python -c "
from usda_vision_system.storage.manager import StorageManager
from usda_vision_system.core.config import Config
from usda_vision_system.core.state_manager import StateManager
config = Config()
state_manager = StateManager()
storage = StorageManager(config, state_manager)
result = storage.cleanup_old_files(7) # Clean files older than 7 days
print(f'Cleaned {result[\"files_removed\"]} files')
"
```
### Debug Mode
Enable debug mode for detailed troubleshooting:
```bash
# Start with debug logging
python main.py --log-level DEBUG
# Check specific component logs
tail -f usda_vision_system.log | grep "camera"
tail -f usda_vision_system.log | grep "mqtt"
tail -f usda_vision_system.log | grep "ERROR"
```
### System Health Check
Run comprehensive system diagnostics:
```bash
# Full system test
python test_system.py
# Individual component tests
python test_timezone.py
python check_time.py
# API health check
curl http://localhost:8000/health
curl http://localhost:8000/system/status
```
### Log Analysis
Common log patterns to look for:
```bash
# MQTT connection issues
grep "MQTT" usda_vision_system.log | grep -E "(ERROR|WARNING)"
# Camera problems
grep "camera" usda_vision_system.log | grep -E "(ERROR|failed)"
# Recording issues
grep "recording" usda_vision_system.log | grep -E "(ERROR|failed)"
# Time sync problems
grep -E "(time|sync)" usda_vision_system.log | grep -E "(ERROR|WARNING)"
```
### Getting Help
If you encounter issues not covered here:
1. **Check Logs**: Always start with `usda_vision_system.log`
2. **Run Tests**: Use `python test_system.py` to identify problems
3. **Check Configuration**: Verify `config.json` settings
4. **Test Components**: Use individual test scripts
5. **Check Dependencies**: Ensure all required packages are installed
### Performance Optimization
For better performance:
```bash
# Reduce camera check frequency by increasing the interval (in config.json)
{
"system": {
"camera_check_interval_seconds": 5 # Increase from 2 to 5
}
}
# Optimize recording settings
{
"cameras": [
{
"target_fps": 2.0, # Reduce FPS for smaller files
"exposure_ms": 2.0 # Adjust exposure as needed
}
]
}
# Enable log rotation
{
"system": {
"log_level": "INFO" # Reduce from DEBUG to INFO
}
}
```
## 🤝 Contributing
### Development Setup
```bash
# Clone repository
git clone https://github.com/your-username/USDA-Vision-Cameras.git
cd USDA-Vision-Cameras
# Install development dependencies
uv sync --dev
# Run tests
python test_system.py
python test_timezone.py
```
### Project Structure
```
usda_vision_system/
├── core/ # Core functionality (config, state, events, logging)
├── mqtt/ # MQTT client and message handlers
├── camera/ # Camera management, monitoring, recording
├── storage/ # File management and organization
├── api/ # FastAPI server and WebSocket support
└── main.py # Application coordinator
```
### Adding Features
1. **New Camera Types**: Extend `camera/recorder.py`
2. **New MQTT Topics**: Update `config.json` and `mqtt/handlers.py`
3. **New API Endpoints**: Add to `api/server.py`
4. **New Events**: Define in `core/events.py`
## 📄 License
This project is developed for USDA research purposes.
## 🆘 Support
For technical support:
1. Check the troubleshooting section above
2. Review logs in `usda_vision_system.log`
3. Run system diagnostics with `python test_system.py`
4. Check API health at `http://localhost:8000/health`
---
**System Status**: ✅ **READY FOR PRODUCTION**
**Time Sync**: ✅ **ATLANTA, GEORGIA (EDT/EST)**
**API Server**: ✅ **http://localhost:8000**
**Documentation**: ✅ **COMPLETE**

api/ai_agent/README.md Normal file (+50 lines)

@@ -0,0 +1,50 @@
# AI Agent Resources
This directory contains resources specifically designed to help AI agents understand and work with the USDA Vision Camera System.
## Directory Structure
### `/guides/`
Contains comprehensive guides for AI agents:
- `AI_AGENT_INSTRUCTIONS.md` - Specific instructions for AI agents working with this system
- `AI_INTEGRATION_GUIDE.md` - Guide for integrating AI capabilities with the camera system
### `/examples/`
Contains practical examples and demonstrations:
- `demos/` - Python demo scripts showing various system capabilities
- `notebooks/` - Jupyter notebooks with interactive examples and tests
### `/references/`
Contains API references and technical specifications:
- `api-endpoints.http` - HTTP API endpoint examples
- `api-tests.http` - API testing examples
- `streaming-api.http` - Streaming API examples
- `camera-api.types.ts` - TypeScript type definitions for the camera API
## Key Learning Resources
1. **System Architecture**: Review the main system structure in `/usda_vision_system/`
2. **Configuration**: Study `config.json` for system configuration options
3. **API Documentation**: Check `/docs/api/` for API specifications
4. **Feature Guides**: Review `/docs/features/` for feature-specific documentation
5. **Test Examples**: Examine `/tests/` for comprehensive test coverage
## Quick Start for AI Agents
1. Read `guides/AI_AGENT_INSTRUCTIONS.md` first
2. Review the demo scripts in `examples/demos/`
3. Study the API references in `references/`
4. Examine test files to understand expected behavior
5. Check configuration options in the root `config.json`
## System Overview
The USDA Vision Camera System is a multi-camera monitoring and recording system with:
- Real-time camera streaming
- MQTT-based automation
- Auto-recording capabilities
- RESTful API interface
- Web-based camera preview
- Comprehensive logging and monitoring
For detailed system documentation, see the `/docs/` directory.


@@ -0,0 +1,95 @@
#coding=utf-8
import cv2
import numpy as np
import mvsdk
import platform
def main_loop():
# Enumerate cameras
DevList = mvsdk.CameraEnumerateDevice()
nDev = len(DevList)
if nDev < 1:
print("No camera was found!")
return
for i, DevInfo in enumerate(DevList):
print("{}: {} {}".format(i, DevInfo.GetFriendlyName(), DevInfo.GetPortType()))
i = 0 if nDev == 1 else int(input("Select camera: "))
DevInfo = DevList[i]
print(DevInfo)
# Open the camera
hCamera = 0
try:
hCamera = mvsdk.CameraInit(DevInfo, -1, -1)
except mvsdk.CameraException as e:
print("CameraInit Failed({}): {}".format(e.error_code, e.message) )
return
# Get the camera capability description
cap = mvsdk.CameraGetCapability(hCamera)
# Determine whether this is a mono or a color camera
monoCamera = (cap.sIspCapacity.bMonoSensor != 0)
# For mono cameras, have the ISP output MONO data directly instead of expanding it to 24-bit R=G=B grayscale
if monoCamera:
mvsdk.CameraSetIspOutFormat(hCamera, mvsdk.CAMERA_MEDIA_TYPE_MONO8)
else:
mvsdk.CameraSetIspOutFormat(hCamera, mvsdk.CAMERA_MEDIA_TYPE_BGR8)
# Switch the camera to continuous acquisition mode
mvsdk.CameraSetTriggerMode(hCamera, 0)
# Manual exposure, 30 ms exposure time
mvsdk.CameraSetAeState(hCamera, 0)
mvsdk.CameraSetExposureTime(hCamera, 30 * 1000)
# Start the SDK's internal frame-grabbing thread
mvsdk.CameraPlay(hCamera)
# Compute the required RGB buffer size; allocate for the camera's maximum resolution
FrameBufferSize = cap.sResolutionRange.iWidthMax * cap.sResolutionRange.iHeightMax * (1 if monoCamera else 3)
# Allocate the RGB buffer to hold images output by the ISP
# Note: the camera sends RAW data to the PC, where a software ISP converts it to RGB. Mono cameras need no format conversion, but the ISP still performs other processing, so this buffer is required anyway
pFrameBuffer = mvsdk.CameraAlignMalloc(FrameBufferSize, 16)
while (cv2.waitKey(1) & 0xFF) != ord('q'):
# Grab one frame from the camera
try:
pRawData, FrameHead = mvsdk.CameraGetImageBuffer(hCamera, 200)
mvsdk.CameraImageProcess(hCamera, pRawData, pFrameBuffer, FrameHead)
mvsdk.CameraReleaseImageBuffer(hCamera, pRawData)
# On Windows the captured image is stored upside down (BMP layout), so flip it vertically for OpenCV
# On Linux the image comes out upright; no flip is needed
if platform.system() == "Windows":
mvsdk.CameraFlipFrameBuffer(pFrameBuffer, FrameHead, 1)
# The frame now sits in pFrameBuffer: RGB data for color cameras, 8-bit grayscale for mono cameras
# Convert pFrameBuffer into an OpenCV image for further processing
frame_data = (mvsdk.c_ubyte * FrameHead.uBytes).from_address(pFrameBuffer)
frame = np.frombuffer(frame_data, dtype=np.uint8)
frame = frame.reshape((FrameHead.iHeight, FrameHead.iWidth, 1 if FrameHead.uiMediaType == mvsdk.CAMERA_MEDIA_TYPE_MONO8 else 3) )
frame = cv2.resize(frame, (640,480), interpolation = cv2.INTER_LINEAR)
cv2.imshow("Press q to end", frame)
except mvsdk.CameraException as e:
if e.error_code != mvsdk.CAMERA_STATUS_TIME_OUT:
print("CameraGetImageBuffer failed({}): {}".format(e.error_code, e.message) )
# Close the camera
mvsdk.CameraUnInit(hCamera)
# Free the frame buffer
mvsdk.CameraAlignFree(pFrameBuffer)
def main():
try:
main_loop()
finally:
cv2.destroyAllWindows()
main()


@@ -0,0 +1,127 @@
#coding=utf-8
import cv2
import numpy as np
import mvsdk
import platform
class Camera(object):
def __init__(self, DevInfo):
super(Camera, self).__init__()
self.DevInfo = DevInfo
self.hCamera = 0
self.cap = None
self.pFrameBuffer = 0
def open(self):
if self.hCamera > 0:
return True
# Open the camera
hCamera = 0
try:
hCamera = mvsdk.CameraInit(self.DevInfo, -1, -1)
except mvsdk.CameraException as e:
print("CameraInit Failed({}): {}".format(e.error_code, e.message) )
return False
# Get the camera capability description
cap = mvsdk.CameraGetCapability(hCamera)
# Determine whether this is a mono or a color camera
monoCamera = (cap.sIspCapacity.bMonoSensor != 0)
# For mono cameras, have the ISP output MONO data directly instead of expanding it to 24-bit R=G=B grayscale
if monoCamera:
mvsdk.CameraSetIspOutFormat(hCamera, mvsdk.CAMERA_MEDIA_TYPE_MONO8)
else:
mvsdk.CameraSetIspOutFormat(hCamera, mvsdk.CAMERA_MEDIA_TYPE_BGR8)
# Compute the required RGB buffer size; allocate for the camera's maximum resolution
FrameBufferSize = cap.sResolutionRange.iWidthMax * cap.sResolutionRange.iHeightMax * (1 if monoCamera else 3)
# Allocate the RGB buffer to hold images output by the ISP
# Note: the camera sends RAW data to the PC, where a software ISP converts it to RGB. Mono cameras need no format conversion, but the ISP still performs other processing, so this buffer is required anyway
pFrameBuffer = mvsdk.CameraAlignMalloc(FrameBufferSize, 16)
# Switch the camera to continuous acquisition mode
mvsdk.CameraSetTriggerMode(hCamera, 0)
# Manual exposure, 30 ms exposure time
mvsdk.CameraSetAeState(hCamera, 0)
mvsdk.CameraSetExposureTime(hCamera, 30 * 1000)
# Start the SDK's internal frame-grabbing thread
mvsdk.CameraPlay(hCamera)
self.hCamera = hCamera
self.pFrameBuffer = pFrameBuffer
self.cap = cap
return True
def close(self):
if self.hCamera > 0:
mvsdk.CameraUnInit(self.hCamera)
self.hCamera = 0
mvsdk.CameraAlignFree(self.pFrameBuffer)
self.pFrameBuffer = 0
def grab(self):
# Grab one frame from the camera
hCamera = self.hCamera
pFrameBuffer = self.pFrameBuffer
try:
pRawData, FrameHead = mvsdk.CameraGetImageBuffer(hCamera, 200)
mvsdk.CameraImageProcess(hCamera, pRawData, pFrameBuffer, FrameHead)
mvsdk.CameraReleaseImageBuffer(hCamera, pRawData)
# On Windows the captured image is stored upside down (BMP layout), so flip it vertically for OpenCV
# On Linux the image comes out upright; no flip is needed
if platform.system() == "Windows":
mvsdk.CameraFlipFrameBuffer(pFrameBuffer, FrameHead, 1)
# The frame now sits in pFrameBuffer: RGB data for color cameras, 8-bit grayscale for mono cameras
# Convert pFrameBuffer into an OpenCV image for further processing
frame_data = (mvsdk.c_ubyte * FrameHead.uBytes).from_address(pFrameBuffer)
frame = np.frombuffer(frame_data, dtype=np.uint8)
frame = frame.reshape((FrameHead.iHeight, FrameHead.iWidth, 1 if FrameHead.uiMediaType == mvsdk.CAMERA_MEDIA_TYPE_MONO8 else 3) )
return frame
except mvsdk.CameraException as e:
if e.error_code != mvsdk.CAMERA_STATUS_TIME_OUT:
print("CameraGetImageBuffer failed({}): {}".format(e.error_code, e.message) )
return None
def main_loop():
# Enumerate cameras
DevList = mvsdk.CameraEnumerateDevice()
nDev = len(DevList)
if nDev < 1:
print("No camera was found!")
return
for i, DevInfo in enumerate(DevList):
print("{}: {} {}".format(i, DevInfo.GetFriendlyName(), DevInfo.GetPortType()))
cams = []
for i in map(int, input("Select cameras: ").split()):
cam = Camera(DevList[i])
if cam.open():
cams.append(cam)
while (cv2.waitKey(1) & 0xFF) != ord('q'):
for cam in cams:
frame = cam.grab()
if frame is not None:
frame = cv2.resize(frame, (640,480), interpolation = cv2.INTER_LINEAR)
cv2.imshow("{} Press q to end".format(cam.DevInfo.GetFriendlyName()), frame)
for cam in cams:
cam.close()
def main():
try:
main_loop()
finally:
cv2.destroyAllWindows()
main()


@@ -0,0 +1,110 @@
#coding=utf-8
import cv2
import numpy as np
import mvsdk
import time
import platform
class App(object):
def __init__(self):
super(App, self).__init__()
self.pFrameBuffer = 0
self.quit = False
def main(self):
# Enumerate cameras
DevList = mvsdk.CameraEnumerateDevice()
nDev = len(DevList)
if nDev < 1:
print("No camera was found!")
return
for i, DevInfo in enumerate(DevList):
print("{}: {} {}".format(i, DevInfo.GetFriendlyName(), DevInfo.GetPortType()))
i = 0 if nDev == 1 else int(input("Select camera: "))
DevInfo = DevList[i]
print(DevInfo)
# Open the camera
hCamera = 0
try:
hCamera = mvsdk.CameraInit(DevInfo, -1, -1)
except mvsdk.CameraException as e:
print("CameraInit Failed({}): {}".format(e.error_code, e.message) )
return
# Get the camera capability description
cap = mvsdk.CameraGetCapability(hCamera)
# Determine whether this is a mono or a color camera
monoCamera = (cap.sIspCapacity.bMonoSensor != 0)
# For mono cameras, have the ISP output MONO data directly instead of expanding it to 24-bit R=G=B grayscale
if monoCamera:
mvsdk.CameraSetIspOutFormat(hCamera, mvsdk.CAMERA_MEDIA_TYPE_MONO8)
else:
mvsdk.CameraSetIspOutFormat(hCamera, mvsdk.CAMERA_MEDIA_TYPE_BGR8)
# Switch the camera to continuous acquisition mode
mvsdk.CameraSetTriggerMode(hCamera, 0)
# Manual exposure, 30 ms exposure time
mvsdk.CameraSetAeState(hCamera, 0)
mvsdk.CameraSetExposureTime(hCamera, 30 * 1000)
# Start the SDK's internal frame-grabbing thread
mvsdk.CameraPlay(hCamera)
# Compute the required RGB buffer size; allocate for the camera's maximum resolution
FrameBufferSize = cap.sResolutionRange.iWidthMax * cap.sResolutionRange.iHeightMax * (1 if monoCamera else 3)
# Allocate the RGB buffer to hold images output by the ISP
# Note: the camera sends RAW data to the PC, where a software ISP converts it to RGB. Mono cameras need no format conversion, but the ISP still performs other processing, so this buffer is required anyway
self.pFrameBuffer = mvsdk.CameraAlignMalloc(FrameBufferSize, 16)
# Register the capture callback function
self.quit = False
mvsdk.CameraSetCallbackFunction(hCamera, self.GrabCallback, 0)
# Wait for the quit flag
while not self.quit:
time.sleep(0.1)
# Close the camera
mvsdk.CameraUnInit(hCamera)
# Free the frame buffer
mvsdk.CameraAlignFree(self.pFrameBuffer)
@mvsdk.method(mvsdk.CAMERA_SNAP_PROC)
def GrabCallback(self, hCamera, pRawData, pFrameHead, pContext):
FrameHead = pFrameHead[0]
pFrameBuffer = self.pFrameBuffer
mvsdk.CameraImageProcess(hCamera, pRawData, pFrameBuffer, FrameHead)
mvsdk.CameraReleaseImageBuffer(hCamera, pRawData)
# On Windows the captured image is stored upside down (BMP layout), so flip it vertically for OpenCV
# On Linux the image comes out upright; no flip is needed
if platform.system() == "Windows":
mvsdk.CameraFlipFrameBuffer(pFrameBuffer, FrameHead, 1)
# The frame now sits in pFrameBuffer: RGB data for color cameras, 8-bit grayscale for mono cameras
# Convert pFrameBuffer into an OpenCV image for further processing
frame_data = (mvsdk.c_ubyte * FrameHead.uBytes).from_address(pFrameBuffer)
frame = np.frombuffer(frame_data, dtype=np.uint8)
frame = frame.reshape((FrameHead.iHeight, FrameHead.iWidth, 1 if FrameHead.uiMediaType == mvsdk.CAMERA_MEDIA_TYPE_MONO8 else 3) )
frame = cv2.resize(frame, (640,480), interpolation = cv2.INTER_LINEAR)
cv2.imshow("Press q to end", frame)
if (cv2.waitKey(1) & 0xFF) == ord('q'):
self.quit = True
def main():
try:
app = App()
app.main()
finally:
cv2.destroyAllWindows()
main()


@@ -0,0 +1,117 @@
#!/usr/bin/env python3
"""
Demo script to show MQTT console logging in action.
This script demonstrates the enhanced MQTT logging by starting just the MQTT client
and showing the console output.
"""
import sys
import os
import time
import signal
import logging
# Add the current directory to Python path
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))
from usda_vision_system.core.config import Config
from usda_vision_system.core.state_manager import StateManager
from usda_vision_system.core.events import EventSystem
from usda_vision_system.core.logging_config import setup_logging
from usda_vision_system.mqtt.client import MQTTClient
def signal_handler(signum, frame):
"""Handle Ctrl+C gracefully"""
print("\n🛑 Stopping MQTT demo...")
sys.exit(0)
def main():
"""Main demo function"""
print("🚀 MQTT Console Logging Demo")
print("=" * 50)
print()
print("This demo shows enhanced MQTT console logging.")
print("You'll see colorful console output for MQTT events:")
print(" 🔗 Connection status")
print(" 📋 Topic subscriptions")
print(" 📡 Incoming messages")
print(" ⚠️ Disconnections and errors")
print()
print("Press Ctrl+C to stop the demo.")
print("=" * 50)
# Setup signal handler
signal.signal(signal.SIGINT, signal_handler)
try:
# Setup logging with INFO level for console visibility
setup_logging(log_level="INFO", log_file="mqtt_demo.log")
# Load configuration
config = Config()
# Initialize components
state_manager = StateManager()
event_system = EventSystem()
# Create MQTT client
mqtt_client = MQTTClient(config, state_manager, event_system)
print(f"\n🔧 Configuration:")
print(f" Broker: {config.mqtt.broker_host}:{config.mqtt.broker_port}")
print(f" Topics: {list(config.mqtt.topics.values())}")
print()
# Start MQTT client
print("🚀 Starting MQTT client...")
if mqtt_client.start():
print("✅ MQTT client started successfully!")
print("\n👀 Watching for MQTT messages... (Press Ctrl+C to stop)")
print("-" * 50)
# Keep running and show periodic status
start_time = time.time()
last_status_time = start_time
while True:
time.sleep(1)
# Show status every 30 seconds
current_time = time.time()
if current_time - last_status_time >= 30:
status = mqtt_client.get_status()
uptime = current_time - start_time
print(f"\n📊 Status Update (uptime: {uptime:.0f}s):")
print(f" Connected: {status['connected']}")
print(f" Messages: {status['message_count']}")
print(f" Errors: {status['error_count']}")
if status['last_message_time']:
print(f" Last Message: {status['last_message_time']}")
print("-" * 50)
last_status_time = current_time
else:
print("❌ Failed to start MQTT client")
print(" Check your MQTT broker configuration in config.json")
print(" Make sure the broker is running and accessible")
except KeyboardInterrupt:
print("\n🛑 Demo stopped by user")
except Exception as e:
print(f"\n❌ Error: {e}")
finally:
# Cleanup
try:
if 'mqtt_client' in locals():
mqtt_client.stop()
print("🔌 MQTT client stopped")
except Exception:
pass
print("\n👋 Demo completed!")
print("\n💡 To run the full system with this enhanced logging:")
print(" python main.py")
if __name__ == "__main__":
main()


@@ -0,0 +1,111 @@
#coding=utf-8
import mvsdk
def main():
# Enumerate cameras
DevList = mvsdk.CameraEnumerateDevice()
nDev = len(DevList)
if nDev < 1:
print("No camera was found!")
return
for i, DevInfo in enumerate(DevList):
print("{}: {} {}".format(i, DevInfo.GetFriendlyName(), DevInfo.GetPortType()))
i = 0 if nDev == 1 else int(input("Select camera: "))
DevInfo = DevList[i]
print(DevInfo)
# Open the camera
hCamera = 0
try:
hCamera = mvsdk.CameraInit(DevInfo, -1, -1)
except mvsdk.CameraException as e:
print("CameraInit Failed({}): {}".format(e.error_code, e.message) )
return
# Get the camera capability description
cap = mvsdk.CameraGetCapability(hCamera)
PrintCapability(cap)
# Determine whether this is a mono or a color camera
monoCamera = (cap.sIspCapacity.bMonoSensor != 0)
# For mono cameras, have the ISP output MONO data directly instead of expanding it to 24-bit R=G=B grayscale
if monoCamera:
mvsdk.CameraSetIspOutFormat(hCamera, mvsdk.CAMERA_MEDIA_TYPE_MONO8)
# Switch the camera to continuous acquisition mode
mvsdk.CameraSetTriggerMode(hCamera, 0)
# Manual exposure, 30 ms exposure time
mvsdk.CameraSetAeState(hCamera, 0)
mvsdk.CameraSetExposureTime(hCamera, 30 * 1000)
# Start the SDK's internal frame-grabbing thread
mvsdk.CameraPlay(hCamera)
# Compute the required RGB buffer size; allocate for the camera's maximum resolution
FrameBufferSize = cap.sResolutionRange.iWidthMax * cap.sResolutionRange.iHeightMax * (1 if monoCamera else 3)
# Allocate the RGB buffer to hold images output by the ISP
# Note: the camera sends RAW data to the PC, where a software ISP converts it to RGB. Mono cameras need no format conversion, but the ISP still performs other processing, so this buffer is required anyway
pFrameBuffer = mvsdk.CameraAlignMalloc(FrameBufferSize, 16)
# Grab one frame from the camera
try:
pRawData, FrameHead = mvsdk.CameraGetImageBuffer(hCamera, 2000)
mvsdk.CameraImageProcess(hCamera, pRawData, pFrameBuffer, FrameHead)
mvsdk.CameraReleaseImageBuffer(hCamera, pRawData)
# The frame now sits in pFrameBuffer: RGB data for color cameras, 8-bit grayscale for mono cameras
# In this example we simply save the image to a file on disk
status = mvsdk.CameraSaveImage(hCamera, "./grab.bmp", pFrameBuffer, FrameHead, mvsdk.FILE_BMP, 100)
if status == mvsdk.CAMERA_STATUS_SUCCESS:
print("Save image successfully. image_size = {}X{}".format(FrameHead.iWidth, FrameHead.iHeight) )
else:
print("Save image failed. err={}".format(status) )
except mvsdk.CameraException as e:
print("CameraGetImageBuffer failed({}): {}".format(e.error_code, e.message) )
# Close the camera
mvsdk.CameraUnInit(hCamera)
# Free the frame buffer
mvsdk.CameraAlignFree(pFrameBuffer)
def PrintCapability(cap):
for i in range(cap.iTriggerDesc):
desc = cap.pTriggerDesc[i]
print("{}: {}".format(desc.iIndex, desc.GetDescription()) )
for i in range(cap.iImageSizeDesc):
desc = cap.pImageSizeDesc[i]
print("{}: {}".format(desc.iIndex, desc.GetDescription()) )
for i in range(cap.iClrTempDesc):
desc = cap.pClrTempDesc[i]
print("{}: {}".format(desc.iIndex, desc.GetDescription()) )
for i in range(cap.iMediaTypeDesc):
desc = cap.pMediaTypeDesc[i]
print("{}: {}".format(desc.iIndex, desc.GetDescription()) )
for i in range(cap.iFrameSpeedDesc):
desc = cap.pFrameSpeedDesc[i]
print("{}: {}".format(desc.iIndex, desc.GetDescription()) )
for i in range(cap.iPackLenDesc):
desc = cap.pPackLenDesc[i]
print("{}: {}".format(desc.iIndex, desc.GetDescription()) )
for i in range(cap.iPresetLut):
desc = cap.pPresetLutDesc[i]
print("{}: {}".format(desc.iIndex, desc.GetDescription()) )
for i in range(cap.iAeAlmSwDesc):
desc = cap.pAeAlmSwDesc[i]
print("{}: {}".format(desc.iIndex, desc.GetDescription()) )
for i in range(cap.iAeAlmHdDesc):
desc = cap.pAeAlmHdDesc[i]
print("{}: {}".format(desc.iIndex, desc.GetDescription()) )
for i in range(cap.iBayerDecAlmSwDesc):
desc = cap.pBayerDecAlmSwDesc[i]
print("{}: {}".format(desc.iIndex, desc.GetDescription()) )
for i in range(cap.iBayerDecAlmHdDesc):
desc = cap.pBayerDecAlmHdDesc[i]
print("{}: {}".format(desc.iIndex, desc.GetDescription()) )
main()


@@ -0,0 +1,234 @@
#!/usr/bin/env python3
"""
MQTT Publisher Test Script for USDA Vision Camera System
This script allows you to manually publish test messages to the MQTT topics
to simulate machine state changes for testing purposes.
Usage:
python mqtt_publisher_test.py
The script provides an interactive menu to:
1. Send 'on' state to vibratory conveyor
2. Send 'off' state to vibratory conveyor
3. Send 'on' state to blower separator
4. Send 'off' state to blower separator
5. Send custom message
"""
import paho.mqtt.client as mqtt
import time
import sys
from datetime import datetime
# MQTT Configuration (matching your system config)
MQTT_BROKER_HOST = "192.168.1.110"
MQTT_BROKER_PORT = 1883
MQTT_USERNAME = None # Set if your broker requires authentication
MQTT_PASSWORD = None # Set if your broker requires authentication
# Topics (from your config.json)
MQTT_TOPICS = {
"vibratory_conveyor": "vision/vibratory_conveyor/state",
"blower_separator": "vision/blower_separator/state"
}
class MQTTPublisher:
def __init__(self):
self.client = None
self.connected = False
def setup_client(self):
"""Setup MQTT client"""
try:
self.client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION1)
self.client.on_connect = self.on_connect
self.client.on_disconnect = self.on_disconnect
self.client.on_publish = self.on_publish
if MQTT_USERNAME and MQTT_PASSWORD:
self.client.username_pw_set(MQTT_USERNAME, MQTT_PASSWORD)
return True
except Exception as e:
print(f"❌ Error setting up MQTT client: {e}")
return False
def connect(self):
"""Connect to MQTT broker"""
try:
print(f"🔗 Connecting to MQTT broker at {MQTT_BROKER_HOST}:{MQTT_BROKER_PORT}...")
self.client.connect(MQTT_BROKER_HOST, MQTT_BROKER_PORT, 60)
self.client.loop_start() # Start background loop
# Wait for connection
timeout = 10
start_time = time.time()
while not self.connected and (time.time() - start_time) < timeout:
time.sleep(0.1)
return self.connected
except Exception as e:
print(f"❌ Failed to connect to MQTT broker: {e}")
return False
def disconnect(self):
"""Disconnect from MQTT broker"""
if self.client:
self.client.loop_stop()
self.client.disconnect()
def on_connect(self, client, userdata, flags, rc):
"""Callback when client connects"""
if rc == 0:
self.connected = True
print(f"✅ Connected to MQTT broker successfully!")
else:
self.connected = False
print(f"❌ Connection failed with return code {rc}")
def on_disconnect(self, client, userdata, rc):
"""Callback when client disconnects"""
self.connected = False
print(f"🔌 Disconnected from MQTT broker")
def on_publish(self, client, userdata, mid):
"""Callback when message is published"""
print(f"📤 Message published successfully (mid: {mid})")
def publish_message(self, topic, payload):
"""Publish a message to a topic"""
if not self.connected:
print("❌ Not connected to MQTT broker")
return False
try:
timestamp = datetime.now().strftime('%H:%M:%S.%f')[:-3]
print(f"📡 [{timestamp}] Publishing message:")
print(f" 📍 Topic: {topic}")
print(f" 📄 Payload: '{payload}'")
result = self.client.publish(topic, payload)
if result.rc == mqtt.MQTT_ERR_SUCCESS:
print(f"✅ Message queued for publishing")
return True
else:
print(f"❌ Failed to publish message (error: {result.rc})")
return False
except Exception as e:
print(f"❌ Error publishing message: {e}")
return False
def show_menu(self):
"""Show interactive menu"""
print("\n" + "=" * 50)
print("🎛️ MQTT PUBLISHER TEST MENU")
print("=" * 50)
print("1. Send 'on' to vibratory conveyor")
print("2. Send 'off' to vibratory conveyor")
print("3. Send 'on' to blower separator")
print("4. Send 'off' to blower separator")
print("5. Send custom message")
print("6. Show current topics")
print("0. Exit")
print("-" * 50)
def handle_menu_choice(self, choice):
"""Handle menu selection"""
if choice == "1":
self.publish_message(MQTT_TOPICS["vibratory_conveyor"], "on")
elif choice == "2":
self.publish_message(MQTT_TOPICS["vibratory_conveyor"], "off")
elif choice == "3":
self.publish_message(MQTT_TOPICS["blower_separator"], "on")
elif choice == "4":
self.publish_message(MQTT_TOPICS["blower_separator"], "off")
elif choice == "5":
self.custom_message()
elif choice == "6":
self.show_topics()
elif choice == "0":
return False
else:
print("❌ Invalid choice. Please try again.")
return True
def custom_message(self):
"""Send custom message"""
print("\n📝 Custom Message")
print("Available topics:")
for i, (name, topic) in enumerate(MQTT_TOPICS.items(), 1):
print(f" {i}. {name}: {topic}")
try:
topic_choice = input("Select topic (1-2): ").strip()
if topic_choice == "1":
topic = MQTT_TOPICS["vibratory_conveyor"]
elif topic_choice == "2":
topic = MQTT_TOPICS["blower_separator"]
else:
print("❌ Invalid topic choice")
return
payload = input("Enter message payload: ").strip()
if payload:
self.publish_message(topic, payload)
else:
print("❌ Empty payload, message not sent")
except KeyboardInterrupt:
print("\n❌ Cancelled")
def show_topics(self):
"""Show configured topics"""
print("\n📋 Configured Topics:")
for name, topic in MQTT_TOPICS.items():
print(f" 🏭 {name}: {topic}")
def run(self):
"""Main interactive loop"""
print("📤 MQTT Publisher Test")
print("=" * 50)
print(f"🎯 Broker: {MQTT_BROKER_HOST}:{MQTT_BROKER_PORT}")
if not self.setup_client():
return False
if not self.connect():
print("❌ Failed to connect to MQTT broker")
return False
try:
while True:
self.show_menu()
choice = input("Enter your choice: ").strip()
if not self.handle_menu_choice(choice):
break
except KeyboardInterrupt:
print("\n\n🛑 Interrupted by user")
except Exception as e:
print(f"\n❌ Error: {e}")
finally:
self.disconnect()
print("👋 Goodbye!")
return True
def main():
"""Main function"""
publisher = MQTTPublisher()
try:
publisher.run()
except Exception as e:
print(f"❌ Unexpected error: {e}")
sys.exit(1)
if __name__ == "__main__":
main()


@@ -0,0 +1,242 @@
#!/usr/bin/env python3
"""
MQTT Test Script for USDA Vision Camera System
This script tests MQTT message reception by connecting to the broker
and listening for messages on the configured topics.
Usage:
python mqtt_test.py
The script will:
1. Connect to the MQTT broker
2. Subscribe to all configured topics
3. Display received messages with timestamps
4. Show connection status and statistics
"""
import paho.mqtt.client as mqtt
import time
import json
import signal
import sys
from datetime import datetime
from typing import Dict, Optional
# MQTT Configuration (matching your system config)
MQTT_BROKER_HOST = "192.168.1.110"
MQTT_BROKER_PORT = 1883
MQTT_USERNAME = None # Set if your broker requires authentication
MQTT_PASSWORD = None # Set if your broker requires authentication
# Topics to monitor (from your config.json)
MQTT_TOPICS = {
"vibratory_conveyor": "vision/vibratory_conveyor/state",
"blower_separator": "vision/blower_separator/state"
}
class MQTTTester:
def __init__(self):
self.client: Optional[mqtt.Client] = None
self.connected = False
self.message_count = 0
self.start_time = None
self.last_message_time = None
self.received_messages = []
def setup_client(self):
"""Setup MQTT client with callbacks"""
try:
# Create MQTT client
self.client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION1)
# Set callbacks
self.client.on_connect = self.on_connect
self.client.on_disconnect = self.on_disconnect
self.client.on_message = self.on_message
self.client.on_subscribe = self.on_subscribe
# Set authentication if provided
if MQTT_USERNAME and MQTT_PASSWORD:
self.client.username_pw_set(MQTT_USERNAME, MQTT_PASSWORD)
print(f"🔐 Using authentication: {MQTT_USERNAME}")
return True
except Exception as e:
print(f"❌ Error setting up MQTT client: {e}")
return False
def connect(self):
"""Connect to MQTT broker"""
try:
print(f"🔗 Connecting to MQTT broker at {MQTT_BROKER_HOST}:{MQTT_BROKER_PORT}...")
self.client.connect(MQTT_BROKER_HOST, MQTT_BROKER_PORT, 60)
return True
except Exception as e:
print(f"❌ Failed to connect to MQTT broker: {e}")
return False
def on_connect(self, client, userdata, flags, rc):
"""Callback when client connects to broker"""
if rc == 0:
self.connected = True
self.start_time = datetime.now()
print(f"✅ Successfully connected to MQTT broker!")
print(f"📅 Connection time: {self.start_time.strftime('%Y-%m-%d %H:%M:%S')}")
print()
# Subscribe to all topics
print("📋 Subscribing to topics:")
for machine_name, topic in MQTT_TOPICS.items():
result, mid = client.subscribe(topic)
if result == mqtt.MQTT_ERR_SUCCESS:
print(f"{machine_name}: {topic}")
else:
print(f"{machine_name}: {topic} (error: {result})")
print()
print("🎧 Listening for MQTT messages...")
print(" (Manually turn machines on/off to trigger messages)")
print(" (Press Ctrl+C to stop)")
print("-" * 60)
else:
self.connected = False
print(f"❌ Connection failed with return code {rc}")
print(" Return codes:")
print(" 0: Connection successful")
print(" 1: Connection refused - incorrect protocol version")
print(" 2: Connection refused - invalid client identifier")
print(" 3: Connection refused - server unavailable")
print(" 4: Connection refused - bad username or password")
print(" 5: Connection refused - not authorised")
def on_disconnect(self, client, userdata, rc):
"""Callback when client disconnects from broker"""
self.connected = False
if rc != 0:
print(f"🔌 Unexpected disconnection from MQTT broker (code: {rc})")
else:
print(f"🔌 Disconnected from MQTT broker")
def on_subscribe(self, client, userdata, mid, granted_qos):
"""Callback when subscription is confirmed"""
print(f"📋 Subscription confirmed (mid: {mid}, QoS: {granted_qos})")
def on_message(self, client, userdata, msg):
"""Callback when a message is received"""
try:
# Decode message
topic = msg.topic
payload = msg.payload.decode("utf-8").strip()
timestamp = datetime.now()
# Update statistics
self.message_count += 1
self.last_message_time = timestamp
# Find machine name
machine_name = "unknown"
for name, configured_topic in MQTT_TOPICS.items():
if topic == configured_topic:
machine_name = name
break
# Store message
message_data = {
"timestamp": timestamp,
"topic": topic,
"machine": machine_name,
"payload": payload,
"message_number": self.message_count
}
self.received_messages.append(message_data)
# Display message
time_str = timestamp.strftime('%H:%M:%S.%f')[:-3] # Include milliseconds
print(f"📡 [{time_str}] Message #{self.message_count}")
print(f" 🏭 Machine: {machine_name}")
print(f" 📍 Topic: {topic}")
print(f" 📄 Payload: '{payload}'")
print(f" 📊 Total messages: {self.message_count}")
print("-" * 60)
except Exception as e:
print(f"❌ Error processing message: {e}")
def show_statistics(self):
"""Show connection and message statistics"""
print("\n" + "=" * 60)
print("📊 MQTT TEST STATISTICS")
print("=" * 60)
if self.start_time:
runtime = datetime.now() - self.start_time
print(f"⏱️ Runtime: {runtime}")
print(f"🔗 Connected: {'Yes' if self.connected else 'No'}")
print(f"📡 Messages received: {self.message_count}")
if self.last_message_time:
print(f"🕐 Last message: {self.last_message_time.strftime('%Y-%m-%d %H:%M:%S')}")
if self.received_messages:
print(f"\n📋 Message Summary:")
for msg in self.received_messages[-5:]: # Show last 5 messages
time_str = msg["timestamp"].strftime('%H:%M:%S')
print(f" [{time_str}] {msg['machine']}: {msg['payload']}")
print("=" * 60)
def run(self):
"""Main test loop"""
print("🧪 MQTT Message Reception Test")
print("=" * 60)
print(f"🎯 Broker: {MQTT_BROKER_HOST}:{MQTT_BROKER_PORT}")
print(f"📋 Topics: {list(MQTT_TOPICS.values())}")
print()
# Setup signal handler for graceful shutdown
def signal_handler(sig, frame):
print(f"\n\n🛑 Received interrupt signal, shutting down...")
self.show_statistics()
if self.client and self.connected:
self.client.disconnect()
sys.exit(0)
signal.signal(signal.SIGINT, signal_handler)
# Setup and connect
if not self.setup_client():
return False
if not self.connect():
return False
# Start the client loop
try:
self.client.loop_forever()
except KeyboardInterrupt:
pass
except Exception as e:
print(f"❌ Error in main loop: {e}")
return True
def main():
"""Main function"""
tester = MQTTTester()
try:
success = tester.run()
if not success:
print("❌ Test failed")
sys.exit(1)
except Exception as e:
print(f"❌ Unexpected error: {e}")
sys.exit(1)
if __name__ == "__main__":
main()

View File

@@ -0,0 +1,4 @@
mvsdk.py: camera SDK interface library; reference documentation at Windows SDK installation directory\Document\MVSDK_API_CHS.chm
grab.py: captures images with the SDK and saves them to disk
cv_grab.py: captures images with the SDK and converts them to OpenCV's image format

View File

@@ -0,0 +1,607 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "intro",
"metadata": {},
"source": [
"# Camera Status and Availability Testing\n",
"\n",
"This notebook tests various methods to check camera status and availability before attempting to capture images.\n",
"\n",
"## Key Functions to Test:\n",
"- `CameraIsOpened()` - Check if camera is already opened by another process\n",
"- `CameraInit()` - Try to initialize and catch specific error codes\n",
"- `CameraGetImageBuffer()` - Test actual image capture with timeout\n",
"- Error code analysis for different failure scenarios"
]
},
{
"cell_type": "code",
"execution_count": 26,
"id": "imports",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Libraries imported successfully!\n",
"Platform: Linux\n"
]
}
],
"source": [
"# Import required libraries\n",
"import os\n",
"import sys\n",
"import time\n",
"import numpy as np\n",
"import cv2\n",
"import platform\n",
"from datetime import datetime\n",
"\n",
"# Add the python demo directory to path to import mvsdk\n",
"sys.path.append('../python demo')\n",
"import mvsdk\n",
"\n",
"print(\"Libraries imported successfully!\")\n",
"print(f\"Platform: {platform.system()}\")"
]
},
{
"cell_type": "code",
"execution_count": 27,
"id": "error-codes",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Relevant Camera Status Error Codes:\n",
"========================================\n",
"CAMERA_STATUS_SUCCESS: 0\n",
"CAMERA_STATUS_DEVICE_IS_OPENED: -18\n",
"CAMERA_STATUS_DEVICE_IS_CLOSED: -19\n",
"CAMERA_STATUS_ACCESS_DENY: -45\n",
"CAMERA_STATUS_DEVICE_LOST: -38\n",
"CAMERA_STATUS_TIME_OUT: -12\n",
"CAMERA_STATUS_BUSY: -28\n",
"CAMERA_STATUS_NO_DEVICE_FOUND: -16\n"
]
}
],
"source": [
"# Let's examine the relevant error codes from the SDK\n",
"print(\"Relevant Camera Status Error Codes:\")\n",
"print(\"=\" * 40)\n",
"print(f\"CAMERA_STATUS_SUCCESS: {mvsdk.CAMERA_STATUS_SUCCESS}\")\n",
"print(f\"CAMERA_STATUS_DEVICE_IS_OPENED: {mvsdk.CAMERA_STATUS_DEVICE_IS_OPENED}\")\n",
"print(f\"CAMERA_STATUS_DEVICE_IS_CLOSED: {mvsdk.CAMERA_STATUS_DEVICE_IS_CLOSED}\")\n",
"print(f\"CAMERA_STATUS_ACCESS_DENY: {mvsdk.CAMERA_STATUS_ACCESS_DENY}\")\n",
"print(f\"CAMERA_STATUS_DEVICE_LOST: {mvsdk.CAMERA_STATUS_DEVICE_LOST}\")\n",
"print(f\"CAMERA_STATUS_TIME_OUT: {mvsdk.CAMERA_STATUS_TIME_OUT}\")\n",
"print(f\"CAMERA_STATUS_BUSY: {mvsdk.CAMERA_STATUS_BUSY}\")\n",
"print(f\"CAMERA_STATUS_NO_DEVICE_FOUND: {mvsdk.CAMERA_STATUS_NO_DEVICE_FOUND}\")"
]
},
{
"cell_type": "code",
"execution_count": 28,
"id": "status-functions",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Camera Availability Check\n",
"==============================\n",
"✓ SDK initialized successfully\n",
"✓ Found 2 camera(s)\n",
" 0: Blower-Yield-Cam (192.168.1.165-192.168.1.54)\n",
" 1: Cracker-Cam (192.168.1.167-192.168.1.54)\n",
"\n",
"Testing camera 0: Blower-Yield-Cam\n",
"✓ Camera is available (not opened by another process)\n",
"✓ Camera initialized successfully\n",
"✓ Camera closed after testing\n",
"\n",
"Testing camera 1: Cracker-Cam\n",
"✓ Camera is available (not opened by another process)\n",
"✓ Camera initialized successfully\n",
"✓ Camera closed after testing\n",
"\n",
"Results for 2 cameras:\n",
" Camera 0: AVAILABLE\n",
" Camera 1: AVAILABLE\n"
]
}
],
"source": [
"def check_camera_availability():\n",
" \"\"\"\n",
" Comprehensive camera availability check\n",
" \"\"\"\n",
" print(\"Camera Availability Check\")\n",
" print(\"=\" * 30)\n",
" \n",
" # Step 1: Initialize SDK\n",
" try:\n",
" mvsdk.CameraSdkInit(1)\n",
" print(\"✓ SDK initialized successfully\")\n",
" except Exception as e:\n",
" print(f\"✗ SDK initialization failed: {e}\")\n",
" return None, \"SDK_INIT_FAILED\"\n",
" \n",
" # Step 2: Enumerate cameras\n",
" try:\n",
" DevList = mvsdk.CameraEnumerateDevice()\n",
" nDev = len(DevList)\n",
" print(f\"✓ Found {nDev} camera(s)\")\n",
" \n",
" if nDev < 1:\n",
" print(\"✗ No cameras detected\")\n",
" return None, \"NO_CAMERAS\"\n",
" \n",
" for i, DevInfo in enumerate(DevList):\n",
" print(f\" {i}: {DevInfo.GetFriendlyName()} ({DevInfo.GetPortType()})\")\n",
" \n",
" except Exception as e:\n",
" print(f\"✗ Camera enumeration failed: {e}\")\n",
" return None, \"ENUM_FAILED\"\n",
" \n",
" # Step 3: Check all cameras\n",
" camera_results = []\n",
" \n",
" for i, DevInfo in enumerate(DevList):\n",
" print(f\"\\nTesting camera {i}: {DevInfo.GetFriendlyName()}\")\n",
" \n",
" # Check if camera is already opened\n",
" try:\n",
" is_opened = mvsdk.CameraIsOpened(DevInfo)\n",
" if is_opened:\n",
" print(\"✗ Camera is already opened by another process\")\n",
" camera_results.append((DevInfo, \"ALREADY_OPENED\"))\n",
" continue\n",
" else:\n",
" print(\"✓ Camera is available (not opened by another process)\")\n",
" except Exception as e:\n",
" print(f\"⚠ Could not check if camera is opened: {e}\")\n",
" \n",
" # Try to initialize camera\n",
" try:\n",
" hCamera = mvsdk.CameraInit(DevInfo, -1, -1)\n",
" print(\"✓ Camera initialized successfully\")\n",
" camera_results.append((hCamera, \"AVAILABLE\"))\n",
" \n",
" # Close the camera after testing\n",
" try:\n",
" mvsdk.CameraUnInit(hCamera)\n",
" print(\"✓ Camera closed after testing\")\n",
" except Exception as e:\n",
" print(f\"⚠ Warning: Could not close camera: {e}\")\n",
" \n",
" except mvsdk.CameraException as e:\n",
" print(f\"✗ Camera initialization failed: {e.error_code} - {e.message}\")\n",
" \n",
" # Analyze specific error codes\n",
" if e.error_code == mvsdk.CAMERA_STATUS_DEVICE_IS_OPENED:\n",
" camera_results.append((DevInfo, \"DEVICE_OPENED\"))\n",
" elif e.error_code == mvsdk.CAMERA_STATUS_ACCESS_DENY:\n",
" camera_results.append((DevInfo, \"ACCESS_DENIED\"))\n",
" elif e.error_code == mvsdk.CAMERA_STATUS_DEVICE_LOST:\n",
" camera_results.append((DevInfo, \"DEVICE_LOST\"))\n",
" else:\n",
" camera_results.append((DevInfo, f\"INIT_ERROR_{e.error_code}\"))\n",
" \n",
" except Exception as e:\n",
" print(f\"✗ Unexpected error during initialization: {e}\")\n",
" camera_results.append((DevInfo, \"UNEXPECTED_ERROR\"))\n",
" \n",
" return camera_results\n",
"\n",
"# Test the function\n",
"camera_results = check_camera_availability()\n",
"print(f\"\\nResults for {len(camera_results)} cameras:\")\n",
"for i, (camera_info, status) in enumerate(camera_results):\n",
" if hasattr(camera_info, 'GetFriendlyName'):\n",
" name = camera_info.GetFriendlyName()\n",
" else:\n",
" name = f\"Camera {i}\"\n",
" print(f\" {name}: {status}\")"
]
},
{
"cell_type": "code",
"execution_count": 29,
"id": "test-capture-availability",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"Testing capture readiness for 2 available camera(s):\n",
"\n",
"Testing camera 0 capture readiness...\n",
"\n",
"Testing Camera Capture Readiness\n",
"===================================\n",
"✓ Camera capabilities retrieved\n",
"✓ Camera type: Color\n",
"✓ Basic camera configuration set\n",
"✓ Camera started\n",
"✓ Frame buffer allocated\n",
"\n",
"Testing image capture...\n",
"✓ Image captured successfully: 1280x1024\n",
"✓ Image processed and buffer released\n",
"✓ Cleanup completed\n",
"Capture Ready for Blower-Yield-Cam: True\n",
"\n",
"Testing camera 1 capture readiness...\n",
"\n",
"Testing Camera Capture Readiness\n",
"===================================\n",
"✓ Camera capabilities retrieved\n",
"✓ Camera type: Color\n",
"✓ Basic camera configuration set\n",
"✓ Camera started\n",
"✓ Frame buffer allocated\n",
"\n",
"Testing image capture...\n",
"✓ Image captured successfully: 1280x1024\n",
"✓ Image processed and buffer released\n",
"✓ Cleanup completed\n",
"Capture Ready for Cracker-Cam: True\n"
]
}
],
"source": [
"def test_camera_capture_readiness(hCamera):\n",
" \"\"\"\n",
" Test if camera is ready for image capture\n",
" \"\"\"\n",
" if not isinstance(hCamera, int):\n",
" print(\"Camera not properly initialized, skipping capture test\")\n",
" return False\n",
" \n",
" print(\"\\nTesting Camera Capture Readiness\")\n",
" print(\"=\" * 35)\n",
" \n",
" try:\n",
" # Get camera capabilities\n",
" cap = mvsdk.CameraGetCapability(hCamera)\n",
" print(\"✓ Camera capabilities retrieved\")\n",
" \n",
" # Check camera type\n",
" monoCamera = (cap.sIspCapacity.bMonoSensor != 0)\n",
" print(f\"✓ Camera type: {'Monochrome' if monoCamera else 'Color'}\")\n",
" \n",
" # Set basic configuration\n",
" if monoCamera:\n",
" mvsdk.CameraSetIspOutFormat(hCamera, mvsdk.CAMERA_MEDIA_TYPE_MONO8)\n",
" else:\n",
" mvsdk.CameraSetIspOutFormat(hCamera, mvsdk.CAMERA_MEDIA_TYPE_BGR8)\n",
" \n",
" mvsdk.CameraSetTriggerMode(hCamera, 0) # Continuous mode\n",
" mvsdk.CameraSetAeState(hCamera, 0) # Manual exposure\n",
" mvsdk.CameraSetExposureTime(hCamera, 5000) # 5ms exposure\n",
" print(\"✓ Basic camera configuration set\")\n",
" \n",
" # Start camera\n",
" mvsdk.CameraPlay(hCamera)\n",
" print(\"✓ Camera started\")\n",
" \n",
" # Allocate buffer\n",
" FrameBufferSize = cap.sResolutionRange.iWidthMax * cap.sResolutionRange.iHeightMax * (1 if monoCamera else 3)\n",
" pFrameBuffer = mvsdk.CameraAlignMalloc(FrameBufferSize, 16)\n",
" print(\"✓ Frame buffer allocated\")\n",
" \n",
" # Test image capture with short timeout\n",
" print(\"\\nTesting image capture...\")\n",
" try:\n",
" pRawData, FrameHead = mvsdk.CameraGetImageBuffer(hCamera, 1000) # 1 second timeout\n",
" print(f\"✓ Image captured successfully: {FrameHead.iWidth}x{FrameHead.iHeight}\")\n",
" \n",
" # Process and release\n",
" mvsdk.CameraImageProcess(hCamera, pRawData, pFrameBuffer, FrameHead)\n",
" mvsdk.CameraReleaseImageBuffer(hCamera, pRawData)\n",
" print(\"✓ Image processed and buffer released\")\n",
" \n",
" capture_success = True\n",
" \n",
" except mvsdk.CameraException as e:\n",
" print(f\"✗ Image capture failed: {e.error_code} - {e.message}\")\n",
" \n",
" if e.error_code == mvsdk.CAMERA_STATUS_TIME_OUT:\n",
" print(\" → Camera timeout - may be busy or not streaming\")\n",
" elif e.error_code == mvsdk.CAMERA_STATUS_DEVICE_LOST:\n",
" print(\" → Device lost - camera disconnected\")\n",
" elif e.error_code == mvsdk.CAMERA_STATUS_BUSY:\n",
" print(\" → Camera busy - may be used by another process\")\n",
" \n",
" capture_success = False\n",
" \n",
" # Cleanup\n",
" mvsdk.CameraAlignFree(pFrameBuffer)\n",
" print(\"✓ Cleanup completed\")\n",
" \n",
" return capture_success\n",
" \n",
" except Exception as e:\n",
" print(f\"✗ Capture readiness test failed: {e}\")\n",
" return False\n",
"\n",
"# Test capture readiness for available cameras\n",
"available_cameras = [(cam, stat) for cam, stat in camera_results if stat == \"AVAILABLE\"]\n",
"\n",
"if available_cameras:\n",
" print(f\"\\nTesting capture readiness for {len(available_cameras)} available camera(s):\")\n",
" for i, (camera_handle, status) in enumerate(available_cameras):\n",
" if hasattr(camera_handle, 'GetFriendlyName'):\n",
" # This shouldn't happen for AVAILABLE cameras, but just in case\n",
" print(f\"\\nCamera {i}: Invalid handle\")\n",
" continue\n",
" \n",
" print(f\"\\nTesting camera {i} capture readiness...\")\n",
" # Re-initialize the camera for testing since we closed it earlier\n",
" try:\n",
" # Find the camera info from the original results\n",
" DevList = mvsdk.CameraEnumerateDevice()\n",
" if i < len(DevList):\n",
" DevInfo = DevList[i]\n",
" hCamera = mvsdk.CameraInit(DevInfo, -1, -1)\n",
" capture_ready = test_camera_capture_readiness(hCamera)\n",
" print(f\"Capture Ready for {DevInfo.GetFriendlyName()}: {capture_ready}\")\n",
" mvsdk.CameraUnInit(hCamera)\n",
" else:\n",
" print(f\"Could not re-initialize camera {i}\")\n",
" except Exception as e:\n",
" print(f\"Error testing camera {i}: {e}\")\n",
"else:\n",
" print(\"\\nNo cameras are available for capture testing\")\n",
" print(\"Camera statuses:\")\n",
" for i, (cam_info, status) in enumerate(camera_results):\n",
" if hasattr(cam_info, 'GetFriendlyName'):\n",
" name = cam_info.GetFriendlyName()\n",
" else:\n",
" name = f\"Camera {i}\"\n",
" print(f\" {name}: {status}\")"
]
},
{
"cell_type": "code",
"execution_count": 30,
"id": "comprehensive-check",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"==================================================\n",
"COMPREHENSIVE CAMERA CHECK\n",
"==================================================\n",
"Camera Availability Check\n",
"==============================\n",
"✓ SDK initialized successfully\n",
"✓ Found 2 camera(s)\n",
" 0: Blower-Yield-Cam (192.168.1.165-192.168.1.54)\n",
" 1: Cracker-Cam (192.168.1.167-192.168.1.54)\n",
"\n",
"Testing camera 0: Blower-Yield-Cam\n",
"✓ Camera is available (not opened by another process)\n",
"✓ Camera initialized successfully\n",
"✓ Camera closed after testing\n",
"\n",
"Testing camera 1: Cracker-Cam\n",
"✓ Camera is available (not opened by another process)\n",
"✓ Camera initialized successfully\n",
"✓ Camera closed after testing\n",
"\n",
"==================================================\n",
"FINAL RESULTS:\n",
"Camera Available: False\n",
"Capture Ready: False\n",
"Status: (42, 'AVAILABLE')\n",
"==================================================\n"
]
}
],
"source": [
"def comprehensive_camera_check():\n",
" \"\"\"\n",
" Complete camera availability and readiness check\n",
" Returns: (available, ready, handle_or_info, status_message)\n",
" \"\"\"\n",
" # Check availability\n",
" handle_or_info, status = check_camera_availability()\n",
" \n",
" available = status == \"AVAILABLE\"\n",
" ready = False\n",
" \n",
" if available:\n",
" # Test capture readiness\n",
" ready = test_camera_capture_readiness(handle_or_info)\n",
" \n",
" # Close camera after testing\n",
" try:\n",
" mvsdk.CameraUnInit(handle_or_info)\n",
" print(\"✓ Camera closed after testing\")\n",
" except:\n",
" pass\n",
" \n",
" return available, ready, handle_or_info, status\n",
"\n",
"# Run comprehensive check\n",
"print(\"\\n\" + \"=\" * 50)\n",
"print(\"COMPREHENSIVE CAMERA CHECK\")\n",
"print(\"=\" * 50)\n",
"\n",
"available, ready, info, status_msg = comprehensive_camera_check()\n",
"\n",
"print(\"\\n\" + \"=\" * 50)\n",
"print(\"FINAL RESULTS:\")\n",
"print(f\"Camera Available: {available}\")\n",
"print(f\"Capture Ready: {ready}\")\n",
"print(f\"Status: {status_msg}\")\n",
"print(\"=\" * 50)"
]
},
{
"cell_type": "code",
"execution_count": 31,
"id": "status-check-function",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"Testing Simple Camera Ready Check:\n",
"========================================\n",
"Ready: True\n",
"Message: Camera 'Blower-Yield-Cam' is ready for capture\n",
"Camera: Blower-Yield-Cam\n"
]
}
],
"source": [
"def is_camera_ready_for_capture():\n",
" \"\"\"\n",
" Simple function to check if camera is ready for capture.\n",
" Returns: (ready: bool, message: str, camera_info: object or None)\n",
" \n",
" This is the function you can use in your main capture script.\n",
" \"\"\"\n",
" try:\n",
" # Initialize SDK\n",
" mvsdk.CameraSdkInit(1)\n",
" \n",
" # Enumerate cameras\n",
" DevList = mvsdk.CameraEnumerateDevice()\n",
" if len(DevList) < 1:\n",
" return False, \"No cameras found\", None\n",
" \n",
" DevInfo = DevList[0]\n",
" \n",
" # Check if already opened\n",
" try:\n",
" if mvsdk.CameraIsOpened(DevInfo):\n",
" return False, f\"Camera '{DevInfo.GetFriendlyName()}' is already opened by another process\", DevInfo\n",
" except:\n",
" pass # Some cameras might not support this check\n",
" \n",
" # Try to initialize\n",
" try:\n",
" hCamera = mvsdk.CameraInit(DevInfo, -1, -1)\n",
" \n",
" # Quick capture test\n",
" try:\n",
" # Basic setup\n",
" mvsdk.CameraSetTriggerMode(hCamera, 0)\n",
" mvsdk.CameraPlay(hCamera)\n",
" \n",
" # Try to get one frame with short timeout\n",
" pRawData, FrameHead = mvsdk.CameraGetImageBuffer(hCamera, 500) # 0.5 second timeout\n",
" mvsdk.CameraReleaseImageBuffer(hCamera, pRawData)\n",
" \n",
" # Success - close and return\n",
" mvsdk.CameraUnInit(hCamera)\n",
" return True, f\"Camera '{DevInfo.GetFriendlyName()}' is ready for capture\", DevInfo\n",
" \n",
" except mvsdk.CameraException as e:\n",
" mvsdk.CameraUnInit(hCamera)\n",
" if e.error_code == mvsdk.CAMERA_STATUS_TIME_OUT:\n",
" return False, \"Camera timeout - may be busy or not streaming properly\", DevInfo\n",
" else:\n",
" return False, f\"Camera capture test failed: {e.message}\", DevInfo\n",
" \n",
" except mvsdk.CameraException as e:\n",
" if e.error_code == mvsdk.CAMERA_STATUS_DEVICE_IS_OPENED:\n",
" return False, f\"Camera '{DevInfo.GetFriendlyName()}' is already in use\", DevInfo\n",
" elif e.error_code == mvsdk.CAMERA_STATUS_ACCESS_DENY:\n",
" return False, f\"Access denied to camera '{DevInfo.GetFriendlyName()}'\", DevInfo\n",
" else:\n",
" return False, f\"Camera initialization failed: {e.message}\", DevInfo\n",
" \n",
" except Exception as e:\n",
" return False, f\"Camera check failed: {str(e)}\", None\n",
"\n",
"# Test the simple function\n",
"print(\"\\nTesting Simple Camera Ready Check:\")\n",
"print(\"=\" * 40)\n",
"\n",
"ready, message, camera_info = is_camera_ready_for_capture()\n",
"print(f\"Ready: {ready}\")\n",
"print(f\"Message: {message}\")\n",
"if camera_info:\n",
" print(f\"Camera: {camera_info.GetFriendlyName()}\")"
]
},
{
"cell_type": "markdown",
"id": "usage-example",
"metadata": {},
"source": [
"## Usage Example\n",
"\n",
"Here's how you can integrate the camera status check into your capture script:\n",
"\n",
"```python\n",
"# Before attempting to capture images\n",
"ready, message, camera_info = is_camera_ready_for_capture()\n",
"\n",
"if not ready:\n",
" print(f\"Camera not ready: {message}\")\n",
" # Handle the error appropriately\n",
" return False\n",
"\n",
"print(f\"Camera ready: {message}\")\n",
"# Proceed with normal capture logic\n",
"```\n",
"\n",
"## Key Findings\n",
"\n",
"1. **`CameraIsOpened()`** - Checks if camera is opened by another process\n",
"2. **`CameraInit()` error codes** - Provide specific failure reasons\n",
"3. **Quick capture test** - Verifies camera is actually streaming\n",
"4. **Timeout handling** - Detects if camera is busy/unresponsive\n",
"\n",
"The most reliable approach is to:\n",
"1. Check if camera exists\n",
"2. Check if it's already opened\n",
"3. Try to initialize it\n",
"4. Test actual image capture with short timeout\n",
"5. Clean up properly"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "USDA-vision-cameras",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.2"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -0,0 +1,495 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# GigE Camera Test Setup\n",
"\n",
"This notebook helps you test and configure your GigE cameras for the USDA vision project.\n",
"\n",
"## Key Features:\n",
"- Test camera connectivity\n",
"- Display images inline (no GUI needed)\n",
"- Save test images/videos to `/storage`\n",
"- Configure camera parameters\n",
"- Test recording functionality"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"✅ All imports successful!\n",
"OpenCV version: 4.11.0\n",
"NumPy version: 2.3.2\n"
]
}
],
"source": [
"import cv2\n",
"import numpy as np\n",
"import matplotlib.pyplot as plt\n",
"import os\n",
"from datetime import datetime\n",
"import time\n",
"from pathlib import Path\n",
"import imageio\n",
"from tqdm import tqdm\n",
"\n",
"# Configure matplotlib for inline display\n",
"plt.rcParams['figure.figsize'] = (12, 8)\n",
"plt.rcParams['image.cmap'] = 'gray'\n",
"\n",
"print(\"✅ All imports successful!\")\n",
"print(f\"OpenCV version: {cv2.__version__}\")\n",
"print(f\"NumPy version: {np.__version__}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Utility Functions"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"✅ Utility functions loaded!\n"
]
}
],
"source": [
"def display_image(image, title=\"Image\", figsize=(10, 8)):\n",
" \"\"\"Display image inline in Jupyter notebook\"\"\"\n",
" plt.figure(figsize=figsize)\n",
" if len(image.shape) == 3:\n",
" # Convert BGR to RGB for matplotlib\n",
" image_rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)\n",
" plt.imshow(image_rgb)\n",
" else:\n",
" plt.imshow(image, cmap='gray')\n",
" plt.title(title)\n",
" plt.axis('off')\n",
" plt.tight_layout()\n",
" plt.show()\n",
"\n",
"def save_image_to_storage(image, filename_prefix=\"test_image\"):\n",
" \"\"\"Save image to /storage with timestamp\"\"\"\n",
" timestamp = datetime.now().strftime(\"%Y%m%d_%H%M%S\")\n",
" filename = f\"{filename_prefix}_{timestamp}.jpg\"\n",
" filepath = f\"/storage/{filename}\"\n",
" \n",
" success = cv2.imwrite(filepath, image)\n",
" if success:\n",
" print(f\"✅ Image saved: {filepath}\")\n",
" return filepath\n",
" else:\n",
" print(f\"❌ Failed to save image: {filepath}\")\n",
" return None\n",
"\n",
"def create_storage_subdir(subdir_name):\n",
" \"\"\"Create subdirectory in /storage\"\"\"\n",
" path = Path(f\"/storage/{subdir_name}\")\n",
" path.mkdir(exist_ok=True)\n",
" print(f\"📁 Directory ready: {path}\")\n",
" return str(path)\n",
"\n",
"def list_available_cameras():\n",
" \"\"\"List all available camera devices\"\"\"\n",
" print(\"🔍 Scanning for available cameras...\")\n",
" available_cameras = []\n",
" \n",
" # Test camera indices 0-10\n",
" for i in range(11):\n",
" cap = cv2.VideoCapture(i)\n",
" if cap.isOpened():\n",
" ret, frame = cap.read()\n",
" if ret:\n",
" available_cameras.append(i)\n",
" print(f\"📷 Camera {i}: Available (Resolution: {frame.shape[1]}x{frame.shape[0]})\")\n",
" cap.release()\n",
" else:\n",
" # Try with different backends for GigE cameras\n",
" cap = cv2.VideoCapture(i, cv2.CAP_GSTREAMER)\n",
" if cap.isOpened():\n",
" ret, frame = cap.read()\n",
" if ret:\n",
" available_cameras.append(i)\n",
" print(f\"📷 Camera {i}: Available via GStreamer (Resolution: {frame.shape[1]}x{frame.shape[0]})\")\n",
" cap.release()\n",
" \n",
" if not available_cameras:\n",
" print(\"❌ No cameras found\")\n",
" \n",
" return available_cameras\n",
"\n",
"print(\"✅ Utility functions loaded!\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Step 1: Check Storage Directory"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Storage directory exists: True\n",
"Storage directory writable: True\n",
"📁 Directory ready: /storage/test_images\n",
"📁 Directory ready: /storage/test_videos\n",
"📁 Directory ready: /storage/camera1\n",
"📁 Directory ready: /storage/camera2\n"
]
}
],
"source": [
"# Check storage directory\n",
"storage_path = Path(\"/storage\")\n",
"print(f\"Storage directory exists: {storage_path.exists()}\")\n",
"print(f\"Storage directory writable: {os.access('/storage', os.W_OK)}\")\n",
"\n",
"# Create test subdirectories\n",
"test_images_dir = create_storage_subdir(\"test_images\")\n",
"test_videos_dir = create_storage_subdir(\"test_videos\")\n",
"camera1_dir = create_storage_subdir(\"camera1\")\n",
"camera2_dir = create_storage_subdir(\"camera2\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Step 2: Scan for Available Cameras"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"🔍 Scanning for available cameras...\n",
"❌ No cameras found\n",
"\n",
"📊 Summary: Found 0 camera(s): []\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"[ WARN:0@9.977] global cap_v4l.cpp:913 open VIDEOIO(V4L2:/dev/video0): can't open camera by index\n",
"[ WARN:0@9.977] global obsensor_stream_channel_v4l2.cpp:82 xioctl ioctl: fd=-1, req=-2140645888\n",
"[ WARN:0@9.977] global obsensor_stream_channel_v4l2.cpp:138 queryUvcDeviceInfoList ioctl error return: 9\n",
"[ WARN:0@9.977] global obsensor_stream_channel_v4l2.cpp:82 xioctl ioctl: fd=-1, req=-2140645888\n",
"[ WARN:0@9.977] global obsensor_stream_channel_v4l2.cpp:138 queryUvcDeviceInfoList ioctl error return: 9\n",
"[ERROR:0@9.977] global obsensor_uvc_stream_channel.cpp:158 getStreamChannelGroup Camera index out of range\n",
"[ WARN:0@9.977] global cap_v4l.cpp:913 open VIDEOIO(V4L2:/dev/video1): can't open camera by index\n",
"[ WARN:0@9.977] global obsensor_stream_channel_v4l2.cpp:82 xioctl ioctl: fd=-1, req=-2140645888\n",
"[ WARN:0@9.977] global obsensor_stream_channel_v4l2.cpp:138 queryUvcDeviceInfoList ioctl error return: 9\n",
"[ WARN:0@9.977] global obsensor_stream_channel_v4l2.cpp:82 xioctl ioctl: fd=-1, req=-2140645888\n",
"[ WARN:0@9.977] global obsensor_stream_channel_v4l2.cpp:138 queryUvcDeviceInfoList ioctl error return: 9\n",
"[ERROR:0@9.977] global obsensor_uvc_stream_channel.cpp:158 getStreamChannelGroup Camera index out of range\n",
"[ WARN:0@9.977] global cap_v4l.cpp:913 open VIDEOIO(V4L2:/dev/video2): can't open camera by index\n",
"[ WARN:0@9.977] global obsensor_stream_channel_v4l2.cpp:82 xioctl ioctl: fd=-1, req=-2140645888\n",
"[ WARN:0@9.977] global obsensor_stream_channel_v4l2.cpp:138 queryUvcDeviceInfoList ioctl error return: 9\n",
"[ WARN:0@9.977] global obsensor_stream_channel_v4l2.cpp:82 xioctl ioctl: fd=-1, req=-2140645888\n",
"[ WARN:0@9.977] global obsensor_stream_channel_v4l2.cpp:138 queryUvcDeviceInfoList ioctl error return: 9\n",
"[ERROR:0@9.977] global obsensor_uvc_stream_channel.cpp:158 getStreamChannelGroup Camera index out of range\n",
"[ WARN:0@9.977] global cap_v4l.cpp:913 open VIDEOIO(V4L2:/dev/video3): can't open camera by index\n",
"[ WARN:0@9.977] global obsensor_stream_channel_v4l2.cpp:82 xioctl ioctl: fd=-1, req=-2140645888\n",
"[ WARN:0@9.977] global obsensor_stream_channel_v4l2.cpp:138 queryUvcDeviceInfoList ioctl error return: 9\n",
"[ WARN:0@9.977] global obsensor_stream_channel_v4l2.cpp:82 xioctl ioctl: fd=-1, req=-2140645888\n",
"[ WARN:0@9.977] global obsensor_stream_channel_v4l2.cpp:138 queryUvcDeviceInfoList ioctl error return: 9\n",
"[ERROR:0@9.977] global obsensor_uvc_stream_channel.cpp:158 getStreamChannelGroup Camera index out of range\n",
"[ WARN:0@9.977] global cap_v4l.cpp:913 open VIDEOIO(V4L2:/dev/video4): can't open camera by index\n",
"[ WARN:0@9.978] global obsensor_stream_channel_v4l2.cpp:82 xioctl ioctl: fd=-1, req=-2140645888\n",
"[ WARN:0@9.978] global obsensor_stream_channel_v4l2.cpp:138 queryUvcDeviceInfoList ioctl error return: 9\n",
"[ WARN:0@9.978] global obsensor_stream_channel_v4l2.cpp:82 xioctl ioctl: fd=-1, req=-2140645888\n",
"[ WARN:0@9.978] global obsensor_stream_channel_v4l2.cpp:138 queryUvcDeviceInfoList ioctl error return: 9\n",
"[ERROR:0@9.978] global obsensor_uvc_stream_channel.cpp:158 getStreamChannelGroup Camera index out of range\n",
"[ WARN:0@9.978] global cap_v4l.cpp:913 open VIDEOIO(V4L2:/dev/video5): can't open camera by index\n",
"[ WARN:0@9.978] global obsensor_stream_channel_v4l2.cpp:82 xioctl ioctl: fd=-1, req=-2140645888\n",
"[ WARN:0@9.978] global obsensor_stream_channel_v4l2.cpp:138 queryUvcDeviceInfoList ioctl error return: 9\n",
"[ WARN:0@9.978] global obsensor_stream_channel_v4l2.cpp:82 xioctl ioctl: fd=-1, req=-2140645888\n",
"[ WARN:0@9.978] global obsensor_stream_channel_v4l2.cpp:138 queryUvcDeviceInfoList ioctl error return: 9\n",
"[ERROR:0@9.978] global obsensor_uvc_stream_channel.cpp:158 getStreamChannelGroup Camera index out of range\n",
"[ WARN:0@9.978] global cap_v4l.cpp:913 open VIDEOIO(V4L2:/dev/video6): can't open camera by index\n",
"[ WARN:0@9.978] global obsensor_stream_channel_v4l2.cpp:82 xioctl ioctl: fd=-1, req=-2140645888\n",
"[ WARN:0@9.978] global obsensor_stream_channel_v4l2.cpp:138 queryUvcDeviceInfoList ioctl error return: 9\n",
"[ WARN:0@9.978] global obsensor_stream_channel_v4l2.cpp:82 xioctl ioctl: fd=-1, req=-2140645888\n",
"[ WARN:0@9.978] global obsensor_stream_channel_v4l2.cpp:138 queryUvcDeviceInfoList ioctl error return: 9\n",
"[ERROR:0@9.978] global obsensor_uvc_stream_channel.cpp:158 getStreamChannelGroup Camera index out of range\n",
"[ WARN:0@9.978] global cap_v4l.cpp:913 open VIDEOIO(V4L2:/dev/video7): can't open camera by index\n",
"[ WARN:0@9.978] global obsensor_stream_channel_v4l2.cpp:82 xioctl ioctl: fd=-1, req=-2140645888\n",
"[ WARN:0@9.978] global obsensor_stream_channel_v4l2.cpp:138 queryUvcDeviceInfoList ioctl error return: 9\n",
"[ WARN:0@9.978] global obsensor_stream_channel_v4l2.cpp:82 xioctl ioctl: fd=-1, req=-2140645888\n",
"[ WARN:0@9.978] global obsensor_stream_channel_v4l2.cpp:138 queryUvcDeviceInfoList ioctl error return: 9\n",
"[ERROR:0@9.978] global obsensor_uvc_stream_channel.cpp:158 getStreamChannelGroup Camera index out of range\n",
"[ WARN:0@9.978] global cap_v4l.cpp:913 open VIDEOIO(V4L2:/dev/video8): can't open camera by index\n",
"[ WARN:0@9.978] global obsensor_stream_channel_v4l2.cpp:82 xioctl ioctl: fd=-1, req=-2140645888\n",
"[ WARN:0@9.978] global obsensor_stream_channel_v4l2.cpp:138 queryUvcDeviceInfoList ioctl error return: 9\n",
"[ WARN:0@9.978] global obsensor_stream_channel_v4l2.cpp:82 xioctl ioctl: fd=-1, req=-2140645888\n",
"[ WARN:0@9.978] global obsensor_stream_channel_v4l2.cpp:138 queryUvcDeviceInfoList ioctl error return: 9\n",
"[ERROR:0@9.978] global obsensor_uvc_stream_channel.cpp:158 getStreamChannelGroup Camera index out of range\n",
"[ WARN:0@9.978] global cap_v4l.cpp:913 open VIDEOIO(V4L2:/dev/video9): can't open camera by index\n",
"[ WARN:0@9.978] global obsensor_stream_channel_v4l2.cpp:82 xioctl ioctl: fd=-1, req=-2140645888\n",
"[ WARN:0@9.978] global obsensor_stream_channel_v4l2.cpp:138 queryUvcDeviceInfoList ioctl error return: 9\n",
"[ WARN:0@9.978] global obsensor_stream_channel_v4l2.cpp:82 xioctl ioctl: fd=-1, req=-2140645888\n",
"[ WARN:0@9.978] global obsensor_stream_channel_v4l2.cpp:138 queryUvcDeviceInfoList ioctl error return: 9\n",
"[ERROR:0@9.978] global obsensor_uvc_stream_channel.cpp:158 getStreamChannelGroup Camera index out of range\n",
"[ WARN:0@9.978] global cap_v4l.cpp:913 open VIDEOIO(V4L2:/dev/video10): can't open camera by index\n",
"[ WARN:0@9.978] global obsensor_stream_channel_v4l2.cpp:82 xioctl ioctl: fd=-1, req=-2140645888\n",
"[ WARN:0@9.978] global obsensor_stream_channel_v4l2.cpp:138 queryUvcDeviceInfoList ioctl error return: 9\n",
"[ WARN:0@9.979] global obsensor_stream_channel_v4l2.cpp:82 xioctl ioctl: fd=-1, req=-2140645888\n",
"[ WARN:0@9.979] global obsensor_stream_channel_v4l2.cpp:138 queryUvcDeviceInfoList ioctl error return: 9\n",
"[ERROR:0@9.979] global obsensor_uvc_stream_channel.cpp:158 getStreamChannelGroup Camera index out of range\n"
]
}
],
"source": [
"# Scan for cameras\n",
"cameras = list_available_cameras()\n",
"print(f\"\\n📊 Summary: Found {len(cameras)} camera(s): {cameras}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Step 3: Test Individual Camera"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"🔧 Testing camera 0...\n",
" Trying Default backend...\n",
" ❌ Default backend failed to open\n",
" Trying GStreamer backend...\n",
" ❌ GStreamer backend failed to open\n",
" Trying V4L2 backend...\n",
" ❌ V4L2 backend failed to open\n",
" Trying FFmpeg backend...\n",
" ❌ FFmpeg backend failed to open\n",
"❌ Camera 0 not accessible with any backend\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"[ WARN:0@27.995] global cap_v4l.cpp:913 open VIDEOIO(V4L2:/dev/video0): can't open camera by index\n",
"[ WARN:0@27.995] global obsensor_stream_channel_v4l2.cpp:82 xioctl ioctl: fd=-1, req=-2140645888\n",
"[ WARN:0@27.995] global obsensor_stream_channel_v4l2.cpp:138 queryUvcDeviceInfoList ioctl error return: 9\n",
"[ WARN:0@27.995] global obsensor_stream_channel_v4l2.cpp:82 xioctl ioctl: fd=-1, req=-2140645888\n",
"[ WARN:0@27.995] global obsensor_stream_channel_v4l2.cpp:138 queryUvcDeviceInfoList ioctl error return: 9\n",
"[ERROR:0@27.995] global obsensor_uvc_stream_channel.cpp:158 getStreamChannelGroup Camera index out of range\n",
"[ WARN:0@27.996] global cap_v4l.cpp:913 open VIDEOIO(V4L2:/dev/video0): can't open camera by index\n",
"[ WARN:0@27.996] global cap.cpp:478 open VIDEOIO(V4L2): backend is generally available but can't be used to capture by index\n",
"[ WARN:0@27.996] global cap.cpp:478 open VIDEOIO(FFMPEG): backend is generally available but can't be used to capture by index\n"
]
}
],
"source": [
"# Test a specific camera (change camera_id as needed)\n",
"camera_id = 0 # Change this to test different cameras\n",
"\n",
"print(f\"🔧 Testing camera {camera_id}...\")\n",
"\n",
"# Try different backends for GigE cameras\n",
"backends_to_try = [\n",
" (cv2.CAP_ANY, \"Default\"),\n",
" (cv2.CAP_GSTREAMER, \"GStreamer\"),\n",
" (cv2.CAP_V4L2, \"V4L2\"),\n",
" (cv2.CAP_FFMPEG, \"FFmpeg\")\n",
"]\n",
"\n",
"successful_backend = None\n",
"cap = None\n",
"\n",
"for backend, name in backends_to_try:\n",
" print(f\" Trying {name} backend...\")\n",
" cap = cv2.VideoCapture(camera_id, backend)\n",
" if cap.isOpened():\n",
" ret, frame = cap.read()\n",
" if ret:\n",
" print(f\" ✅ {name} backend works!\")\n",
" successful_backend = (backend, name)\n",
" break\n",
" else:\n",
" print(f\" ❌ {name} backend opened but can't read frames\")\n",
" else:\n",
" print(f\" ❌ {name} backend failed to open\")\n",
" cap.release()\n",
"\n",
"if successful_backend:\n",
" backend, backend_name = successful_backend\n",
" cap = cv2.VideoCapture(camera_id, backend)\n",
" \n",
" # Get camera properties\n",
" width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))\n",
" height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))\n",
" fps = cap.get(cv2.CAP_PROP_FPS)\n",
" \n",
" print(f\"\\n📷 Camera {camera_id} Properties ({backend_name}):\")\n",
" print(f\" Resolution: {width}x{height}\")\n",
" print(f\" FPS: {fps}\")\n",
" \n",
" # Capture a test frame\n",
" ret, frame = cap.read()\n",
" if ret:\n",
" print(f\" Frame shape: {frame.shape}\")\n",
" print(f\" Frame dtype: {frame.dtype}\")\n",
" \n",
" # Display the frame\n",
" display_image(frame, f\"Camera {camera_id} Test Frame\")\n",
" \n",
" # Save test image\n",
" save_image_to_storage(frame, f\"camera_{camera_id}_test\")\n",
" else:\n",
" print(\" ❌ Failed to capture frame\")\n",
" \n",
" cap.release()\n",
"else:\n",
" print(f\"❌ Camera {camera_id} not accessible with any backend\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Step 4: Test Video Recording"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Test video recording\n",
"def test_video_recording(camera_id, duration_seconds=5, fps=30):\n",
" \"\"\"Test video recording from camera\"\"\"\n",
" print(f\"🎥 Testing video recording from camera {camera_id} for {duration_seconds} seconds...\")\n",
" \n",
" # Open camera\n",
" cap = cv2.VideoCapture(camera_id)\n",
" if not cap.isOpened():\n",
" print(f\"❌ Cannot open camera {camera_id}\")\n",
" return None\n",
" \n",
" # Get camera properties\n",
" width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))\n",
" height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))\n",
" \n",
" # Create video writer\n",
" timestamp = datetime.now().strftime(\"%Y%m%d_%H%M%S\")\n",
" video_filename = f\"/storage/test_videos/camera_{camera_id}_test_{timestamp}.mp4\"\n",
" \n",
" fourcc = cv2.VideoWriter_fourcc(*'mp4v')\n",
" out = cv2.VideoWriter(video_filename, fourcc, fps, (width, height))\n",
" \n",
" if not out.isOpened():\n",
" print(\"❌ Cannot create video writer\")\n",
" cap.release()\n",
" return None\n",
" \n",
" # Record video\n",
" frames_to_capture = duration_seconds * fps\n",
" frames_captured = 0\n",
" \n",
" print(f\"Recording {frames_to_capture} frames...\")\n",
" \n",
" with tqdm(total=frames_to_capture, desc=\"Recording\") as pbar:\n",
" start_time = time.time()\n",
" \n",
" while frames_captured < frames_to_capture:\n",
" ret, frame = cap.read()\n",
" if ret:\n",
" out.write(frame)\n",
" frames_captured += 1\n",
" pbar.update(1)\n",
" \n",
" # Display first frame\n",
" if frames_captured == 1:\n",
" display_image(frame, f\"First frame from camera {camera_id}\")\n",
" else:\n",
" print(f\"❌ Failed to read frame {frames_captured}\")\n",
" break\n",
" \n",
" # Cleanup\n",
" cap.release()\n",
" out.release()\n",
" \n",
" elapsed_time = time.time() - start_time\n",
" actual_fps = frames_captured / elapsed_time\n",
" \n",
" print(f\"✅ Video saved: {video_filename}\")\n",
" print(f\"📊 Captured {frames_captured} frames in {elapsed_time:.2f}s\")\n",
" print(f\"📊 Actual FPS: {actual_fps:.2f}\")\n",
" \n",
" return video_filename\n",
"\n",
"# Test recording (change camera_id as needed)\n",
"if cameras: # Only test if cameras were found\n",
" test_camera = cameras[0] # Use first available camera\n",
" video_file = test_video_recording(test_camera, duration_seconds=3)\n",
"else:\n",
" print(\"⚠️ No cameras available for video test\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "USDA-vision-cameras",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.2"
}
},
"nbformat": 4,
"nbformat_minor": 4
}

View File

@@ -0,0 +1,426 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": 25,
"id": "ba958c88",
"metadata": {},
"outputs": [],
"source": [
"# coding=utf-8\n",
"\"\"\"\n",
"Test script to help find optimal exposure settings for your GigE camera.\n",
"This script captures a single test image with different exposure settings.\n",
"\"\"\"\n",
"import sys\n",
"\n",
"sys.path.append(\"./python demo\")\n",
"import os\n",
"import mvsdk\n",
"import numpy as np\n",
"import cv2\n",
"import platform\n",
"from datetime import datetime\n",
"\n",
"# Add the python demo directory to path\n"
]
},
{
"cell_type": "code",
"execution_count": 26,
"id": "23f1dc49",
"metadata": {},
"outputs": [],
"source": [
"def test_exposure_settings():\n",
" \"\"\"\n",
" Test different exposure settings to find optimal values\n",
" \"\"\"\n",
" # Initialize SDK\n",
" try:\n",
" mvsdk.CameraSdkInit(1)\n",
" print(\"SDK initialized successfully\")\n",
" except Exception as e:\n",
" print(f\"SDK initialization failed: {e}\")\n",
" return False\n",
"\n",
" # Enumerate cameras\n",
" DevList = mvsdk.CameraEnumerateDevice()\n",
" nDev = len(DevList)\n",
"\n",
" if nDev < 1:\n",
" print(\"No camera was found!\")\n",
" return False\n",
"\n",
" print(f\"Found {nDev} camera(s):\")\n",
" for i, DevInfo in enumerate(DevList):\n",
" print(f\" {i}: {DevInfo.GetFriendlyName()} ({DevInfo.GetPortType()})\")\n",
"\n",
" # Use first camera\n",
" DevInfo = DevList[0]\n",
" print(f\"\\nSelected camera: {DevInfo.GetFriendlyName()}\")\n",
"\n",
" # Initialize camera\n",
" try:\n",
" hCamera = mvsdk.CameraInit(DevInfo, -1, -1)\n",
" print(\"Camera initialized successfully\")\n",
" except mvsdk.CameraException as e:\n",
" print(f\"CameraInit Failed({e.error_code}): {e.message}\")\n",
" return False\n",
"\n",
" try:\n",
" # Get camera capabilities\n",
" cap = mvsdk.CameraGetCapability(hCamera)\n",
" monoCamera = cap.sIspCapacity.bMonoSensor != 0\n",
" print(f\"Camera type: {'Monochrome' if monoCamera else 'Color'}\")\n",
"\n",
" # Get camera ranges\n",
" try:\n",
" exp_min, exp_max, exp_step = mvsdk.CameraGetExposureTimeRange(hCamera)\n",
" print(f\"Exposure time range: {exp_min:.1f} - {exp_max:.1f} μs\")\n",
"\n",
" gain_min, gain_max, gain_step = mvsdk.CameraGetAnalogGainXRange(hCamera)\n",
" print(f\"Analog gain range: {gain_min:.2f} - {gain_max:.2f}x\")\n",
"\n",
" print(\"whatever this is: \", mvsdk.CameraGetAnalogGainXRange(hCamera))\n",
" except Exception as e:\n",
" print(f\"Could not get camera ranges: {e}\")\n",
" exp_min, exp_max = 100, 100000\n",
" gain_min, gain_max = 1.0, 4.0\n",
"\n",
" # Set output format\n",
" if monoCamera:\n",
" mvsdk.CameraSetIspOutFormat(hCamera, mvsdk.CAMERA_MEDIA_TYPE_MONO8)\n",
" else:\n",
" mvsdk.CameraSetIspOutFormat(hCamera, mvsdk.CAMERA_MEDIA_TYPE_BGR8)\n",
"\n",
" # Set camera to continuous capture mode\n",
" mvsdk.CameraSetTriggerMode(hCamera, 0)\n",
" mvsdk.CameraSetAeState(hCamera, 0) # Disable auto exposure\n",
"\n",
" # Start camera\n",
" mvsdk.CameraPlay(hCamera)\n",
"\n",
" # Allocate frame buffer\n",
" FrameBufferSize = cap.sResolutionRange.iWidthMax * cap.sResolutionRange.iHeightMax * (1 if monoCamera else 3)\n",
" pFrameBuffer = mvsdk.CameraAlignMalloc(FrameBufferSize, 16)\n",
"\n",
" # Create test directory\n",
" if not os.path.exists(\"exposure_tests\"):\n",
" os.makedirs(\"exposure_tests\")\n",
"\n",
" print(\"\\nTesting different exposure settings...\")\n",
" print(\"=\" * 50)\n",
"\n",
" # Test different exposure times (in microseconds)\n",
" exposure_times = [100, 200, 500, 1000, 2000, 5000, 10000, 20000] # 0.5ms to 20ms\n",
" analog_gains = [2.5, 5.0, 10.0, 16.0] # Start with 1x gain\n",
"\n",
" test_count = 0\n",
" for exp_time in exposure_times:\n",
" for gain in analog_gains:\n",
" # Clamp values to valid ranges\n",
" exp_time = max(exp_min, min(exp_max, exp_time))\n",
" gain = max(gain_min, min(gain_max, gain))\n",
"\n",
" print(f\"\\nTest {test_count + 1}: Exposure={exp_time/1000:.1f}ms, Gain={gain:.1f}x\")\n",
"\n",
" # Set camera parameters\n",
" mvsdk.CameraSetExposureTime(hCamera, exp_time)\n",
" try:\n",
" mvsdk.CameraSetAnalogGainX(hCamera, gain)\n",
" except:\n",
" pass # Some cameras might not support this\n",
"\n",
" # Wait a moment for settings to take effect\n",
" import time\n",
"\n",
" time.sleep(0.1)\n",
"\n",
" # Capture image\n",
" try:\n",
" pRawData, FrameHead = mvsdk.CameraGetImageBuffer(hCamera, 2000)\n",
" mvsdk.CameraImageProcess(hCamera, pRawData, pFrameBuffer, FrameHead)\n",
" mvsdk.CameraReleaseImageBuffer(hCamera, pRawData)\n",
"\n",
" # Handle Windows image flip\n",
" if platform.system() == \"Windows\":\n",
" mvsdk.CameraFlipFrameBuffer(pFrameBuffer, FrameHead, 1)\n",
"\n",
" # Convert to numpy array\n",
" frame_data = (mvsdk.c_ubyte * FrameHead.uBytes).from_address(pFrameBuffer)\n",
" frame = np.frombuffer(frame_data, dtype=np.uint8)\n",
"\n",
" if FrameHead.uiMediaType == mvsdk.CAMERA_MEDIA_TYPE_MONO8:\n",
" frame = frame.reshape((FrameHead.iHeight, FrameHead.iWidth))\n",
" else:\n",
" frame = frame.reshape((FrameHead.iHeight, FrameHead.iWidth, 3))\n",
"\n",
" # Calculate image statistics\n",
" mean_brightness = np.mean(frame)\n",
" max_brightness = np.max(frame)\n",
"\n",
" # Save image\n",
" filename = f\"exposure_tests/test_{test_count+1:02d}_exp{exp_time/1000:.1f}ms_gain{gain:.1f}x.jpg\"\n",
" cv2.imwrite(filename, frame)\n",
"\n",
" # Provide feedback\n",
" status = \"\"\n",
" if mean_brightness < 50:\n",
" status = \"TOO DARK\"\n",
" elif mean_brightness > 200:\n",
" status = \"TOO BRIGHT\"\n",
" elif max_brightness >= 255:\n",
" status = \"OVEREXPOSED\"\n",
" else:\n",
" status = \"GOOD\"\n",
"\n",
" print(f\" → Saved: {filename}\")\n",
" print(f\" → Brightness: mean={mean_brightness:.1f}, max={max_brightness:.1f} [{status}]\")\n",
"\n",
" test_count += 1\n",
"\n",
" except mvsdk.CameraException as e:\n",
" print(f\" → Failed to capture: {e.message}\")\n",
"\n",
" print(f\"\\nCompleted {test_count} test captures!\")\n",
" print(\"Check the 'exposure_tests' directory to see the results.\")\n",
" print(\"\\nRecommendations:\")\n",
" print(\"- Look for images marked as 'GOOD' - these have optimal exposure\")\n",
" print(\"- If all images are 'TOO BRIGHT', try lower exposure times or gains\")\n",
" print(\"- If all images are 'TOO DARK', try higher exposure times or gains\")\n",
" print(\"- Avoid 'OVEREXPOSED' images as they have clipped highlights\")\n",
"\n",
" # Cleanup\n",
" mvsdk.CameraAlignFree(pFrameBuffer)\n",
"\n",
" finally:\n",
" # Close camera\n",
" mvsdk.CameraUnInit(hCamera)\n",
" print(\"\\nCamera closed\")\n",
"\n",
" return True"
]
},
{
"cell_type": "code",
"execution_count": 27,
"id": "2891b5bf",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"GigE Camera Exposure Test Script\n",
"========================================\n",
"This script will test different exposure settings and save sample images.\n",
"Use this to find the optimal settings for your lighting conditions.\n",
"\n",
"SDK initialized successfully\n",
"Found 2 camera(s):\n",
" 0: Blower-Yield-Cam (NET-100M-192.168.1.204)\n",
" 1: Cracker-Cam (NET-1000M-192.168.1.246)\n",
"\n",
"Selected camera: Blower-Yield-Cam\n",
"Camera initialized successfully\n",
"Camera type: Color\n",
"Exposure time range: 8.0 - 1048568.0 μs\n",
"Analog gain range: 2.50 - 16.50x\n",
"whatever this is: (2.5, 16.5, 0.5)\n",
"\n",
"Testing different exposure settings...\n",
"==================================================\n",
"\n",
"Test 1: Exposure=0.1ms, Gain=2.5x\n",
" → Saved: exposure_tests/test_01_exp0.1ms_gain2.5x.jpg\n",
" → Brightness: mean=94.1, max=255.0 [OVEREXPOSED]\n",
"\n",
"Test 2: Exposure=0.1ms, Gain=5.0x\n",
" → Saved: exposure_tests/test_02_exp0.1ms_gain5.0x.jpg\n",
" → Brightness: mean=13.7, max=173.0 [TOO DARK]\n",
"\n",
"Test 3: Exposure=0.1ms, Gain=10.0x\n",
" → Saved: exposure_tests/test_03_exp0.1ms_gain10.0x.jpg\n",
" → Brightness: mean=14.1, max=255.0 [TOO DARK]\n",
"\n",
"Test 4: Exposure=0.1ms, Gain=16.0x\n",
" → Saved: exposure_tests/test_04_exp0.1ms_gain16.0x.jpg\n",
" → Brightness: mean=18.2, max=255.0 [TOO DARK]\n",
"\n",
"Test 5: Exposure=0.2ms, Gain=2.5x\n",
" → Saved: exposure_tests/test_05_exp0.2ms_gain2.5x.jpg\n",
" → Brightness: mean=22.1, max=255.0 [TOO DARK]\n",
"\n",
"Test 6: Exposure=0.2ms, Gain=5.0x\n",
" → Saved: exposure_tests/test_06_exp0.2ms_gain5.0x.jpg\n",
" → Brightness: mean=19.5, max=255.0 [TOO DARK]\n",
"\n",
"Test 7: Exposure=0.2ms, Gain=10.0x\n",
" → Saved: exposure_tests/test_07_exp0.2ms_gain10.0x.jpg\n",
" → Brightness: mean=25.3, max=255.0 [TOO DARK]\n",
"\n",
"Test 8: Exposure=0.2ms, Gain=16.0x\n",
" → Saved: exposure_tests/test_08_exp0.2ms_gain16.0x.jpg\n",
" → Brightness: mean=36.6, max=255.0 [TOO DARK]\n",
"\n",
"Test 9: Exposure=0.5ms, Gain=2.5x\n",
" → Saved: exposure_tests/test_09_exp0.5ms_gain2.5x.jpg\n",
" → Brightness: mean=55.8, max=255.0 [OVEREXPOSED]\n",
"\n",
"Test 10: Exposure=0.5ms, Gain=5.0x\n",
" → Saved: exposure_tests/test_10_exp0.5ms_gain5.0x.jpg\n",
" → Brightness: mean=38.5, max=255.0 [TOO DARK]\n",
"\n",
"Test 11: Exposure=0.5ms, Gain=10.0x\n",
" → Saved: exposure_tests/test_11_exp0.5ms_gain10.0x.jpg\n",
" → Brightness: mean=60.2, max=255.0 [OVEREXPOSED]\n",
"\n",
"Test 12: Exposure=0.5ms, Gain=16.0x\n",
" → Saved: exposure_tests/test_12_exp0.5ms_gain16.0x.jpg\n",
" → Brightness: mean=99.3, max=255.0 [OVEREXPOSED]\n",
"\n",
"Test 13: Exposure=1.0ms, Gain=2.5x\n",
" → Saved: exposure_tests/test_13_exp1.0ms_gain2.5x.jpg\n",
" → Brightness: mean=121.1, max=255.0 [OVEREXPOSED]\n",
"\n",
"Test 14: Exposure=1.0ms, Gain=5.0x\n",
" → Saved: exposure_tests/test_14_exp1.0ms_gain5.0x.jpg\n",
" → Brightness: mean=68.8, max=255.0 [OVEREXPOSED]\n",
"\n",
"Test 15: Exposure=1.0ms, Gain=10.0x\n",
" → Saved: exposure_tests/test_15_exp1.0ms_gain10.0x.jpg\n",
" → Brightness: mean=109.6, max=255.0 [OVEREXPOSED]\n",
"\n",
"Test 16: Exposure=1.0ms, Gain=16.0x\n",
" → Saved: exposure_tests/test_16_exp1.0ms_gain16.0x.jpg\n",
" → Brightness: mean=148.7, max=255.0 [OVEREXPOSED]\n",
"\n",
"Test 17: Exposure=2.0ms, Gain=2.5x\n",
" → Saved: exposure_tests/test_17_exp2.0ms_gain2.5x.jpg\n",
" → Brightness: mean=171.9, max=255.0 [OVEREXPOSED]\n",
"\n",
"Test 18: Exposure=2.0ms, Gain=5.0x\n",
" → Saved: exposure_tests/test_18_exp2.0ms_gain5.0x.jpg\n",
" → Brightness: mean=117.9, max=255.0 [OVEREXPOSED]\n",
"\n",
"Test 19: Exposure=2.0ms, Gain=10.0x\n",
" → Saved: exposure_tests/test_19_exp2.0ms_gain10.0x.jpg\n",
" → Brightness: mean=159.0, max=255.0 [OVEREXPOSED]\n",
"\n",
"Test 20: Exposure=2.0ms, Gain=16.0x\n",
" → Saved: exposure_tests/test_20_exp2.0ms_gain16.0x.jpg\n",
" → Brightness: mean=195.7, max=255.0 [OVEREXPOSED]\n",
"\n",
"Test 21: Exposure=5.0ms, Gain=2.5x\n",
" → Saved: exposure_tests/test_21_exp5.0ms_gain2.5x.jpg\n",
" → Brightness: mean=214.6, max=255.0 [TOO BRIGHT]\n",
"\n",
"Test 22: Exposure=5.0ms, Gain=5.0x\n",
" → Saved: exposure_tests/test_22_exp5.0ms_gain5.0x.jpg\n",
" → Brightness: mean=180.2, max=255.0 [OVEREXPOSED]\n",
"\n",
"Test 23: Exposure=5.0ms, Gain=10.0x\n",
" → Saved: exposure_tests/test_23_exp5.0ms_gain10.0x.jpg\n",
" → Brightness: mean=214.6, max=255.0 [TOO BRIGHT]\n",
"\n",
"Test 24: Exposure=5.0ms, Gain=16.0x\n",
" → Saved: exposure_tests/test_24_exp5.0ms_gain16.0x.jpg\n",
" → Brightness: mean=239.6, max=255.0 [TOO BRIGHT]\n",
"\n",
"Test 25: Exposure=10.0ms, Gain=2.5x\n",
" → Saved: exposure_tests/test_25_exp10.0ms_gain2.5x.jpg\n",
" → Brightness: mean=247.5, max=255.0 [TOO BRIGHT]\n",
"\n",
"Test 26: Exposure=10.0ms, Gain=5.0x\n",
" → Saved: exposure_tests/test_26_exp10.0ms_gain5.0x.jpg\n",
" → Brightness: mean=252.4, max=255.0 [TOO BRIGHT]\n",
"\n",
"Test 27: Exposure=10.0ms, Gain=10.0x\n",
" → Saved: exposure_tests/test_27_exp10.0ms_gain10.0x.jpg\n",
" → Brightness: mean=218.9, max=255.0 [TOO BRIGHT]\n",
"\n",
"Test 28: Exposure=10.0ms, Gain=16.0x\n",
" → Saved: exposure_tests/test_28_exp10.0ms_gain16.0x.jpg\n",
" → Brightness: mean=250.8, max=255.0 [TOO BRIGHT]\n",
"\n",
"Test 29: Exposure=20.0ms, Gain=2.5x\n",
" → Saved: exposure_tests/test_29_exp20.0ms_gain2.5x.jpg\n",
" → Brightness: mean=252.4, max=255.0 [TOO BRIGHT]\n",
"\n",
"Test 30: Exposure=20.0ms, Gain=5.0x\n",
" → Saved: exposure_tests/test_30_exp20.0ms_gain5.0x.jpg\n",
" → Brightness: mean=244.4, max=255.0 [TOO BRIGHT]\n",
"\n",
"Test 31: Exposure=20.0ms, Gain=10.0x\n",
" → Saved: exposure_tests/test_31_exp20.0ms_gain10.0x.jpg\n",
" → Brightness: mean=251.5, max=255.0 [TOO BRIGHT]\n",
"\n",
"Test 32: Exposure=20.0ms, Gain=16.0x\n",
" → Saved: exposure_tests/test_32_exp20.0ms_gain16.0x.jpg\n",
" → Brightness: mean=253.4, max=255.0 [TOO BRIGHT]\n",
"\n",
"Completed 32 test captures!\n",
"Check the 'exposure_tests' directory to see the results.\n",
"\n",
"Recommendations:\n",
"- Look for images marked as 'GOOD' - these have optimal exposure\n",
"- If all images are 'TOO BRIGHT', try lower exposure times or gains\n",
"- If all images are 'TOO DARK', try higher exposure times or gains\n",
"- Avoid 'OVEREXPOSED' images as they have clipped highlights\n",
"\n",
"Camera closed\n",
"\n",
"Testing completed successfully!\n"
]
}
],
"source": [
"\n",
"\n",
"if __name__ == \"__main__\":\n",
" print(\"GigE Camera Exposure Test Script\")\n",
" print(\"=\" * 40)\n",
" print(\"This script will test different exposure settings and save sample images.\")\n",
" print(\"Use this to find the optimal settings for your lighting conditions.\")\n",
" print()\n",
"\n",
" success = test_exposure_settings()\n",
"\n",
" if success:\n",
" print(\"\\nTesting completed successfully!\")\n",
" else:\n",
" print(\"\\nTesting failed!\")\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ead8d889",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "cc_pecan",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.13.5"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -0,0 +1,385 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Advanced GigE Camera Configuration\n",
"\n",
"This notebook provides advanced testing and configuration for GigE cameras.\n",
"\n",
"## Features:\n",
"- Network interface detection\n",
"- GigE camera discovery\n",
"- Camera parameter configuration\n",
"- Performance testing\n",
"- Dual camera synchronization testing"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import cv2\n",
"import numpy as np\n",
"import matplotlib.pyplot as plt\n",
"import subprocess\n",
"import socket\n",
"import threading\n",
"import time\n",
"from datetime import datetime\n",
"import os\n",
"from pathlib import Path\n",
"import json\n",
"\n",
"print(\"✅ Imports successful!\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Network Interface Detection"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def get_network_interfaces():\n",
" \"\"\"Get network interface information\"\"\"\n",
" try:\n",
" result = subprocess.run(['ip', 'addr', 'show'], capture_output=True, text=True)\n",
" print(\"🌐 Network Interfaces:\")\n",
" print(result.stdout)\n",
" \n",
" # Also check for GigE specific interfaces\n",
" result2 = subprocess.run(['ifconfig'], capture_output=True, text=True)\n",
" if result2.returncode == 0:\n",
" print(\"\\n📡 Interface Configuration:\")\n",
" print(result2.stdout)\n",
" except Exception as e:\n",
" print(f\"❌ Error getting network info: {e}\")\n",
"\n",
"get_network_interfaces()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## GigE Camera Discovery"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def discover_gige_cameras():\n",
" \"\"\"Attempt to discover GigE cameras on the network\"\"\"\n",
" print(\"🔍 Discovering GigE cameras...\")\n",
" \n",
" # Try different methods to find GigE cameras\n",
" methods = [\n",
" \"OpenCV with different backends\",\n",
" \"Network scanning\",\n",
" \"GStreamer pipeline testing\"\n",
" ]\n",
" \n",
" print(\"\\n1. Testing OpenCV backends:\")\n",
" backends = [\n",
" (cv2.CAP_GSTREAMER, \"GStreamer\"),\n",
" (cv2.CAP_V4L2, \"V4L2\"),\n",
" (cv2.CAP_FFMPEG, \"FFmpeg\"),\n",
" (cv2.CAP_ANY, \"Default\")\n",
" ]\n",
" \n",
" for backend_id, backend_name in backends:\n",
" print(f\" Testing {backend_name}...\")\n",
" for cam_id in range(5):\n",
" try:\n",
" cap = cv2.VideoCapture(cam_id, backend_id)\n",
" if cap.isOpened():\n",
" ret, frame = cap.read()\n",
" if ret:\n",
" print(f\" ✅ Camera {cam_id} accessible via {backend_name}\")\n",
" print(f\" Resolution: {frame.shape[1]}x{frame.shape[0]}\")\n",
" cap.release()\n",
" except Exception as e:\n",
" pass\n",
" \n",
" print(\"\\n2. Testing GStreamer pipelines:\")\n",
" # Common GigE camera GStreamer pipelines\n",
" gstreamer_pipelines = [\n",
" \"v4l2src device=/dev/video0 ! videoconvert ! appsink\",\n",
" \"v4l2src device=/dev/video1 ! videoconvert ! appsink\",\n",
" \"tcambin ! videoconvert ! appsink\", # For TIS cameras\n",
" \"aravis ! videoconvert ! appsink\", # For Aravis-supported cameras\n",
" ]\n",
" \n",
" for pipeline in gstreamer_pipelines:\n",
" try:\n",
" print(f\" Testing: {pipeline}\")\n",
" cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)\n",
" if cap.isOpened():\n",
" ret, frame = cap.read()\n",
" if ret:\n",
" print(f\" ✅ Pipeline works! Frame shape: {frame.shape}\")\n",
" else:\n",
" print(f\" ⚠️ Pipeline opened but no frames\")\n",
" else:\n",
" print(f\" ❌ Pipeline failed\")\n",
" cap.release()\n",
" except Exception as e:\n",
" print(f\" ❌ Error: {e}\")\n",
"\n",
"discover_gige_cameras()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Camera Parameter Configuration"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def configure_camera_parameters(camera_id, backend=cv2.CAP_ANY):\n",
" \"\"\"Configure and test camera parameters\"\"\"\n",
" print(f\"⚙️ Configuring camera {camera_id}...\")\n",
" \n",
" cap = cv2.VideoCapture(camera_id, backend)\n",
" if not cap.isOpened():\n",
" print(f\"❌ Cannot open camera {camera_id}\")\n",
" return None\n",
" \n",
" # Get current parameters\n",
" current_params = {\n",
" 'width': cap.get(cv2.CAP_PROP_FRAME_WIDTH),\n",
" 'height': cap.get(cv2.CAP_PROP_FRAME_HEIGHT),\n",
" 'fps': cap.get(cv2.CAP_PROP_FPS),\n",
" 'brightness': cap.get(cv2.CAP_PROP_BRIGHTNESS),\n",
" 'contrast': cap.get(cv2.CAP_PROP_CONTRAST),\n",
" 'saturation': cap.get(cv2.CAP_PROP_SATURATION),\n",
" 'hue': cap.get(cv2.CAP_PROP_HUE),\n",
" 'gain': cap.get(cv2.CAP_PROP_GAIN),\n",
" 'exposure': cap.get(cv2.CAP_PROP_EXPOSURE),\n",
" 'auto_exposure': cap.get(cv2.CAP_PROP_AUTO_EXPOSURE),\n",
" 'white_balance': cap.get(cv2.CAP_PROP_WHITE_BALANCE_BLUE_U),\n",
" }\n",
" \n",
" print(\"📊 Current Camera Parameters:\")\n",
" for param, value in current_params.items():\n",
" print(f\" {param}: {value}\")\n",
" \n",
" # Test setting some parameters\n",
" print(\"\\n🔧 Testing parameter changes:\")\n",
" \n",
" # Try to set resolution (common GigE resolutions)\n",
" test_resolutions = [(1920, 1080), (1280, 720), (640, 480)]\n",
" for width, height in test_resolutions:\n",
" if cap.set(cv2.CAP_PROP_FRAME_WIDTH, width) and cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height):\n",
" actual_width = cap.get(cv2.CAP_PROP_FRAME_WIDTH)\n",
" actual_height = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)\n",
" print(f\" Resolution {width}x{height}: Set to {actual_width}x{actual_height}\")\n",
" break\n",
" \n",
" # Test FPS settings\n",
" for fps in [30, 60, 120]:\n",
" if cap.set(cv2.CAP_PROP_FPS, fps):\n",
" actual_fps = cap.get(cv2.CAP_PROP_FPS)\n",
" print(f\" FPS {fps}: Set to {actual_fps}\")\n",
" break\n",
" \n",
" # Capture test frame with new settings\n",
" ret, frame = cap.read()\n",
" if ret:\n",
" print(f\"\\n✅ Test frame captured: {frame.shape}\")\n",
" \n",
" # Display frame\n",
" plt.figure(figsize=(10, 6))\n",
" if len(frame.shape) == 3:\n",
" plt.imshow(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))\n",
" else:\n",
" plt.imshow(frame, cmap='gray')\n",
" plt.title(f\"Camera {camera_id} - Configured\")\n",
" plt.axis('off')\n",
" plt.show()\n",
" \n",
" # Save configuration and test image\n",
" timestamp = datetime.now().strftime(\"%Y%m%d_%H%M%S\")\n",
" \n",
" # Save image\n",
" img_path = f\"/storage/camera{camera_id}/configured_test_{timestamp}.jpg\"\n",
" cv2.imwrite(img_path, frame)\n",
" print(f\"💾 Test image saved: {img_path}\")\n",
" \n",
" # Save configuration\n",
" config_path = f\"/storage/camera{camera_id}/config_{timestamp}.json\"\n",
" with open(config_path, 'w') as f:\n",
" json.dump(current_params, f, indent=2)\n",
" print(f\"💾 Configuration saved: {config_path}\")\n",
" \n",
" cap.release()\n",
" return current_params\n",
"\n",
"# Test configuration (change camera_id as needed)\n",
"camera_to_configure = 0\n",
"config = configure_camera_parameters(camera_to_configure)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Dual Camera Testing"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def test_dual_cameras(camera1_id=0, camera2_id=1, duration=5):\n",
" \"\"\"Test simultaneous capture from two cameras\"\"\"\n",
" print(f\"📷📷 Testing dual camera capture (cameras {camera1_id} and {camera2_id})...\")\n",
" \n",
" # Open both cameras\n",
" cap1 = cv2.VideoCapture(camera1_id)\n",
" cap2 = cv2.VideoCapture(camera2_id)\n",
" \n",
" if not cap1.isOpened():\n",
" print(f\"❌ Cannot open camera {camera1_id}\")\n",
" return\n",
" \n",
" if not cap2.isOpened():\n",
" print(f\"❌ Cannot open camera {camera2_id}\")\n",
" cap1.release()\n",
" return\n",
" \n",
" print(\"✅ Both cameras opened successfully\")\n",
" \n",
" # Capture test frames\n",
" ret1, frame1 = cap1.read()\n",
" ret2, frame2 = cap2.read()\n",
" \n",
" if ret1 and ret2:\n",
" print(f\"📊 Camera {camera1_id}: {frame1.shape}\")\n",
" print(f\"📊 Camera {camera2_id}: {frame2.shape}\")\n",
" \n",
" # Display both frames side by side\n",
" fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(15, 6))\n",
" \n",
" if len(frame1.shape) == 3:\n",
" ax1.imshow(cv2.cvtColor(frame1, cv2.COLOR_BGR2RGB))\n",
" else:\n",
" ax1.imshow(frame1, cmap='gray')\n",
" ax1.set_title(f\"Camera {camera1_id}\")\n",
" ax1.axis('off')\n",
" \n",
" if len(frame2.shape) == 3:\n",
" ax2.imshow(cv2.cvtColor(frame2, cv2.COLOR_BGR2RGB))\n",
" else:\n",
" ax2.imshow(frame2, cmap='gray')\n",
" ax2.set_title(f\"Camera {camera2_id}\")\n",
" ax2.axis('off')\n",
" \n",
" plt.tight_layout()\n",
" plt.show()\n",
" \n",
" # Save test images\n",
" timestamp = datetime.now().strftime(\"%Y%m%d_%H%M%S\")\n",
" cv2.imwrite(f\"/storage/camera1/dual_test_{timestamp}.jpg\", frame1)\n",
" cv2.imwrite(f\"/storage/camera2/dual_test_{timestamp}.jpg\", frame2)\n",
" print(f\"💾 Dual camera test images saved with timestamp {timestamp}\")\n",
" \n",
" else:\n",
" print(\"❌ Failed to capture from one or both cameras\")\n",
" \n",
" # Test synchronized recording\n",
" print(f\"\\n🎥 Testing synchronized recording for {duration} seconds...\")\n",
" \n",
" # Setup video writers\n",
" timestamp = datetime.now().strftime(\"%Y%m%d_%H%M%S\")\n",
" \n",
" fourcc = cv2.VideoWriter_fourcc(*'mp4v')\n",
" fps = 30\n",
" \n",
" if ret1:\n",
" h1, w1 = frame1.shape[:2]\n",
" out1 = cv2.VideoWriter(f\"/storage/camera1/sync_test_{timestamp}.mp4\", fourcc, fps, (w1, h1))\n",
" \n",
" if ret2:\n",
" h2, w2 = frame2.shape[:2]\n",
" out2 = cv2.VideoWriter(f\"/storage/camera2/sync_test_{timestamp}.mp4\", fourcc, fps, (w2, h2))\n",
" \n",
" # Record synchronized video\n",
" start_time = time.time()\n",
" frame_count = 0\n",
" \n",
" while time.time() - start_time < duration:\n",
" ret1, frame1 = cap1.read()\n",
" ret2, frame2 = cap2.read()\n",
" \n",
" if ret1 and ret2:\n",
" out1.write(frame1)\n",
" out2.write(frame2)\n",
" frame_count += 1\n",
" else:\n",
" print(f\"⚠️ Frame drop at frame {frame_count}\")\n",
" \n",
" # Cleanup\n",
" cap1.release()\n",
" cap2.release()\n",
" if 'out1' in locals():\n",
" out1.release()\n",
" if 'out2' in locals():\n",
" out2.release()\n",
" \n",
" elapsed = time.time() - start_time\n",
" actual_fps = frame_count / elapsed\n",
" \n",
" print(f\"✅ Synchronized recording complete\")\n",
" print(f\"📊 Recorded {frame_count} frames in {elapsed:.2f}s\")\n",
" print(f\"📊 Actual FPS: {actual_fps:.2f}\")\n",
" print(f\"💾 Videos saved with timestamp {timestamp}\")\n",
"\n",
"# Test dual cameras (adjust camera IDs as needed)\n",
"test_dual_cameras(0, 1, duration=3)"
]
}
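,
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Performance Testing\n",
"\n",
"A minimal sketch for measuring raw capture throughput. It assumes a camera is reachable through OpenCV's default backend; adjust `camera_id` for your setup."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def benchmark_capture(camera_id=0, num_frames=100):\n",
"    \"\"\"Measure raw capture throughput (FPS) for a single camera\"\"\"\n",
"    cap = cv2.VideoCapture(camera_id)\n",
"    if not cap.isOpened():\n",
"        print(f\"❌ Cannot open camera {camera_id}\")\n",
"        return None\n",
"\n",
"    # Warm up the pipeline before timing\n",
"    for _ in range(5):\n",
"        cap.read()\n",
"\n",
"    start = time.time()\n",
"    captured = 0\n",
"    for _ in range(num_frames):\n",
"        ret, _ = cap.read()\n",
"        if ret:\n",
"            captured += 1\n",
"    elapsed = time.time() - start\n",
"    cap.release()\n",
"\n",
"    fps = captured / elapsed if elapsed > 0 else 0.0\n",
"    print(f\"📊 Captured {captured}/{num_frames} frames in {elapsed:.2f}s ({fps:.1f} FPS)\")\n",
"    return fps\n",
"\n",
"benchmark_capture(0)"
]
}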
],
"metadata": {
"kernelspec": {
"display_name": "usda-vision-cameras",
"language": "python",
"name": "usda-vision-cameras"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.0"
}
},
"nbformat": 4,
"nbformat_minor": 4
}

View File

@@ -0,0 +1,146 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": 1,
"id": "3b92c632",
"metadata": {},
"outputs": [],
"source": [
"import paho.mqtt.client as mqtt\n",
"import time\n"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "a6753fb1",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/tmp/ipykernel_2342/243927247.py:34: DeprecationWarning: Callback API version 1 is deprecated, update to latest version\n",
" client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION1) # Use VERSION1 for broader compatibility\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Connecting to MQTT broker at 192.168.1.110:1883...\n",
"Successfully connected to MQTT Broker!\n",
"Subscribed to topic: 'vision/vibratory_conveyor/state'\n",
"Listening for messages... (Press Ctrl+C to stop)\n",
"\n",
"--- MQTT MESSAGE RECEIVED! ---\n",
" Topic: vision/vibratory_conveyor/state\n",
" Payload: on\n",
" Time: 2025-07-25 21:03:21\n",
"------------------------------\n",
"\n",
"\n",
"--- MQTT MESSAGE RECEIVED! ---\n",
" Topic: vision/vibratory_conveyor/state\n",
" Payload: off\n",
" Time: 2025-07-25 21:05:26\n",
"------------------------------\n",
"\n",
"\n",
"Stopping MQTT listener.\n"
]
}
],
"source": [
"\n",
"# --- MQTT Broker Configuration ---\n",
"# Your Home Assistant's IP address (where your MQTT broker is running)\n",
"MQTT_BROKER_HOST = \"192.168.1.110\"\n",
"MQTT_BROKER_PORT = 1883\n",
"# IMPORTANT: Replace with your actual MQTT broker username and password if you have one set up\n",
"# (These are NOT your Home Assistant login credentials, but for the Mosquitto add-on, if used)\n",
"# MQTT_BROKER_USERNAME = \"pecan\" # e.g., \"homeassistant_mqtt_user\"\n",
"# MQTT_BROKER_PASSWORD = \"whatever\" # e.g., \"SuperSecurePassword123!\"\n",
"\n",
"# --- Topic to Subscribe To ---\n",
"# This MUST exactly match the topic you set in your Home Assistant automation\n",
"MQTT_TOPIC = \"vision/vibratory_conveyor/state\" # <<<< Make sure this is correct!\n",
"MQTT_TOPIC = \"vision/blower_separator/state\" # <<<< Make sure this is correct!\n",
"\n",
"# The callback for when the client receives a CONNACK response from the server.\n",
"def on_connect(client, userdata, flags, rc):\n",
" if rc == 0:\n",
" print(\"Successfully connected to MQTT Broker!\")\n",
" client.subscribe(MQTT_TOPIC)\n",
" print(f\"Subscribed to topic: '{MQTT_TOPIC}'\")\n",
" print(\"Listening for messages... (Press Ctrl+C to stop)\")\n",
" else:\n",
" print(f\"Failed to connect, return code {rc}\\n\")\n",
"\n",
"# The callback for when a PUBLISH message is received from the server.\n",
"def on_message(client, userdata, msg):\n",
" received_payload = msg.payload.decode()\n",
" print(f\"\\n--- MQTT MESSAGE RECEIVED! ---\")\n",
" print(f\" Topic: {msg.topic}\")\n",
" print(f\" Payload: {received_payload}\")\n",
" print(f\" Time: {time.strftime('%Y-%m-%d %H:%M:%S')}\")\n",
" print(f\"------------------------------\\n\")\n",
"\n",
"# Create an MQTT client instance\n",
"client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION1) # Use VERSION1 for broader compatibility\n",
"\n",
"# Set callback functions\n",
"client.on_connect = on_connect\n",
"client.on_message = on_message\n",
"\n",
"# Set username and password if required\n",
"# (Only uncomment and fill these if your MQTT broker requires authentication)\n",
"# client.username_pw_set(MQTT_BROKER_USERNAME, MQTT_BROKER_PASSWORD)\n",
"\n",
"try:\n",
" # Attempt to connect to the MQTT broker\n",
" print(f\"Connecting to MQTT broker at {MQTT_BROKER_HOST}:{MQTT_BROKER_PORT}...\")\n",
" client.connect(MQTT_BROKER_HOST, MQTT_BROKER_PORT, 60)\n",
"\n",
" # Start the MQTT loop. This runs in the background and processes messages.\n",
" client.loop_forever()\n",
"\n",
"except KeyboardInterrupt:\n",
" print(\"\\nStopping MQTT listener.\")\n",
" client.disconnect() # Disconnect gracefully\n",
"except Exception as e:\n",
" print(f\"An unexpected error occurred: {e}\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "56531671",
"metadata": {},
"outputs": [],
"source": []
}
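,
{
"cell_type": "markdown",
"id": "pubtestmd01",
"metadata": {},
"source": [
"To exercise the listener above without waiting for Home Assistant, the cell below publishes a one-off test message to the same topic. This is a minimal sketch; it assumes the broker at `MQTT_BROKER_HOST` accepts anonymous publishes (add `username_pw_set` if yours does not)."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "pubtestcode1",
"metadata": {},
"outputs": [],
"source": [
"# Publish a one-off test payload so the listener above has something to receive\n",
"pub = mqtt.Client(mqtt.CallbackAPIVersion.VERSION1)\n",
"pub.connect(MQTT_BROKER_HOST, MQTT_BROKER_PORT, 60)\n",
"pub.loop_start()  # background network loop to flush the publish\n",
"pub.publish(MQTT_TOPIC, \"on\", qos=1)\n",
"time.sleep(1)\n",
"pub.loop_stop()\n",
"pub.disconnect()\n",
"print(f\"Published test payload 'on' to {MQTT_TOPIC}\")"
]
}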
],
"metadata": {
"kernelspec": {
"display_name": "USDA-vision-cameras",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.2"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -0,0 +1,175 @@
# Instructions for AI Agent: Auto-Recording Feature Integration
## 🎯 Task Overview
Update the React application to support the new auto-recording feature that has been added to the USDA Vision Camera System backend.
## 📋 What You Need to Know
### System Context
- **Camera 1** monitors the **vibratory conveyor** (conveyor/cracker cam)
- **Camera 2** monitors the **blower separator** machine
- Auto-recording automatically starts when machines turn ON and stops when they turn OFF
- The system includes retry logic for failed recording attempts
- Manual recording always takes precedence over auto-recording
### New Backend Capabilities
The backend now supports:
1. **Automatic recording** triggered by MQTT machine state changes
2. **Retry mechanism** for failed recording attempts (configurable retries and delays)
3. **Status tracking** for auto-recording state, failures, and attempts
4. **API endpoints** for enabling/disabling and monitoring auto-recording
## 🔧 Required React App Changes
### 1. Update TypeScript Interfaces
Add these new fields to existing `CameraStatusResponse`:
```typescript
interface CameraStatusResponse {
// ... existing fields
auto_recording_enabled: boolean;
auto_recording_active: boolean;
auto_recording_failure_count: number;
auto_recording_last_attempt?: string;
auto_recording_last_error?: string;
}
```
Add new response types:
```typescript
interface AutoRecordingConfigResponse {
success: boolean;
message: string;
camera_name: string;
enabled: boolean;
}
interface AutoRecordingStatusResponse {
running: boolean;
auto_recording_enabled: boolean;
retry_queue: Record<string, any>;
enabled_cameras: string[];
}
```
### 2. Add New API Endpoints
```http
# Enable auto-recording for a camera
POST /cameras/{camera_name}/auto-recording/enable

# Disable auto-recording for a camera
POST /cameras/{camera_name}/auto-recording/disable

# Get overall auto-recording system status
GET /auto-recording/status
```
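A minimal sketch of the corresponding service functions (assumptions: a fetch-based API layer, and an `API_BASE_URL` constant that your existing client code may define under another name):
```typescript
const API_BASE_URL = process.env.REACT_APP_CAMERA_API_URL ?? 'http://localhost:8000';

// Enable or disable auto-recording for one camera.
export async function setAutoRecording(
  cameraName: string,
  enabled: boolean
): Promise<AutoRecordingConfigResponse> {
  const action = enabled ? 'enable' : 'disable';
  const response = await fetch(
    `${API_BASE_URL}/cameras/${cameraName}/auto-recording/${action}`,
    { method: 'POST' }
  );
  if (!response.ok) {
    throw new Error(`Failed to ${action} auto-recording: ${response.status}`);
  }
  return response.json();
}

// Fetch the system-wide auto-recording status.
export async function getAutoRecordingStatus(): Promise<AutoRecordingStatusResponse> {
  const response = await fetch(`${API_BASE_URL}/auto-recording/status`);
  if (!response.ok) {
    throw new Error(`Failed to fetch auto-recording status: ${response.status}`);
  }
  return response.json();
}
```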
### 3. UI Components to Add/Update
#### Camera Status Display
- Add auto-recording status badge/indicator
- Show auto-recording enabled/disabled state
- Display failure count if > 0
- Show last error message if any
- Distinguish between manual and auto-recording states
#### Auto-Recording Controls
- Toggle switch to enable/disable auto-recording per camera
- System-wide auto-recording status display
- Retry queue information
- Machine state correlation display
#### Error Handling
- Clear display of auto-recording failures
- Retry attempt information
- Last attempt timestamp
- Quick retry/reset actions
### 4. Visual Design Guidelines
**Status Priority (highest to lowest; see the helper sketch below):**
1. Manual Recording (red/prominent) - user initiated
2. Auto-Recording Active (green) - machine ON, recording
3. Auto-Recording Enabled (blue) - ready but machine OFF
4. Auto-Recording Disabled (gray) - feature disabled
**Machine Correlation:**
- Show machine name next to camera (e.g., "Vibratory Conveyor", "Blower Separator")
- Display machine ON/OFF status
- Alert if machine is ON but auto-recording failed
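One way to encode the status priority as a helper (a sketch; the field names come from the interfaces above, and treating `is_recording` without `auto_recording_active` as manual recording is an assumption about how the backend reports state):
```typescript
type RecordingBadge = 'manual-recording' | 'auto-active' | 'auto-enabled' | 'auto-disabled';

// Maps camera status fields to a single badge, highest priority first.
function getRecordingBadge(camera: CameraStatusResponse): RecordingBadge {
  // Assumption: is_recording=true with auto_recording_active=false means a
  // user-initiated (manual) recording.
  if (camera.is_recording && !camera.auto_recording_active) return 'manual-recording';
  if (camera.auto_recording_active) return 'auto-active';
  if (camera.auto_recording_enabled) return 'auto-enabled';
  return 'auto-disabled';
}
```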
## 🎨 Specific Implementation Tasks
### Task 1: Update Camera Cards
- Add auto-recording status indicators
- Add enable/disable toggle controls
- Show machine state correlation
- Display failure information when relevant
### Task 2: Create Auto-Recording Dashboard
- Overall system status
- List of enabled cameras
- Active retry queue display
- Recent events/errors
### Task 3: Update Recording Status Logic
- Distinguish between manual and auto-recording
- Show appropriate controls based on recording type
- Handle manual override scenarios
### Task 4: Add Error Handling
- Display auto-recording failures clearly
- Show retry attempts and timing
- Provide manual retry options
## 📱 User Experience Requirements
### Key Behaviors
1. **Non-Intrusive:** Auto-recording status shouldn't clutter the main interface
2. **Clear Hierarchy:** Manual controls should be more prominent than auto-recording
3. **Informative:** Users should understand why recording started/stopped
4. **Actionable:** Clear options to enable/disable or retry failed attempts
### Mobile Considerations
- Auto-recording controls should work well on mobile
- Status information should be readable on small screens
- Consider collapsible sections for detailed information
## 🔍 Testing Requirements
Ensure the React app correctly handles:
- [ ] Toggling auto-recording on/off per camera
- [ ] Displaying real-time status updates
- [ ] Showing error states and retry information
- [ ] Manual recording override scenarios
- [ ] Machine state changes and correlation
- [ ] Mobile interface functionality
## 📚 Reference Files
Key files to review for implementation details:
- `AUTO_RECORDING_FEATURE_GUIDE.md` - Comprehensive technical details
- `api-endpoints.http` - API endpoint documentation
- `config.json` - Configuration structure
- `usda_vision_system/api/models.py` - Response type definitions
## 🎯 Success Criteria
The React app should:
1. **Display** auto-recording status for each camera clearly
2. **Allow** users to enable/disable auto-recording per camera
3. **Show** machine state correlation and recording triggers
4. **Handle** error states and retry scenarios gracefully
5. **Maintain** existing manual recording functionality
6. **Provide** clear visual hierarchy between manual and auto-recording
## 💡 Implementation Tips
1. **Start Small:** Begin with basic status display, then add controls
2. **Use Existing Patterns:** Follow the current app's design patterns
3. **Test Incrementally:** Test each feature as you add it
4. **Consider State Management:** Update your state management to handle new data
5. **Mobile First:** Ensure mobile usability from the start
The goal is to seamlessly integrate auto-recording capabilities while maintaining the existing user experience and adding valuable automation features for the camera operators.

View File

@@ -0,0 +1,595 @@
# 🤖 AI Integration Guide: USDA Vision Camera Streaming for React Projects
This guide is specifically designed for AI assistants to understand and implement the USDA Vision Camera streaming functionality in React applications.
## 📋 System Overview
The USDA Vision Camera system provides live video streaming through REST API endpoints. The streaming uses MJPEG format which is natively supported by HTML `<img>` tags and can be easily integrated into React components.
### Key Characteristics:
- **Base URL**: `http://vision:8000` (production) or `http://localhost:8000` (development)
- **Stream Format**: MJPEG (Motion JPEG)
- **Content-Type**: `multipart/x-mixed-replace; boundary=frame`
- **Authentication**: None (add if needed for production)
- **CORS**: Enabled for all origins (configure for production)
### Base URL Configuration:
- **Production**: `http://vision:8000` (requires hostname setup)
- **Development**: `http://localhost:8000` (local testing)
- **Custom IP**: `http://192.168.1.100:8000` (replace with actual IP)
- **Custom hostname**: Configure DNS or /etc/hosts as needed
## 🔌 API Endpoints Reference
### 1. Get Camera List
```http
GET /cameras
```
**Response:**
```json
{
"camera1": {
"name": "camera1",
"status": "connected",
"is_recording": false,
"last_checked": "2025-01-28T10:30:00",
"device_info": {...}
},
"camera2": {...}
}
```
### 2. Start Camera Stream
```http
POST /cameras/{camera_name}/start-stream
```
**Response:**
```json
{
"success": true,
"message": "Started streaming for camera camera1"
}
```
### 3. Stop Camera Stream
```http
POST /cameras/{camera_name}/stop-stream
```
**Response:**
```json
{
"success": true,
"message": "Stopped streaming for camera camera1"
}
```
### 4. Live Video Stream
```http
GET /cameras/{camera_name}/stream
```
**Response:** MJPEG video stream
**Usage:** Set as `src` attribute of HTML `<img>` element
## ⚛️ React Integration Examples
### Basic Camera Stream Component
```jsx
import React, { useState, useEffect } from 'react';
const CameraStream = ({ cameraName, apiBaseUrl = 'http://vision:8000' }) => {
const [isStreaming, setIsStreaming] = useState(false);
const [error, setError] = useState(null);
const [loading, setLoading] = useState(false);
const startStream = async () => {
setLoading(true);
setError(null);
try {
const response = await fetch(`${apiBaseUrl}/cameras/${cameraName}/start-stream`, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
},
});
if (response.ok) {
setIsStreaming(true);
} else {
const errorData = await response.json();
setError(errorData.detail || 'Failed to start stream');
}
} catch (err) {
setError(`Network error: ${err.message}`);
} finally {
setLoading(false);
}
};
const stopStream = async () => {
setLoading(true);
try {
const response = await fetch(`${apiBaseUrl}/cameras/${cameraName}/stop-stream`, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
},
});
if (response.ok) {
setIsStreaming(false);
} else {
const errorData = await response.json();
setError(errorData.detail || 'Failed to stop stream');
}
} catch (err) {
setError(`Network error: ${err.message}`);
} finally {
setLoading(false);
}
};
return (
<div className="camera-stream">
<h3>Camera: {cameraName}</h3>
{/* Video Stream */}
<div className="stream-container">
{isStreaming ? (
<img
src={`${apiBaseUrl}/cameras/${cameraName}/stream?t=${Date.now()}`}
alt={`${cameraName} live stream`}
style={{
width: '100%',
maxWidth: '640px',
height: 'auto',
border: '2px solid #ddd',
borderRadius: '8px',
}}
onError={() => setError('Stream connection lost')}
/>
) : (
<div style={{
width: '100%',
maxWidth: '640px',
height: '360px',
backgroundColor: '#f0f0f0',
display: 'flex',
alignItems: 'center',
justifyContent: 'center',
border: '2px solid #ddd',
borderRadius: '8px',
}}>
<span>No Stream Active</span>
</div>
)}
</div>
{/* Controls */}
<div className="stream-controls" style={{ marginTop: '10px' }}>
<button
onClick={startStream}
disabled={loading || isStreaming}
style={{
padding: '8px 16px',
marginRight: '8px',
backgroundColor: '#28a745',
color: 'white',
border: 'none',
borderRadius: '4px',
cursor: loading ? 'not-allowed' : 'pointer',
}}
>
{loading ? 'Loading...' : 'Start Stream'}
</button>
<button
onClick={stopStream}
disabled={loading || !isStreaming}
style={{
padding: '8px 16px',
backgroundColor: '#dc3545',
color: 'white',
border: 'none',
borderRadius: '4px',
cursor: loading ? 'not-allowed' : 'pointer',
}}
>
{loading ? 'Loading...' : 'Stop Stream'}
</button>
</div>
{/* Error Display */}
{error && (
<div style={{
marginTop: '10px',
padding: '8px',
backgroundColor: '#f8d7da',
color: '#721c24',
border: '1px solid #f5c6cb',
borderRadius: '4px',
}}>
Error: {error}
</div>
)}
</div>
);
};
export default CameraStream;
```
### Multi-Camera Dashboard Component
```jsx
import React, { useState, useEffect } from 'react';
import CameraStream from './CameraStream';
const CameraDashboard = ({ apiBaseUrl = 'http://vision:8000' }) => {
const [cameras, setCameras] = useState({});
const [loading, setLoading] = useState(true);
const [error, setError] = useState(null);
useEffect(() => {
fetchCameras();
// Refresh camera status every 30 seconds
const interval = setInterval(fetchCameras, 30000);
return () => clearInterval(interval);
}, []);
const fetchCameras = async () => {
try {
const response = await fetch(`${apiBaseUrl}/cameras`);
if (response.ok) {
const data = await response.json();
setCameras(data);
setError(null);
} else {
setError('Failed to fetch cameras');
}
} catch (err) {
setError(`Network error: ${err.message}`);
} finally {
setLoading(false);
}
};
if (loading) {
return <div>Loading cameras...</div>;
}
if (error) {
return (
<div style={{ color: 'red', padding: '20px' }}>
Error: {error}
<button onClick={fetchCameras} style={{ marginLeft: '10px' }}>
Retry
</button>
</div>
);
}
return (
<div className="camera-dashboard">
<h1>USDA Vision Camera Dashboard</h1>
<div style={{
display: 'grid',
gridTemplateColumns: 'repeat(auto-fit, minmax(400px, 1fr))',
gap: '20px',
padding: '20px',
}}>
{Object.entries(cameras).map(([cameraName, cameraInfo]) => (
<div key={cameraName} style={{
border: '1px solid #ddd',
borderRadius: '8px',
padding: '15px',
backgroundColor: '#f9f9f9',
}}>
<CameraStream
cameraName={cameraName}
apiBaseUrl={apiBaseUrl}
/>
{/* Camera Status */}
<div style={{ marginTop: '10px', fontSize: '14px' }}>
<div>Status: <strong>{cameraInfo.status}</strong></div>
<div>Recording: <strong>{cameraInfo.is_recording ? 'Yes' : 'No'}</strong></div>
<div>Last Checked: {new Date(cameraInfo.last_checked).toLocaleString()}</div>
</div>
</div>
))}
</div>
</div>
);
};
export default CameraDashboard;
```
### Custom Hook for Camera Management
```jsx
import { useState, useEffect, useCallback } from 'react';
const useCameraStream = (cameraName, apiBaseUrl = 'http://vision:8000') => {
const [isStreaming, setIsStreaming] = useState(false);
const [loading, setLoading] = useState(false);
const [error, setError] = useState(null);
const startStream = useCallback(async () => {
setLoading(true);
setError(null);
try {
const response = await fetch(`${apiBaseUrl}/cameras/${cameraName}/start-stream`, {
method: 'POST',
});
if (response.ok) {
setIsStreaming(true);
return { success: true };
} else {
const errorData = await response.json();
const errorMsg = errorData.detail || 'Failed to start stream';
setError(errorMsg);
return { success: false, error: errorMsg };
}
} catch (err) {
const errorMsg = `Network error: ${err.message}`;
setError(errorMsg);
return { success: false, error: errorMsg };
} finally {
setLoading(false);
}
}, [cameraName, apiBaseUrl]);
const stopStream = useCallback(async () => {
setLoading(true);
try {
const response = await fetch(`${apiBaseUrl}/cameras/${cameraName}/stop-stream`, {
method: 'POST',
});
if (response.ok) {
setIsStreaming(false);
return { success: true };
} else {
const errorData = await response.json();
const errorMsg = errorData.detail || 'Failed to stop stream';
setError(errorMsg);
return { success: false, error: errorMsg };
}
} catch (err) {
const errorMsg = `Network error: ${err.message}`;
setError(errorMsg);
return { success: false, error: errorMsg };
} finally {
setLoading(false);
}
}, [cameraName, apiBaseUrl]);
const getStreamUrl = useCallback(() => {
return `${apiBaseUrl}/cameras/${cameraName}/stream?t=${Date.now()}`;
}, [cameraName, apiBaseUrl]);
return {
isStreaming,
loading,
error,
startStream,
stopStream,
getStreamUrl,
};
};
export default useCameraStream;
```
## 🎨 Styling with Tailwind CSS
```jsx
const CameraStreamTailwind = ({ cameraName }) => {
  const { isStreaming, loading, error, startStream, stopStream, getStreamUrl } = useCameraStream(cameraName);
  const [streamError, setStreamError] = React.useState(null);
return (
<div className="bg-white rounded-lg shadow-md p-6">
<h3 className="text-lg font-semibold mb-4">Camera: {cameraName}</h3>
{/* Stream Container */}
<div className="relative mb-4">
{isStreaming ? (
<img
src={getStreamUrl()}
alt={`${cameraName} live stream`}
className="w-full max-w-2xl h-auto border-2 border-gray-300 rounded-lg"
          onError={() => setStreamError('Stream connection lost')}
/>
) : (
<div className="w-full max-w-2xl h-64 bg-gray-100 border-2 border-gray-300 rounded-lg flex items-center justify-center">
<span className="text-gray-500">No Stream Active</span>
</div>
)}
</div>
{/* Controls */}
<div className="flex gap-2 mb-4">
<button
onClick={startStream}
disabled={loading || isStreaming}
className="px-4 py-2 bg-green-500 text-white rounded hover:bg-green-600 disabled:opacity-50 disabled:cursor-not-allowed"
>
{loading ? 'Loading...' : 'Start Stream'}
</button>
<button
onClick={stopStream}
disabled={loading || !isStreaming}
className="px-4 py-2 bg-red-500 text-white rounded hover:bg-red-600 disabled:opacity-50 disabled:cursor-not-allowed"
>
{loading ? 'Loading...' : 'Stop Stream'}
</button>
</div>
{/* Error Display */}
      {(error || streamError) && (
        <div className="p-3 bg-red-100 border border-red-400 text-red-700 rounded">
          Error: {error || streamError}
</div>
)}
</div>
);
};
```
## 🔧 Configuration Options
### Environment Variables (.env)
```env
# Production configuration (using 'vision' hostname)
REACT_APP_CAMERA_API_URL=http://vision:8000
REACT_APP_STREAM_REFRESH_INTERVAL=30000
REACT_APP_STREAM_TIMEOUT=10000
# Development configuration (using localhost)
# REACT_APP_CAMERA_API_URL=http://localhost:8000
# Custom IP configuration
# REACT_APP_CAMERA_API_URL=http://192.168.1.100:8000
```
### API Configuration
```javascript
const apiConfig = {
baseUrl: process.env.REACT_APP_CAMERA_API_URL || 'http://vision:8000',
timeout: parseInt(process.env.REACT_APP_STREAM_TIMEOUT) || 10000,
refreshInterval: parseInt(process.env.REACT_APP_STREAM_REFRESH_INTERVAL) || 30000,
};
```
### Hostname Setup Guide
```bash
# Option 1: Add to /etc/hosts (Linux/Mac)
echo "127.0.0.1 vision" | sudo tee -a /etc/hosts
# Option 2: Add to hosts file (Windows)
# Add to C:\Windows\System32\drivers\etc\hosts:
# 127.0.0.1 vision
# Option 3: Configure DNS
# Point 'vision' hostname to your server's IP address
# Verify hostname resolution
ping vision
```
## 🚨 Important Implementation Notes
### 1. MJPEG Stream Handling
- Use HTML `<img>` tag with `src` pointing to stream endpoint
- Add timestamp query parameter to prevent caching: `?t=${Date.now()}`
- Handle `onError` event for connection issues
### 2. Error Handling
- Network errors (fetch failures)
- HTTP errors (4xx, 5xx responses)
- Stream connection errors (img onError)
- Timeout handling for long requests
### 3. Performance Considerations
- Streams consume bandwidth continuously
- Stop streams when components unmount (see the cleanup sketch below)
- Limit concurrent streams based on system capacity
- Consider lazy loading for multiple cameras
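None of the examples above stop a stream on unmount; here is a minimal sketch of that cleanup, built on the `useCameraStream` hook defined earlier:
```jsx
import React, { useEffect } from 'react';
import useCameraStream from './useCameraStream';

const AutoCleanupStream = ({ cameraName }) => {
  const { isStreaming, startStream, stopStream, getStreamUrl } = useCameraStream(cameraName);

  useEffect(() => {
    startStream();
    // Release the camera and bandwidth when the component unmounts.
    return () => {
      stopStream();
    };
    // eslint-disable-next-line react-hooks/exhaustive-deps
  }, [cameraName]);

  return isStreaming ? (
    <img src={getStreamUrl()} alt={`${cameraName} live stream`} />
  ) : (
    <span>Connecting...</span>
  );
};

export default AutoCleanupStream;
```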
### 4. State Management
- Track streaming state per camera
- Handle loading states during API calls
- Manage error states with user feedback
- Refresh camera list periodically
## 📱 Mobile Considerations
```jsx
// Responsive design for mobile
const mobileStyles = {
container: {
padding: '10px',
maxWidth: '100vw',
},
stream: {
width: '100%',
maxWidth: '100vw',
height: 'auto',
},
controls: {
display: 'flex',
flexDirection: 'column',
gap: '8px',
},
};
```
## 🧪 Testing Integration
```javascript
// Test API connectivity
const testConnection = async () => {
try {
    const response = await fetch(`${apiConfig.baseUrl}/health`);
return response.ok;
} catch {
return false;
}
};
// Test camera availability
const testCamera = async (cameraName) => {
try {
    const response = await fetch(`${apiConfig.baseUrl}/cameras/${cameraName}/test-connection`, {
method: 'POST',
});
return response.ok;
} catch {
return false;
}
};
```
## 📁 Additional Files for AI Integration
### TypeScript Definitions
- `camera-api.types.ts` - Complete TypeScript definitions for all API types
- `streaming-api.http` - REST Client file with all streaming endpoints
- `STREAMING_GUIDE.md` - Comprehensive user guide for streaming functionality
### Quick Integration Checklist for AI Assistants
1. **Copy TypeScript types** from `camera-api.types.ts`
2. **Use API endpoints** from `streaming-api.http`
3. **Implement error handling** as shown in examples
4. **Add CORS configuration** if needed for production
5. **Test with multiple cameras** using provided examples
### Key Integration Points
- **Stream URL Format**: `${baseUrl}/cameras/${cameraName}/stream?t=${Date.now()}`
- **Start Stream**: `POST /cameras/{name}/start-stream`
- **Stop Stream**: `POST /cameras/{name}/stop-stream`
- **Camera List**: `GET /cameras`
- **Error Handling**: Always wrap in try-catch blocks
- **Loading States**: Implement for better UX
### Production Considerations
- Configure CORS for specific origins
- Add authentication if required
- Implement rate limiting
- Monitor system resources with multiple streams
- Add reconnection logic for network issues
This documentation provides everything an AI assistant needs to integrate the USDA Vision Camera streaming functionality into React applications, including complete code examples, error handling, and best practices.

View File

@@ -0,0 +1,542 @@
###############################################################################
# USDA Vision Camera System - Complete API Endpoints Documentation
#
# CONFIGURATION:
# - Default Base URL: http://localhost:8000 (local development)
# - Production Base URL: http://vision:8000 (when using hostname 'vision')
# - Custom hostname: Update @baseUrl variable below
#
# HOSTNAME SETUP:
# To use 'vision' hostname instead of 'localhost':
# 1. Add to /etc/hosts: 127.0.0.1 vision
# 2. Or configure DNS to point 'vision' to the server IP
# 3. Update camera_preview.html: API_BASE = 'http://vision:8000'
###############################################################################
# Base URL Configuration - Change this to match your setup
@baseUrl = http://vision:8000
# Alternative configurations:
# @baseUrl = http://localhost:8000 # Local development
# @baseUrl = http://192.168.1.100:8000 # Specific IP address
# @baseUrl = http://your-server:8000 # Custom hostname
###############################################################################
# CONFIGURATION GUIDE
###############################################################################
### HOSTNAME CONFIGURATION OPTIONS:
# Option 1: Using 'vision' hostname (recommended for production)
# - Requires hostname resolution setup
# - Add to /etc/hosts: 127.0.0.1 vision
# - Or configure DNS: vision -> server IP address
# - Update camera_preview.html: API_BASE = 'http://vision:8000'
# - Set @baseUrl = http://vision:8000
# Option 2: Using localhost (development)
# - Works immediately on local machine
# - Set @baseUrl = http://localhost:8000
# - Update camera_preview.html: API_BASE = 'http://localhost:8000'
# Option 3: Using specific IP address
# - Replace with actual server IP
# - Set @baseUrl = http://192.168.1.100:8000
# - Update camera_preview.html: API_BASE = 'http://192.168.1.100:8000'
# Option 4: Custom hostname
# - Configure DNS or /etc/hosts for custom name
# - Set @baseUrl = http://your-custom-name:8000
# - Update camera_preview.html: API_BASE = 'http://your-custom-name:8000'
### NETWORK CONFIGURATION:
# - Default port: 8000
# - CORS enabled for all origins (configure for production)
# - No authentication required (add if needed)
### CLIENT CONFIGURATION FILES TO UPDATE:
# 1. camera_preview.html - Update API_BASE constant
# 2. React projects - Update apiConfig.baseUrl
# 3. This file - Update @baseUrl variable
# 4. Any custom scripts - Update base URL
### TESTING CONNECTIVITY:
# Test if the API is reachable:
GET {{baseUrl}}/health
###############################################################################
# SYSTEM ENDPOINTS
###############################################################################
### Root endpoint - API information
GET {{baseUrl}}/
# Response: SuccessResponse
# {
# "success": true,
# "message": "USDA Vision Camera System API",
# "data": null,
# "timestamp": "2025-07-28T12:00:00"
# }
###
### Health check
GET http://localhost:8000/health
# Response: Simple health status
# {
# "status": "healthy",
# "timestamp": "2025-07-28T12:00:00"
# }
###
### Get system status
GET http://localhost:8000/system/status
# Response: SystemStatusResponse
# {
# "system_started": true,
# "mqtt_connected": true,
# "last_mqtt_message": "2025-07-28T12:00:00",
# "machines": {
# "vibratory_conveyor": {
# "name": "vibratory_conveyor",
# "state": "off",
# "last_updated": "2025-07-28T12:00:00"
# }
# },
# "cameras": {
# "camera1": {
# "name": "camera1",
# "status": "connected",
# "is_recording": false
# }
# },
# "active_recordings": 0,
# "total_recordings": 5,
# "uptime_seconds": 3600.5
# }
###############################################################################
# MACHINE ENDPOINTS
###############################################################################
### Get all machines status
GET http://localhost:8000/machines
# Response: Dict[str, MachineStatusResponse]
# {
# "vibratory_conveyor": {
# "name": "vibratory_conveyor",
# "state": "off",
# "last_updated": "2025-07-28T12:00:00",
# "last_message": "off",
# "mqtt_topic": "vision/vibratory_conveyor/state"
# },
# "blower_separator": {
# "name": "blower_separator",
# "state": "on",
# "last_updated": "2025-07-28T12:00:00",
# "last_message": "on",
# "mqtt_topic": "vision/blower_separator/state"
# }
# }
###############################################################################
# MQTT ENDPOINTS
###############################################################################
### Get MQTT status and statistics
GET http://localhost:8000/mqtt/status
# Response: MQTTStatusResponse
# {
# "connected": true,
# "broker_host": "192.168.1.110",
# "broker_port": 1883,
# "subscribed_topics": [
# "vision/vibratory_conveyor/state",
# "vision/blower_separator/state"
# ],
# "last_message_time": "2025-07-28T12:00:00",
# "message_count": 42,
# "error_count": 0,
# "uptime_seconds": 3600.5
# }
### Get recent MQTT events history
GET http://localhost:8000/mqtt/events
# Optional query parameter: limit (default: 5, max: 50)
# Response: MQTTEventsHistoryResponse
# {
# "events": [
# {
# "machine_name": "vibratory_conveyor",
# "topic": "vision/vibratory_conveyor/state",
# "payload": "on",
# "normalized_state": "on",
# "timestamp": "2025-07-28T15:30:45.123456",
# "message_number": 15
# },
# {
# "machine_name": "blower_separator",
# "topic": "vision/blower_separator/state",
# "payload": "off",
# "normalized_state": "off",
# "timestamp": "2025-07-28T15:29:12.654321",
# "message_number": 14
# }
# ],
# "total_events": 15,
# "last_updated": "2025-07-28T15:30:45.123456"
# }
### Get recent MQTT events with custom limit
GET http://localhost:8000/mqtt/events?limit=10
###############################################################################
# CAMERA ENDPOINTS
###############################################################################
### Get all cameras status
GET http://localhost:8000/cameras
# Response: Dict[str, CameraStatusResponse]
# {
# "camera1": {
# "name": "camera1",
# "status": "connected",
# "is_recording": false,
# "last_checked": "2025-07-28T12:00:00",
# "last_error": null,
# "device_info": {
# "friendly_name": "MindVision Camera",
# "serial_number": "ABC123"
# },
# "current_recording_file": null,
# "recording_start_time": null
# }
# }
###
### Get specific camera status
GET http://localhost:8000/cameras/camera1/status
### Get specific camera status
GET http://localhost:8000/cameras/camera2/status
# Response: CameraStatusResponse (same as above for single camera)
###############################################################################
# RECORDING CONTROL ENDPOINTS
###############################################################################
### Start recording (with all optional parameters)
POST http://localhost:8000/cameras/camera1/start-recording
Content-Type: application/json

{
"filename": "test_recording.avi",
"exposure_ms": 1.5,
"gain": 3.0,
"fps": 10.0
}
# Request Parameters (all optional):
# - filename: string - Custom filename (datetime prefix auto-added)
# - exposure_ms: float - Exposure time in milliseconds
# - gain: float - Camera gain value
# - fps: float - Target frames per second (0 = maximum speed, omit = use config default)
#
# Response: StartRecordingResponse
# {
# "success": true,
# "message": "Recording started for camera1",
# "filename": "20250728_120000_test_recording.avi"
# }
###
### Start recording (minimal - only filename)
POST http://localhost:8000/cameras/camera1/start-recording
Content-Type: application/json

{
"filename": "simple_test.avi"
}
###
### Start recording (only camera settings)
POST http://localhost:8000/cameras/camera1/start-recording
Content-Type: application/json

{
"exposure_ms": 2.0,
"gain": 4.0,
"fps": 0
}
###
### Start recording (empty body - all defaults)
POST http://localhost:8000/cameras/camera1/start-recording
Content-Type: application/json

{}
###
### Stop recording
POST http://localhost:8000/cameras/camera1/stop-recording
###
POST http://localhost:8000/cameras/camera2/stop-recording
# No request body required
# Response: StopRecordingResponse
# {
# "success": true,
# "message": "Recording stopped for camera1",
# "duration_seconds": 45.2
# }
###############################################################################
# AUTO-RECORDING CONTROL ENDPOINTS
###############################################################################
### Enable auto-recording for a camera
POST http://localhost:8000/cameras/camera1/auto-recording/enable
###
POST http://localhost:8000/cameras/camera2/auto-recording/enable
# No request body required
# Response: AutoRecordingConfigResponse
# {
# "success": true,
# "message": "Auto-recording enabled for camera1",
# "camera_name": "camera1",
# "enabled": true
# }
###
### Disable auto-recording for a camera
POST http://localhost:8000/cameras/camera1/auto-recording/disable
###
POST http://localhost:8000/cameras/camera2/auto-recording/disable
# No request body required
# Response: AutoRecordingConfigResponse
# {
# "success": true,
# "message": "Auto-recording disabled for camera1",
# "camera_name": "camera1",
# "enabled": false
# }
###
### Get auto-recording manager status
GET http://localhost:8000/auto-recording/status
# Response: AutoRecordingStatusResponse
# {
# "running": true,
# "auto_recording_enabled": true,
# "retry_queue": {},
# "enabled_cameras": ["camera1", "camera2"]
# }
###############################################################################
# CAMERA RECOVERY & DIAGNOSTICS ENDPOINTS
###############################################################################
### Test camera connection
POST http://localhost:8000/cameras/camera1/test-connection
###
POST http://localhost:8000/cameras/camera2/test-connection
# No request body required
# Response: CameraTestResponse
# {
# "success": true,
# "message": "Camera camera1 connection test passed",
# "camera_name": "camera1",
# "timestamp": "2025-07-28T12:00:00"
# }
###
### Reconnect camera (soft recovery)
POST http://localhost:8000/cameras/camera1/reconnect
###
POST http://localhost:8000/cameras/camera2/reconnect
# No request body required
# Response: CameraRecoveryResponse
# {
# "success": true,
# "message": "Camera camera1 reconnected successfully",
# "camera_name": "camera1",
# "operation": "reconnect",
# "timestamp": "2025-07-28T12:00:00"
# }
###
### Restart camera grab process
POST http://localhost:8000/cameras/camera1/restart-grab
###
POST http://localhost:8000/cameras/camera2/restart-grab
# Response: CameraRecoveryResponse (same structure as reconnect)
###
### Reset camera timestamp
POST http://localhost:8000/cameras/camera1/reset-timestamp
###
POST http://localhost:8000/cameras/camera2/reset-timestamp
# Response: CameraRecoveryResponse (same structure as reconnect)
###
### Full camera reset (hard recovery)
POST http://localhost:8000/cameras/camera1/full-reset
### Full camera reset (hard recovery)
POST http://localhost:8000/cameras/camera2/full-reset
# Response: CameraRecoveryResponse (same structure as reconnect)
###
### Reinitialize failed camera
POST http://localhost:8000/cameras/camera1/reinitialize
###
POST http://localhost:8000/cameras/camera2/reinitialize
# Response: CameraRecoveryResponse (same structure as reconnect)
###############################################################################
# RECORDING SESSIONS ENDPOINT
###############################################################################
### Get all recording sessions
GET http://localhost:8000/recordings
# Response: Dict[str, RecordingInfoResponse]
# {
# "rec_001": {
# "camera_name": "camera1",
# "filename": "20250728_120000_test.avi",
# "start_time": "2025-07-28T12:00:00",
# "state": "completed",
# "end_time": "2025-07-28T12:05:00",
# "file_size_bytes": 1048576,
# "frame_count": 1500,
# "duration_seconds": 300.0,
# "error_message": null
# }
# }
###############################################################################
# STORAGE ENDPOINTS
###############################################################################
### Get storage statistics
GET http://localhost:8000/storage/stats
# Response: StorageStatsResponse
# {
# "base_path": "/storage",
# "total_files": 25,
# "total_size_bytes": 52428800,
# "cameras": {
# "camera1": {
# "file_count": 15,
# "total_size_bytes": 31457280
# }
# },
# "disk_usage": {
# "total": 1000000000,
# "used": 500000000,
# "free": 500000000
# }
# }
###
### Get recording files list (with filters)
POST http://localhost:8000/storage/files
Content-Type: application/json

{
"camera_name": "camera1",
"start_date": "2025-07-25T00:00:00",
"end_date": "2025-07-28T23:59:59",
"limit": 50
}
# Request Parameters (all optional):
# - camera_name: string - Filter by specific camera
# - start_date: string (ISO format) - Filter files from this date
# - end_date: string (ISO format) - Filter files until this date
# - limit: integer (max 1000, default 100) - Maximum number of files to return
#
# Response: FileListResponse
# {
# "files": [
# {
# "filename": "20250728_120000_test.avi",
# "camera_name": "camera1",
# "file_size_bytes": 1048576,
# "created_date": "2025-07-28T12:00:00",
# "duration_seconds": 300.0
# }
# ],
# "total_count": 1
# }
###
### Get all files (no camera filter)
POST http://localhost:8000/storage/files
Content-Type: application/json

{
"limit": 100
}
###
### Cleanup old storage files
POST http://localhost:8000/storage/cleanup
Content-Type: application/json

{
"max_age_days": 7
}
# Request Parameters:
# - max_age_days: integer (optional) - Remove files older than this many days
# If not provided, uses config default (30 days)
#
# Response: CleanupResponse
# {
# "files_removed": 5,
# "bytes_freed": 10485760,
# "errors": []
# }
###############################################################################
# ERROR RESPONSES
###############################################################################
# All endpoints may return ErrorResponse on failure:
# {
# "error": "Error description",
# "details": "Additional error details",
# "timestamp": "2025-07-28T12:00:00"
# }
# Common HTTP status codes:
# - 200: Success
# - 400: Bad Request (invalid parameters)
# - 404: Not Found (camera/resource not found)
# - 500: Internal Server Error
# - 503: Service Unavailable (camera manager not available)
###############################################################################
# NOTES
###############################################################################
# 1. All timestamps are in ISO 8601 format
# 2. File sizes are in bytes
# 3. Camera names: "camera1", "camera2"
# 4. Machine names: "vibratory_conveyor", "blower_separator"
# 5. FPS behavior:
# - fps > 0: Capture at specified frame rate
# - fps = 0: Capture at MAXIMUM possible speed (no delay)
# - fps omitted: Uses camera config default
# 6. Filenames automatically get datetime prefix: YYYYMMDD_HHMMSS_filename.avi
# 7. Recovery endpoints should be used in order: test-connection → reconnect → restart-grab → full-reset → reinitialize
###############################################################################
# STREAMING ENDPOINTS
###############################################################################
### Start streaming for camera1
POST http://localhost:8000/cameras/camera1/start-stream
###
# View live stream (open in browser):
# http://localhost:8000/cameras/camera1/stream
### Stop streaming for camera1
POST http://localhost:8000/cameras/camera1/stop-stream

View File

@@ -0,0 +1,308 @@
### Get system status
GET http://localhost:8000/system/status
###
### Get camera1 status
GET http://localhost:8000/cameras/camera1/status
###
### Get camera2 status
GET http://localhost:8000/cameras/camera2/status
###
### RECORDING TESTS
### Note: All filenames will automatically have datetime prefix added
### Format: YYYYMMDD_HHMMSS_filename.avi (or auto-generated if no filename)
###
### FPS Behavior:
### - fps > 0: Capture at specified frame rate
### - fps = 0: Capture at MAXIMUM possible speed (no delay between frames)
### - fps omitted: Uses camera config default (usually 3.0 fps)
### - Video files saved with 30 FPS metadata when fps=0 for proper playback
###
### Start recording camera1 (basic)
POST http://localhost:8000/cameras/camera1/start-recording
Content-Type: application/json

{
"filename": "manual22_test_cam1.avi"
}
###
### Start recording camera1 (with camera settings)
POST http://localhost:8000/cameras/camera1/start-recording
Content-Type: application/json

{
"filename": "test_with_settings.avi",
"exposure_ms": 2.0,
"gain": 4.0,
"fps": 0
}
###
### Start recording camera2 (basic)
POST http://localhost:8000/cameras/camera2/start-recording
Content-Type: application/json

{
"filename": "manual_test_cam2.avi"
}
###
### Start recording camera2 (with different settings)
POST http://localhost:8000/cameras/camera2/start-recording
Content-Type: application/json

{
"filename": "high_fps_test.avi",
"exposure_ms": 0.5,
"gain": 2.5,
"fps": 10.0
}
###
### Start recording camera1 (no filename, only settings)
POST http://localhost:8000/cameras/camera1/start-recording
Content-Type: application/json

{
"exposure_ms": 1.5,
"gain": 3.0,
"fps": 7.0
}
###
### Start recording camera1 (only filename, no settings)
POST http://localhost:8000/cameras/camera1/start-recording
Content-Type: application/json

{
"filename": "just_filename_test.avi"
}
###
### Start recording camera2 (only exposure setting)
POST http://localhost:8000/cameras/camera2/start-recording
Content-Type: application/json

{
"exposure_ms": 3.0
}
###
### Start recording camera1 (only gain setting)
POST http://localhost:8000/cameras/camera1/start-recording
Content-Type: application/json

{
"gain": 5.5
}
###
### Start recording camera2 (only fps setting)
POST http://localhost:8000/cameras/camera2/start-recording
Content-Type: application/json

{
"fps": 15.0
}
###
### Start recording camera1 (maximum fps - no delay)
POST http://localhost:8000/cameras/camera1/start-recording
Content-Type: application/json

{
"filename": "max_fps_test.avi",
"fps": 0
}
###
### Start recording camera2 (maximum fps with settings)
POST http://localhost:8000/cameras/camera2/start-recording
Content-Type: application/json

{
"filename": "max_fps_low_exposure.avi",
"exposure_ms": 0.1,
"gain": 1.0,
"fps": 0
}
###
### Start recording camera1 (empty body - all defaults)
POST http://localhost:8000/cameras/camera1/start-recording
Content-Type: application/json

{}
###
### Stop camera1 recording
POST http://localhost:8000/cameras/camera1/stop-recording
###
### Stop camera2 recording
POST http://localhost:8000/cameras/camera2/stop-recording
###
### SYSTEM STATUS AND STORAGE TESTS
###
### Get all cameras status
GET http://localhost:8000/cameras
###
### Get storage statistics
GET http://localhost:8000/storage/stats
###
### Get storage files list
POST http://localhost:8000/storage/files
Content-Type: application/json

{
"camera_name": "camera1",
"limit": 10
}
###
### Get storage files list (all cameras)
POST http://localhost:8000/storage/files
Content-Type: application/json

{
"limit": 20
}
###
### Health check
GET http://localhost:8000/health
###
### CAMERA RECOVERY AND DIAGNOSTICS TESTS
###
### These endpoints help recover cameras that have failed to initialize or lost connection.
###
### Recovery Methods (in order of severity):
### 1. test-connection: Test if camera connection is working
### 2. reconnect: Soft reconnection using CameraReConnect()
### 3. restart-grab: Restart grab process using CameraRestartGrab()
### 4. reset-timestamp: Reset camera timestamp using CameraRstTimeStamp()
### 5. full-reset: Hard reset - uninitialize and reinitialize camera
### 6. reinitialize: Complete reinitialization for cameras that never initialized
###
### Recommended troubleshooting order:
### 1. Start with test-connection to diagnose the issue
### 2. Try reconnect first (most common fix)
### 3. If reconnect fails, try restart-grab
### 4. If still failing, try full-reset
### 5. Use reinitialize only for cameras that failed initial setup
###
### Test camera1 connection
POST http://localhost:8000/cameras/camera1/test-connection
###
### Test camera2 connection
POST http://localhost:8000/cameras/camera2/test-connection
###
### Reconnect camera1 (soft recovery)
POST http://localhost:8000/cameras/camera1/reconnect
###
### Reconnect camera2 (soft recovery)
POST http://localhost:8000/cameras/camera2/reconnect
###
### Restart camera1 grab process
POST http://localhost:8000/cameras/camera1/restart-grab
###
### Restart camera2 grab process
POST http://localhost:8000/cameras/camera2/restart-grab
###
### Reset camera1 timestamp
POST http://localhost:8000/cameras/camera1/reset-timestamp
###
### Reset camera2 timestamp
POST http://localhost:8000/cameras/camera2/reset-timestamp
###
### Full reset camera1 (hard recovery - uninitialize and reinitialize)
POST http://localhost:8000/cameras/camera1/full-reset
###
### Full reset camera2 (hard recovery - uninitialize and reinitialize)
POST http://localhost:8000/cameras/camera2/full-reset
###
### Reinitialize camera1 (for cameras that failed to initialize)
POST http://localhost:8000/cameras/camera1/reinitialize
###
### Reinitialize camera2 (for cameras that failed to initialize)
POST http://localhost:8000/cameras/camera2/reinitialize
###
### RECOVERY WORKFLOW EXAMPLES
###
### Example 1: Basic troubleshooting workflow for camera1
### Step 1: Test connection
POST http://localhost:8000/cameras/camera1/test-connection
### Step 2: If test fails, try reconnect
# POST http://localhost:8000/cameras/camera1/reconnect
### Step 3: If reconnect fails, try restart grab
# POST http://localhost:8000/cameras/camera1/restart-grab
### Step 4: If still failing, try full reset
# POST http://localhost:8000/cameras/camera1/full-reset
### Step 5: If camera never initialized, try reinitialize
# POST http://localhost:8000/cameras/camera1/reinitialize
###
### Example 2: Quick recovery sequence for camera2
### Try reconnect first (most common fix)
POST http://localhost:8000/cameras/camera2/reconnect
### If that doesn't work, try full reset
# POST http://localhost:8000/cameras/camera2/full-reset

View File

@@ -0,0 +1,367 @@
/**
* TypeScript definitions for USDA Vision Camera System API
*
* This file provides complete type definitions for AI assistants
* to integrate the camera streaming functionality into React/TypeScript projects.
*/
// =============================================================================
// BASE CONFIGURATION
// =============================================================================
export interface ApiConfig {
baseUrl: string;
timeout?: number;
refreshInterval?: number;
}
export const defaultApiConfig: ApiConfig = {
baseUrl: 'http://vision:8000', // Production default, change to 'http://localhost:8000' for development
timeout: 10000,
refreshInterval: 30000,
};
// =============================================================================
// CAMERA TYPES
// =============================================================================
export interface CameraDeviceInfo {
friendly_name?: string;
port_type?: string;
serial_number?: string;
device_index?: number;
error?: string;
}
export interface CameraInfo {
name: string;
status: 'connected' | 'disconnected' | 'error' | 'not_found' | 'available';
is_recording: boolean;
last_checked: string; // ISO date string
last_error?: string | null;
device_info?: CameraDeviceInfo;
current_recording_file?: string | null;
recording_start_time?: string | null; // ISO date string
}
export interface CameraListResponse {
[cameraName: string]: CameraInfo;
}
// =============================================================================
// STREAMING TYPES
// =============================================================================
export interface StreamStartRequest {
// No body required - camera name is in URL path
}
export interface StreamStartResponse {
success: boolean;
message: string;
}
export interface StreamStopRequest {
// No body required - camera name is in URL path
}
export interface StreamStopResponse {
success: boolean;
message: string;
}
export interface StreamStatus {
isStreaming: boolean;
streamUrl?: string;
error?: string;
}
// =============================================================================
// RECORDING TYPES
// =============================================================================
export interface StartRecordingRequest {
filename?: string;
exposure_ms?: number;
gain?: number;
fps?: number;
}
export interface StartRecordingResponse {
success: boolean;
message: string;
filename?: string;
}
export interface StopRecordingResponse {
success: boolean;
message: string;
}
// =============================================================================
// SYSTEM TYPES
// =============================================================================
export interface SystemStatusResponse {
status: string;
uptime: string;
api_server_running: boolean;
camera_manager_running: boolean;
mqtt_client_connected: boolean;
total_cameras: number;
active_recordings: number;
active_streams?: number;
}
export interface HealthResponse {
status: 'healthy' | 'unhealthy';
timestamp: string;
}
// =============================================================================
// ERROR TYPES
// =============================================================================
export interface ApiError {
detail: string;
status_code?: number;
}
export interface StreamError extends Error {
type: 'network' | 'api' | 'stream' | 'timeout';
cameraName: string;
originalError?: Error;
}
// =============================================================================
// HOOK TYPES
// =============================================================================
export interface UseCameraStreamResult {
isStreaming: boolean;
loading: boolean;
error: string | null;
startStream: () => Promise<{ success: boolean; error?: string }>;
stopStream: () => Promise<{ success: boolean; error?: string }>;
getStreamUrl: () => string;
refreshStream: () => void;
}
export interface UseCameraListResult {
cameras: CameraListResponse;
loading: boolean;
error: string | null;
refreshCameras: () => Promise<void>;
}
export interface UseCameraRecordingResult {
isRecording: boolean;
loading: boolean;
error: string | null;
currentFile: string | null;
startRecording: (options?: StartRecordingRequest) => Promise<{ success: boolean; error?: string }>;
stopRecording: () => Promise<{ success: boolean; error?: string }>;
}
// =============================================================================
// COMPONENT PROPS TYPES
// =============================================================================
export interface CameraStreamProps {
cameraName: string;
apiConfig?: ApiConfig;
autoStart?: boolean;
onStreamStart?: (cameraName: string) => void;
onStreamStop?: (cameraName: string) => void;
onError?: (error: StreamError) => void;
className?: string;
style?: React.CSSProperties;
}
export interface CameraDashboardProps {
apiConfig?: ApiConfig;
cameras?: string[]; // If provided, only show these cameras
showRecordingControls?: boolean;
showStreamingControls?: boolean;
refreshInterval?: number;
onCameraSelect?: (cameraName: string) => void;
className?: string;
}
export interface CameraControlsProps {
cameraName: string;
apiConfig?: ApiConfig;
showRecording?: boolean;
showStreaming?: boolean;
onAction?: (action: 'start-stream' | 'stop-stream' | 'start-recording' | 'stop-recording', cameraName: string) => void;
}
// =============================================================================
// API CLIENT TYPES
// =============================================================================
export interface CameraApiClient {
// System endpoints
getHealth(): Promise<HealthResponse>;
getSystemStatus(): Promise<SystemStatusResponse>;
// Camera endpoints
getCameras(): Promise<CameraListResponse>;
getCameraStatus(cameraName: string): Promise<CameraInfo>;
testCameraConnection(cameraName: string): Promise<{ success: boolean; message: string }>;
// Streaming endpoints
startStream(cameraName: string): Promise<StreamStartResponse>;
stopStream(cameraName: string): Promise<StreamStopResponse>;
getStreamUrl(cameraName: string): string;
// Recording endpoints
startRecording(cameraName: string, options?: StartRecordingRequest): Promise<StartRecordingResponse>;
stopRecording(cameraName: string): Promise<StopRecordingResponse>;
}
// =============================================================================
// UTILITY TYPES
// =============================================================================
export type CameraAction = 'start-stream' | 'stop-stream' | 'start-recording' | 'stop-recording' | 'test-connection';
export interface CameraActionResult {
success: boolean;
message: string;
error?: string;
}
export interface StreamingState {
[cameraName: string]: {
isStreaming: boolean;
isLoading: boolean;
error: string | null;
lastStarted?: Date;
};
}
export interface RecordingState {
[cameraName: string]: {
isRecording: boolean;
isLoading: boolean;
error: string | null;
currentFile: string | null;
startTime?: Date;
};
}
// =============================================================================
// EVENT TYPES
// =============================================================================
export interface CameraEvent {
type: 'stream-started' | 'stream-stopped' | 'stream-error' | 'recording-started' | 'recording-stopped' | 'recording-error';
cameraName: string;
timestamp: Date;
data?: any;
}
export type CameraEventHandler = (event: CameraEvent) => void;
// =============================================================================
// CONFIGURATION TYPES
// =============================================================================
export interface StreamConfig {
fps: number;
quality: number; // 1-100
timeout: number;
retryAttempts: number;
retryDelay: number;
}
export interface CameraStreamConfig extends StreamConfig {
cameraName: string;
autoReconnect: boolean;
maxReconnectAttempts: number;
}
// =============================================================================
// CONTEXT TYPES (for React Context)
// =============================================================================
export interface CameraContextValue {
cameras: CameraListResponse;
streamingState: StreamingState;
recordingState: RecordingState;
apiClient: CameraApiClient;
// Actions
startStream: (cameraName: string) => Promise<CameraActionResult>;
stopStream: (cameraName: string) => Promise<CameraActionResult>;
startRecording: (cameraName: string, options?: StartRecordingRequest) => Promise<CameraActionResult>;
stopRecording: (cameraName: string) => Promise<CameraActionResult>;
refreshCameras: () => Promise<void>;
// State
loading: boolean;
error: string | null;
}
// =============================================================================
// EXAMPLE USAGE TYPES
// =============================================================================
/**
* Example usage in React component:
*
* ```typescript
* import { CameraStreamProps, UseCameraStreamResult } from './camera-api.types';
*
* const CameraStream: React.FC<CameraStreamProps> = ({
* cameraName,
* apiConfig = defaultApiConfig,
* autoStart = false,
* onStreamStart,
* onStreamStop,
* onError
* }) => {
* const {
* isStreaming,
* loading,
* error,
* startStream,
* stopStream,
* getStreamUrl
* }: UseCameraStreamResult = useCameraStream(cameraName, apiConfig);
*
* // Component implementation...
* };
* ```
*/
/**
* Example API client usage:
*
* ```typescript
* const apiClient: CameraApiClient = new CameraApiClientImpl(defaultApiConfig);
*
* // Start streaming
* const result = await apiClient.startStream('camera1');
* if (result.success) {
* const streamUrl = apiClient.getStreamUrl('camera1');
* // Use streamUrl in img tag
* }
* ```
*/
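/**
 * Minimal client implementation sketch. Note: `CameraApiClientImpl` is
 * referenced in the example above but is not shipped with this file; a
 * hypothetical fetch-based version could look like this:
 *
 * ```typescript
 * class CameraApiClientImpl implements CameraApiClient {
 *   constructor(private config: ApiConfig = defaultApiConfig) {}
 *
 *   private async request<T>(path: string, method: 'GET' | 'POST' = 'GET', body?: unknown): Promise<T> {
 *     const response = await fetch(`${this.config.baseUrl}${path}`, {
 *       method,
 *       headers: body ? { 'Content-Type': 'application/json' } : undefined,
 *       body: body ? JSON.stringify(body) : undefined,
 *     });
 *     if (!response.ok) throw new Error(`HTTP ${response.status}`);
 *     return response.json() as Promise<T>;
 *   }
 *
 *   getHealth() { return this.request<HealthResponse>('/health'); }
 *   getSystemStatus() { return this.request<SystemStatusResponse>('/system/status'); }
 *   getCameras() { return this.request<CameraListResponse>('/cameras'); }
 *   getCameraStatus(cameraName: string) { return this.request<CameraInfo>(`/cameras/${cameraName}/status`); }
 *   testCameraConnection(cameraName: string) {
 *     return this.request<{ success: boolean; message: string }>(`/cameras/${cameraName}/test-connection`, 'POST');
 *   }
 *   startStream(cameraName: string) { return this.request<StreamStartResponse>(`/cameras/${cameraName}/start-stream`, 'POST'); }
 *   stopStream(cameraName: string) { return this.request<StreamStopResponse>(`/cameras/${cameraName}/stop-stream`, 'POST'); }
 *   getStreamUrl(cameraName: string) { return `${this.config.baseUrl}/cameras/${cameraName}/stream?t=${Date.now()}`; }
 *   startRecording(cameraName: string, options?: StartRecordingRequest) {
 *     return this.request<StartRecordingResponse>(`/cameras/${cameraName}/start-recording`, 'POST', options ?? {});
 *   }
 *   stopRecording(cameraName: string) { return this.request<StopRecordingResponse>(`/cameras/${cameraName}/stop-recording`, 'POST'); }
 * }
 * ```
 */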
/**
* Example hook usage:
*
* ```typescript
* const MyComponent = () => {
* const { cameras, loading, error, refreshCameras } = useCameraList();
* const { isStreaming, startStream, stopStream } = useCameraStream('camera1');
*
* // Component logic...
* };
* ```
*/
export default {};


@@ -0,0 +1,543 @@
### USDA Vision Camera Streaming API
###
### CONFIGURATION:
### - Production: http://vision:8000 (requires hostname setup)
### - Development: http://localhost:8000
### - Custom: Update @baseUrl below to match your setup
###
### This file contains streaming-specific API endpoints for live camera preview
### Use with VS Code REST Client extension or similar tools.
# Base URL - Update to match your configuration
@baseUrl = http://vision:8000
# Alternative: @baseUrl = http://localhost:8000
### =============================================================================
### STREAMING ENDPOINTS (NEW FUNCTIONALITY)
### =============================================================================
### Start camera streaming for live preview
### This creates a separate camera connection that doesn't interfere with recording
POST {{baseUrl}}/cameras/camera1/start-stream
Content-Type: application/json
### Expected Response:
# {
# "success": true,
# "message": "Started streaming for camera camera1"
# }
###
### Stop camera streaming
POST {{baseUrl}}/cameras/camera1/stop-stream
Content-Type: application/json
### Expected Response:
# {
# "success": true,
# "message": "Stopped streaming for camera camera1"
# }
###
### Get live MJPEG stream (open in browser or use as img src)
### This endpoint returns a continuous MJPEG stream
### Content-Type: multipart/x-mixed-replace; boundary=frame
GET {{baseUrl}}/cameras/camera1/stream
### Usage in HTML:
# <img src="http://localhost:8000/cameras/camera1/stream" alt="Live Stream" />
### Usage in React:
# <img src={`${apiBaseUrl}/cameras/${cameraName}/stream?t=${Date.now()}`} />
###
### Start streaming for camera2
POST {{baseUrl}}/cameras/camera2/start-stream
Content-Type: application/json
###
### Get live stream for camera2
GET {{baseUrl}}/cameras/camera2/stream
###
### Stop streaming for camera2
POST {{baseUrl}}/cameras/camera2/stop-stream
Content-Type: application/json
### =============================================================================
### CONCURRENT OPERATIONS TESTING
### =============================================================================
### Test Scenario: Streaming + Recording Simultaneously
### This demonstrates that streaming doesn't block recording
### Step 1: Start streaming first
POST {{baseUrl}}/cameras/camera1/start-stream
Content-Type: application/json
###
### Step 2: Start recording (while streaming continues)
POST {{baseUrl}}/cameras/camera1/start-recording
Content-Type: application/json
{
"filename": "concurrent_test.avi"
}
###
### Step 3: Check both are running
GET {{baseUrl}}/cameras/camera1
### Expected Response shows both recording and streaming active:
# {
# "camera1": {
# "name": "camera1",
# "status": "connected",
# "is_recording": true,
# "current_recording_file": "concurrent_test.avi",
# "recording_start_time": "2025-01-28T10:30:00.000Z"
# }
# }
###
### Step 4: Stop recording (streaming continues)
POST {{baseUrl}}/cameras/camera1/stop-recording
Content-Type: application/json
###
### Step 5: Verify streaming still works
GET {{baseUrl}}/cameras/camera1/stream
###
### Step 6: Stop streaming
POST {{baseUrl}}/cameras/camera1/stop-stream
Content-Type: application/json
### =============================================================================
### MULTIPLE CAMERA STREAMING
### =============================================================================
### Start streaming on multiple cameras simultaneously
POST {{baseUrl}}/cameras/camera1/start-stream
Content-Type: application/json
###
POST {{baseUrl}}/cameras/camera2/start-stream
Content-Type: application/json
###
### Check status of all cameras
GET {{baseUrl}}/cameras
###
### Access multiple streams (open in separate browser tabs)
GET {{baseUrl}}/cameras/camera1/stream
###
GET {{baseUrl}}/cameras/camera2/stream
###
### Stop all streaming
POST {{baseUrl}}/cameras/camera1/stop-stream
Content-Type: application/json
###
POST {{baseUrl}}/cameras/camera2/stop-stream
Content-Type: application/json
### =============================================================================
### ERROR TESTING
### =============================================================================
### Test with invalid camera name
POST {{baseUrl}}/cameras/invalid_camera/start-stream
Content-Type: application/json
### Expected Response:
# {
# "detail": "Camera streamer not found: invalid_camera"
# }
###
### Test stream endpoint without starting stream first
GET {{baseUrl}}/cameras/camera1/stream
### Expected: May return error or empty stream depending on camera state
###
### Test starting stream when camera is in error state
POST {{baseUrl}}/cameras/camera1/start-stream
Content-Type: application/json
### If camera has issues, expected response:
# {
# "success": false,
# "message": "Failed to start streaming for camera camera1"
# }
### =============================================================================
### INTEGRATION EXAMPLES FOR AI ASSISTANTS
### =============================================================================
### React Component Integration:
# const CameraStream = ({ cameraName }) => {
# const [isStreaming, setIsStreaming] = useState(false);
#
# const startStream = async () => {
# const response = await fetch(`${baseUrl}/cameras/${cameraName}/start-stream`, {
# method: 'POST'
# });
# if (response.ok) {
# setIsStreaming(true);
# }
# };
#
# return (
# <div>
# <button onClick={startStream}>Start Stream</button>
# {isStreaming && (
# <img src={`${baseUrl}/cameras/${cameraName}/stream?t=${Date.now()}`} />
# )}
# </div>
# );
# };
### JavaScript Fetch Example:
# const streamAPI = {
# async startStream(cameraName) {
# const response = await fetch(`${baseUrl}/cameras/${cameraName}/start-stream`, {
# method: 'POST',
# headers: { 'Content-Type': 'application/json' }
# });
# return response.json();
# },
#
# async stopStream(cameraName) {
# const response = await fetch(`${baseUrl}/cameras/${cameraName}/stop-stream`, {
# method: 'POST',
# headers: { 'Content-Type': 'application/json' }
# });
# return response.json();
# },
#
# getStreamUrl(cameraName) {
# return `${baseUrl}/cameras/${cameraName}/stream?t=${Date.now()}`;
# }
# };
### Vue.js Integration:
# <template>
# <div>
# <button @click="startStream">Start Stream</button>
# <img v-if="isStreaming" :src="streamUrl" />
# </div>
# </template>
#
# <script>
# export default {
# data() {
# return {
# isStreaming: false,
# cameraName: 'camera1'
# };
# },
# computed: {
# streamUrl() {
# return `${this.baseUrl}/cameras/${this.cameraName}/stream?t=${Date.now()}`;
# }
# },
# methods: {
# async startStream() {
# const response = await fetch(`${this.baseUrl}/cameras/${this.cameraName}/start-stream`, {
# method: 'POST'
# });
# if (response.ok) {
# this.isStreaming = true;
# }
# }
# }
# };
# </script>
### =============================================================================
### TROUBLESHOOTING
### =============================================================================
### If streams don't start:
# 1. Check camera status: GET /cameras
# 2. Verify system health: GET /health
# 3. Test camera connection: POST /cameras/{name}/test-connection
# 4. Check if camera is already recording (shouldn't matter, but good to know)
### If stream image doesn't load:
# 1. Verify stream was started: POST /cameras/{name}/start-stream
# 2. Check browser console for CORS errors
# 3. Try accessing stream URL directly in browser
# 4. Add timestamp to prevent caching: ?t=${Date.now()}
### If concurrent operations fail:
# 1. This should work - streaming and recording use separate connections
# 2. Check system logs for resource conflicts
# 3. Verify sufficient system resources (CPU/Memory)
# 4. Test with one camera first, then multiple
### Performance Notes:
# - Streaming uses ~10 FPS by default (configurable)
# - JPEG quality set to 70% (configurable)
# - Each stream uses additional CPU/memory
# - Multiple concurrent streams may impact performance
### =============================================================================
### CAMERA CONFIGURATION ENDPOINTS (NEW)
### =============================================================================
### Get camera configuration
GET {{baseUrl}}/cameras/camera1/config
### Expected Response:
# {
# "name": "camera1",
# "machine_topic": "vibratory_conveyor",
# "storage_path": "/storage/camera1",
# "enabled": true,
# "auto_start_recording_enabled": true,
# "auto_recording_max_retries": 3,
# "auto_recording_retry_delay_seconds": 2,
# "exposure_ms": 1.0,
# "gain": 3.5,
# "target_fps": 0,
# "sharpness": 120,
# "contrast": 110,
# "saturation": 100,
# "gamma": 100,
# "noise_filter_enabled": true,
# "denoise_3d_enabled": false,
# "auto_white_balance": true,
# "color_temperature_preset": 0,
# "wb_red_gain": 1.0,
# "wb_green_gain": 1.0,
# "wb_blue_gain": 1.0,
# "anti_flicker_enabled": true,
# "light_frequency": 1,
# "bit_depth": 8,
# "hdr_enabled": false,
# "hdr_gain_mode": 0
# }
###
### Update basic camera settings (real-time, no restart required)
PUT {{baseUrl}}/cameras/camera1/config
Content-Type: application/json
{
"exposure_ms": 2.0,
"gain": 4.0,
"target_fps": 10.0
}
###
### Update image quality settings
PUT {{baseUrl}}/cameras/camera1/config
Content-Type: application/json
{
"sharpness": 150,
"contrast": 120,
"saturation": 110,
"gamma": 90
}
###
### Update advanced settings
PUT {{baseUrl}}/cameras/camera1/config
Content-Type: application/json
{
"anti_flicker_enabled": true,
"light_frequency": 1,
"auto_white_balance": false,
"color_temperature_preset": 2
}
###
### Update white balance RGB gains (manual white balance)
PUT {{baseUrl}}/cameras/camera1/config
Content-Type: application/json
{
"auto_white_balance": false,
"wb_red_gain": 1.2,
"wb_green_gain": 1.0,
"wb_blue_gain": 0.8
}
###
### Enable HDR mode
PUT {{baseUrl}}/cameras/camera1/config
Content-Type: application/json
{
"hdr_enabled": true,
"hdr_gain_mode": 1
}
###
### Update noise reduction settings (requires restart)
PUT {{baseUrl}}/cameras/camera1/config
Content-Type: application/json
{
"noise_filter_enabled": false,
"denoise_3d_enabled": true
}
###
### Apply configuration (restart camera with new settings)
POST {{baseUrl}}/cameras/camera1/apply-config
### Expected Response:
# {
# "success": true,
# "message": "Configuration applied to camera camera1"
# }
###
### Get camera2 configuration
GET {{baseUrl}}/cameras/camera2/config
###
### Update camera2 for outdoor lighting
PUT {{baseUrl}}/cameras/camera2/config
Content-Type: application/json
{
"exposure_ms": 0.5,
"gain": 2.0,
"sharpness": 130,
"contrast": 115,
"anti_flicker_enabled": true,
"light_frequency": 1
}
### =============================================================================
### CONFIGURATION TESTING SCENARIOS
### =============================================================================
### Scenario 1: Low light optimization
PUT {{baseUrl}}/cameras/camera1/config
Content-Type: application/json
{
"exposure_ms": 5.0,
"gain": 8.0,
"noise_filter_enabled": true,
"denoise_3d_enabled": true
}
###
### Scenario 2: High speed capture
PUT {{baseUrl}}/cameras/camera1/config
Content-Type: application/json
{
"exposure_ms": 0.2,
"gain": 1.0,
"target_fps": 30.0,
"sharpness": 180
}
###
### Scenario 3: Color accuracy for food inspection
PUT {{baseUrl}}/cameras/camera1/config
Content-Type: application/json
{
"auto_white_balance": false,
"color_temperature_preset": 1,
"saturation": 120,
"contrast": 105,
"gamma": 95
}
###
### Scenario 4: HDR for high contrast scenes
PUT {{baseUrl}}/cameras/camera1/config
Content-Type: application/json
{
"hdr_enabled": true,
"hdr_gain_mode": 2,
"exposure_ms": 1.0,
"gain": 3.0
}
### =============================================================================
### ERROR TESTING FOR CONFIGURATION
### =============================================================================
### Test invalid camera name
GET {{baseUrl}}/cameras/invalid_camera/config
###
### Test invalid exposure range
PUT {{baseUrl}}/cameras/camera1/config
Content-Type: application/json
{
"exposure_ms": 2000.0
}
### Expected: HTTP 422 validation error
###
### Test invalid gain range
PUT {{baseUrl}}/cameras/camera1/config
Content-Type: application/json
{
"gain": 50.0
}
### Expected: HTTP 422 validation error
###
### Test empty configuration update
PUT {{baseUrl}}/cameras/camera1/config
Content-Type: application/json
{}
### Expected: HTTP 400 "No configuration updates provided"

api/camera_preview.html Normal file

@@ -0,0 +1,336 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>USDA Vision Camera Live Preview</title>
<style>
body {
font-family: Arial, sans-serif;
margin: 0;
padding: 20px;
background-color: #f5f5f5;
}
.container {
max-width: 1200px;
margin: 0 auto;
background-color: white;
padding: 20px;
border-radius: 8px;
box-shadow: 0 2px 10px rgba(0,0,0,0.1);
}
h1 {
color: #333;
text-align: center;
margin-bottom: 30px;
}
.camera-grid {
display: grid;
grid-template-columns: repeat(auto-fit, minmax(400px, 1fr));
gap: 20px;
margin-bottom: 30px;
}
.camera-card {
border: 1px solid #ddd;
border-radius: 8px;
padding: 15px;
background-color: #fafafa;
}
.camera-title {
font-size: 18px;
font-weight: bold;
margin-bottom: 10px;
color: #333;
}
.camera-stream {
width: 100%;
max-width: 100%;
height: auto;
border: 2px solid #ddd;
border-radius: 4px;
background-color: #000;
min-height: 200px;
display: block;
}
.camera-controls {
margin-top: 10px;
display: flex;
gap: 10px;
flex-wrap: wrap;
}
.btn {
padding: 8px 16px;
border: none;
border-radius: 4px;
cursor: pointer;
font-size: 14px;
transition: background-color 0.3s;
}
.btn-primary {
background-color: #007bff;
color: white;
}
.btn-primary:hover {
background-color: #0056b3;
}
.btn-secondary {
background-color: #6c757d;
color: white;
}
.btn-secondary:hover {
background-color: #545b62;
}
.btn-success {
background-color: #28a745;
color: white;
}
.btn-success:hover {
background-color: #1e7e34;
}
.btn-danger {
background-color: #dc3545;
color: white;
}
.btn-danger:hover {
background-color: #c82333;
}
.status {
margin-top: 10px;
padding: 8px;
border-radius: 4px;
font-size: 14px;
}
.status-success {
background-color: #d4edda;
color: #155724;
border: 1px solid #c3e6cb;
}
.status-error {
background-color: #f8d7da;
color: #721c24;
border: 1px solid #f5c6cb;
}
.status-info {
background-color: #d1ecf1;
color: #0c5460;
border: 1px solid #bee5eb;
}
.system-info {
margin-top: 30px;
padding: 15px;
background-color: #e9ecef;
border-radius: 4px;
}
.system-info h3 {
margin-top: 0;
color: #495057;
}
.api-info {
font-family: monospace;
font-size: 12px;
color: #6c757d;
}
</style>
</head>
<body>
<div class="container">
<h1>🎥 USDA Vision Camera Live Preview</h1>
<div class="camera-grid" id="cameraGrid">
<!-- Camera cards will be dynamically generated -->
</div>
<div class="system-info">
<h3>📡 System Information</h3>
<div id="systemStatus">Loading system status...</div>
<h3>🔗 API Endpoints</h3>
<div class="api-info">
<p><strong>Live Stream:</strong> GET /cameras/{camera_name}/stream</p>
<p><strong>Start Stream:</strong> POST /cameras/{camera_name}/start-stream</p>
<p><strong>Stop Stream:</strong> POST /cameras/{camera_name}/stop-stream</p>
<p><strong>Camera Status:</strong> GET /cameras</p>
</div>
</div>
</div>
<script>
const API_BASE = 'http://vision:8000';
let cameras = {};
// Initialize the page
async function init() {
await loadCameras();
await loadSystemStatus();
// Refresh status every 5 seconds
setInterval(loadSystemStatus, 5000);
}
// Load camera information
async function loadCameras() {
try {
const response = await fetch(`${API_BASE}/cameras`);
const data = await response.json();
cameras = data;
renderCameras();
} catch (error) {
console.error('Error loading cameras:', error);
showError('Failed to load camera information');
}
}
// Load system status
async function loadSystemStatus() {
try {
const response = await fetch(`${API_BASE}/system/status`);
const data = await response.json();
const statusDiv = document.getElementById('systemStatus');
statusDiv.innerHTML = `
<p><strong>System:</strong> ${data.status}</p>
<p><strong>Uptime:</strong> ${data.uptime}</p>
<p><strong>API Server:</strong> ${data.api_server_running ? '✅ Running' : '❌ Stopped'}</p>
<p><strong>Camera Manager:</strong> ${data.camera_manager_running ? '✅ Running' : '❌ Stopped'}</p>
<p><strong>MQTT Client:</strong> ${data.mqtt_client_connected ? '✅ Connected' : '❌ Disconnected'}</p>
`;
} catch (error) {
console.error('Error loading system status:', error);
document.getElementById('systemStatus').innerHTML = '<p style="color: red;">Failed to load system status</p>';
}
}
// Render camera cards
function renderCameras() {
const grid = document.getElementById('cameraGrid');
grid.innerHTML = '';
for (const [cameraName, cameraInfo] of Object.entries(cameras)) {
const card = createCameraCard(cameraName, cameraInfo);
grid.appendChild(card);
}
}
// Create a camera card
function createCameraCard(cameraName, cameraInfo) {
const card = document.createElement('div');
card.className = 'camera-card';
card.innerHTML = `
<div class="camera-title">${cameraName}</div>
<img class="camera-stream" id="stream-${cameraName}"
src="data:image/svg+xml;base64,PHN2ZyB3aWR0aD0iNDAwIiBoZWlnaHQ9IjIwMCIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIj48cmVjdCB3aWR0aD0iMTAwJSIgaGVpZ2h0PSIxMDAlIiBmaWxsPSIjZGRkIi8+PHRleHQgeD0iNTAlIiB5PSI1MCUiIGZvbnQtZmFtaWx5PSJBcmlhbCIgZm9udC1zaXplPSIxNCIgZmlsbD0iIzk5OSIgdGV4dC1hbmNob3I9Im1pZGRsZSIgZHk9Ii4zZW0iPk5vIFN0cmVhbTwvdGV4dD48L3N2Zz4="
alt="Camera Stream">
<div class="camera-controls">
<button class="btn btn-success" onclick="startStream('${cameraName}')">Start Stream</button>
<button class="btn btn-danger" onclick="stopStream('${cameraName}')">Stop Stream</button>
<button class="btn btn-secondary" onclick="refreshStream('${cameraName}')">Refresh</button>
</div>
<div class="status status-info" id="status-${cameraName}">
Status: ${cameraInfo.status} | Recording: ${cameraInfo.is_recording ? 'Yes' : 'No'}
</div>
`;
return card;
}
// Start streaming for a camera
async function startStream(cameraName) {
try {
updateStatus(cameraName, 'Starting stream...', 'info');
// Start the stream
const response = await fetch(`${API_BASE}/cameras/${cameraName}/start-stream`, {
method: 'POST'
});
if (response.ok) {
// Set the stream source
const streamImg = document.getElementById(`stream-${cameraName}`);
streamImg.src = `${API_BASE}/cameras/${cameraName}/stream?t=${Date.now()}`;
updateStatus(cameraName, 'Stream started successfully', 'success');
} else {
const error = await response.text();
updateStatus(cameraName, `Failed to start stream: ${error}`, 'error');
}
} catch (error) {
console.error('Error starting stream:', error);
updateStatus(cameraName, `Error starting stream: ${error.message}`, 'error');
}
}
// Stop streaming for a camera
async function stopStream(cameraName) {
try {
updateStatus(cameraName, 'Stopping stream...', 'info');
const response = await fetch(`${API_BASE}/cameras/${cameraName}/stop-stream`, {
method: 'POST'
});
if (response.ok) {
// Clear the stream source
const streamImg = document.getElementById(`stream-${cameraName}`);
streamImg.src = "data:image/svg+xml;base64,PHN2ZyB3aWR0aD0iNDAwIiBoZWlnaHQ9IjIwMCIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIj48cmVjdCB3aWR0aD0iMTAwJSIgaGVpZ2h0PSIxMDAlIiBmaWxsPSIjZGRkIi8+PHRleHQgeD0iNTAlIiB5PSI1MCUiIGZvbnQtZmFtaWx5PSJBcmlhbCIgZm9udC1zaXplPSIxNCIgZmlsbD0iIzk5OSIgdGV4dC1hbmNob3I9Im1pZGRsZSIgZHk9Ii4zZW0iPk5vIFN0cmVhbTwvdGV4dD48L3N2Zz4=";
updateStatus(cameraName, 'Stream stopped successfully', 'success');
} else {
const error = await response.text();
updateStatus(cameraName, `Failed to stop stream: ${error}`, 'error');
}
} catch (error) {
console.error('Error stopping stream:', error);
updateStatus(cameraName, `Error stopping stream: ${error.message}`, 'error');
}
}
// Refresh stream for a camera
function refreshStream(cameraName) {
const streamImg = document.getElementById(`stream-${cameraName}`);
if (streamImg.src.includes('/stream')) {
streamImg.src = `${API_BASE}/cameras/${cameraName}/stream?t=${Date.now()}`;
updateStatus(cameraName, 'Stream refreshed', 'info');
} else {
updateStatus(cameraName, 'No active stream to refresh', 'error');
}
}
// Update status message
function updateStatus(cameraName, message, type) {
const statusDiv = document.getElementById(`status-${cameraName}`);
statusDiv.className = `status status-${type}`;
statusDiv.textContent = message;
}
// Show error message
function showError(message) {
alert(`Error: ${message}`);
}
// Initialize when page loads
document.addEventListener('DOMContentLoaded', init);
</script>
</body>
</html>

api/camera_sdk/README.md Normal file

@@ -0,0 +1,66 @@
# Camera SDK Library
This directory contains the core GigE camera SDK library required for the USDA Vision Camera System.
## Contents
### Core SDK Library
- **`mvsdk.py`** - Python wrapper for the GigE camera SDK
- Provides Python bindings for camera control functions
- Handles camera initialization, configuration, and image capture
- **Critical dependency** - Required for all camera operations
## Important Notes
⚠️ **This is NOT demo code** - This directory contains the core SDK library that the entire system depends on for camera functionality.
### SDK Library Details
- The `mvsdk.py` file is a Python wrapper around the native camera SDK
- It provides ctypes bindings to the underlying C/C++ camera library
- Contains all camera control functions, constants, and data structures
- Used by all camera modules in `usda_vision_system/camera/`
### Dependencies
- Requires the native camera SDK library (`libMVSDK.so` on Linux)
- The native library should be installed system-wide or available in the library path
## Usage
This SDK is automatically imported by the camera modules:
```python
# Imported by camera modules
import sys
import os
sys.path.append(os.path.join(os.path.dirname(__file__), "..", "..", "camera_sdk"))
import mvsdk
```
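For quick experiments outside the camera modules, the same wrapper can be driven directly. A minimal sketch (assumes the enumerate/init/uninit entry points exposed by `mvsdk.py`; note the exposure call takes microseconds):

```python
import sys

sys.path.append("./camera_sdk")
import mvsdk

# Enumerate attached GigE cameras, open the first one, set a 1 ms exposure,
# then release the handle. Complete programs live in ../demos/.
devices = mvsdk.CameraEnumerateDevice()
if not devices:
    raise RuntimeError("No GigE cameras found")

hCamera = mvsdk.CameraInit(devices[0], -1, -1)
try:
    mvsdk.CameraSetExposureTime(hCamera, 1.0 * 1000)  # 1 ms -> microseconds
finally:
    mvsdk.CameraUnInit(hCamera)
```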
## Demo Code
For camera usage examples and demo code, see the `../demos/` directory:
- `cv_grab.py` - Basic camera capture example
- `cv_grab2.py` - Multi-camera capture example
- `cv_grab_callback.py` - Callback-based capture example
- `grab.py` - Simple image capture example
## Troubleshooting
If you encounter camera SDK issues:
1. **Check SDK Installation**:
```bash
ls -la camera_sdk/mvsdk.py
```
2. **Test SDK Import**:
```bash
python -c "import sys; sys.path.append('./camera_sdk'); import mvsdk; print('SDK imported successfully')"
```
3. **Check Native Library**:
```bash
# On Linux
ldconfig -p | grep MVSDK
```
For more troubleshooting, see the main [README.md](../README.md#troubleshooting).

api/camera_sdk/mvsdk.py Normal file

File diff suppressed because it is too large

api/config.json Normal file

@@ -0,0 +1,92 @@
{
"mqtt": {
"broker_host": "192.168.1.110",
"broker_port": 1883,
"username": null,
"password": null,
"topics": {
"vibratory_conveyor": "vision/vibratory_conveyor/state",
"blower_separator": "vision/blower_separator/state"
}
},
"storage": {
"base_path": "/storage",
"max_file_size_mb": 1000,
"max_recording_duration_minutes": 60,
"cleanup_older_than_days": 30
},
"system": {
"camera_check_interval_seconds": 2,
"log_level": "DEBUG",
"log_file": "usda_vision_system.log",
"api_host": "0.0.0.0",
"api_port": 8000,
"enable_api": true,
"timezone": "America/New_York",
"auto_recording_enabled": true
},
"cameras": [
{
"name": "camera1",
"machine_topic": "blower_separator",
"storage_path": "/storage/camera1",
"exposure_ms": 0.3,
"gain": 4.0,
"target_fps": 0,
"enabled": true,
"video_format": "mp4",
"video_codec": "mp4v",
"video_quality": 95,
"auto_start_recording_enabled": true,
"auto_recording_max_retries": 3,
"auto_recording_retry_delay_seconds": 2,
"sharpness": 0,
"contrast": 100,
"saturation": 100,
"gamma": 100,
"noise_filter_enabled": false,
"denoise_3d_enabled": false,
"auto_white_balance": false,
"color_temperature_preset": 0,
"wb_red_gain": 0.94,
"wb_green_gain": 1.0,
"wb_blue_gain": 0.87,
"anti_flicker_enabled": false,
"light_frequency": 0,
"bit_depth": 8,
"hdr_enabled": false,
"hdr_gain_mode": 2
},
{
"name": "camera2",
"machine_topic": "vibratory_conveyor",
"storage_path": "/storage/camera2",
"exposure_ms": 0.2,
"gain": 2.0,
"target_fps": 0,
"enabled": true,
"video_format": "mp4",
"video_codec": "mp4v",
"video_quality": 95,
"auto_start_recording_enabled": true,
"auto_recording_max_retries": 3,
"auto_recording_retry_delay_seconds": 2,
"sharpness": 0,
"contrast": 100,
"saturation": 100,
"gamma": 100,
"noise_filter_enabled": false,
"denoise_3d_enabled": false,
"auto_white_balance": false,
"color_temperature_preset": 0,
"wb_red_gain": 1.01,
"wb_green_gain": 1.0,
"wb_blue_gain": 0.87,
"anti_flicker_enabled": false,
"light_frequency": 0,
"bit_depth": 8,
"hdr_enabled": false,
"hdr_gain_mode": 0
}
]
}

api/container_init.sh Executable file

@@ -0,0 +1,29 @@
#!/bin/bash
# Container initialization script for USDA Vision Camera System
# This script sets up and starts the systemd service in a container environment
echo "🐳 Container Init - USDA Vision Camera System"
echo "============================================="
# Start systemd if not already running (for containers)
if ! pgrep systemd > /dev/null; then
echo "🔧 Starting systemd..."
/sbin/init &  # launch systemd in the background so this script can continue
sleep 5
fi
# Setup the service if not already installed
if [ ! -f "/etc/systemd/system/usda-vision-camera.service" ]; then
echo "📦 Setting up USDA Vision Camera service..."
cd /home/alireza/USDA-vision-cameras
sudo ./setup_service.sh
fi
# Start the service
echo "🚀 Starting USDA Vision Camera service..."
sudo systemctl start usda-vision-camera
# Follow the logs
echo "📋 Following service logs (Ctrl+C to exit)..."
sudo journalctl -u usda-vision-camera -f

api/convert_avi_to_mp4.sh Executable file
View File

@@ -0,0 +1,182 @@
#!/bin/bash
# Script to convert AVI files to MP4 using H.264 codec
# Converts files in /storage directory and saves them in the same location
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# Function to print colored output
print_status() {
echo -e "${BLUE}[INFO]${NC} $1"
}
print_success() {
echo -e "${GREEN}[SUCCESS]${NC} $1"
}
print_warning() {
echo -e "${YELLOW}[WARNING]${NC} $1"
}
print_error() {
echo -e "${RED}[ERROR]${NC} $1"
}
# Function to get video duration in seconds
get_duration() {
local file="$1"
ffprobe -v quiet -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 "$file" 2>/dev/null | cut -d. -f1
}
# Function to show progress bar
show_progress() {
local current=$1
local total=$2
local width=50
local percentage=$((current * 100 / total))
local filled=$((current * width / total))
local empty=$((width - filled))
printf "\r["
printf "%*s" $filled | tr ' ' '='
printf "%*s" $empty | tr ' ' '-'
printf "] %d%% (%ds/%ds)" $percentage $current $total
}
# Check if ffmpeg is installed
if ! command -v ffmpeg &> /dev/null; then
print_error "ffmpeg is not installed. Please install ffmpeg first."
exit 1
fi
# Check if /storage directory exists
if [ ! -d "/storage" ]; then
print_error "/storage directory does not exist."
exit 1
fi
# Check if we have read/write permissions to /storage
if [ ! -r "/storage" ] || [ ! -w "/storage" ]; then
print_error "No read/write permissions for /storage directory."
exit 1
fi
print_status "Starting AVI to MP4 conversion in /storage directory..."
# Counter variables
total_files=0
converted_files=0
skipped_files=0
failed_files=0
# Find all AVI files in /storage directory (including subdirectories)
while IFS= read -r -d '' avi_file; do
total_files=$((total_files + 1))
# Get the directory and filename without extension
dir_path=$(dirname "$avi_file")
filename=$(basename "$avi_file" .avi)
mp4_file="$dir_path/$filename.mp4"
print_status "Processing: $avi_file"
# Check if MP4 file already exists
if [ -f "$mp4_file" ]; then
print_warning "MP4 file already exists: $mp4_file (skipping)"
skipped_files=$((skipped_files + 1))
continue
fi
# Get video duration for progress calculation
duration=$(get_duration "$avi_file")
if [ -z "$duration" ] || [ "$duration" -eq 0 ]; then
print_warning "Could not determine video duration, converting without progress bar..."
# Fallback to simple conversion without progress
if ffmpeg -i "$avi_file" -c:v libx264 -c:a aac -preset medium -crf 18 -nostdin "$mp4_file" -y 2>/dev/null; then
echo
print_success "Converted: $avi_file -> $mp4_file"
converted_files=$((converted_files + 1))
else
echo
print_error "Failed to convert: $avi_file"
failed_files=$((failed_files + 1))
fi
continue
fi
# Convert AVI to MP4 using H.264 codec with 95% quality (CRF 18) and show progress
echo "Converting... (Duration: ${duration}s)"
# Create a temporary file for ffmpeg progress
progress_file=$(mktemp)
# Start ffmpeg conversion in background with progress output
ffmpeg -i "$avi_file" -c:v libx264 -c:a aac -preset medium -crf 18 \
-progress "$progress_file" -nostats -loglevel 0 -nostdin "$mp4_file" -y &
ffmpeg_pid=$!
# Monitor progress
while kill -0 $ffmpeg_pid 2>/dev/null; do
if [ -f "$progress_file" ]; then
# Extract current time from progress file
current_time=$(tail -n 10 "$progress_file" 2>/dev/null | grep "out_time_ms=" | tail -n 1 | cut -d= -f2)
if [ -n "$current_time" ] && [ "$current_time" != "N/A" ]; then
# Convert microseconds to seconds
current_seconds=$((current_time / 1000000))
if [ "$current_seconds" -gt 0 ] && [ "$current_seconds" -le "$duration" ]; then
show_progress $current_seconds $duration
fi
fi
fi
sleep 0.5
done
# Wait for ffmpeg to complete and get exit status
wait $ffmpeg_pid
ffmpeg_exit_code=$?
# Clean up progress file
rm -f "$progress_file"
# Check if conversion was successful
if [ $ffmpeg_exit_code -eq 0 ] && [ -f "$mp4_file" ]; then
show_progress $duration $duration # Show 100% completion
echo
print_success "Converted: $avi_file -> $mp4_file"
converted_files=$((converted_files + 1))
# Optional: Remove original AVI file (uncomment the next line if you want this)
# rm "$avi_file"
else
echo
print_error "Failed to convert: $avi_file"
failed_files=$((failed_files + 1))
# Clean up incomplete file
[ -f "$mp4_file" ] && rm "$mp4_file"
fi
echo # Add blank line between files
done < <(find /storage -name "*.avi" -type f -print0)
# Print summary
echo
print_status "=== CONVERSION SUMMARY ==="
echo "Total AVI files found: $total_files"
echo "Successfully converted: $converted_files"
echo "Skipped (MP4 exists): $skipped_files"
echo "Failed conversions: $failed_files"
if [ $total_files -eq 0 ]; then
print_warning "No AVI files found in /storage directory."
elif [ $failed_files -eq 0 ] && [ $converted_files -gt 0 ]; then
print_success "All conversions completed successfully!"
elif [ $failed_files -gt 0 ]; then
print_warning "Some conversions failed. Check the output above for details."
fi


@@ -0,0 +1,415 @@
# 🤖 AI Agent Video Integration Guide
This guide provides comprehensive step-by-step instructions for AI agents and external systems to successfully integrate with the USDA Vision Camera System's video streaming functionality.
## 🎯 Overview
The USDA Vision Camera System provides a complete video streaming API that allows AI agents to:
- Browse and select videos from multiple cameras
- Stream videos with seeking capabilities
- Generate thumbnails for preview
- Access video metadata and technical information
## 🔗 API Base Configuration
### Connection Details
```bash
# Default API Base URL
API_BASE_URL="http://localhost:8000"
# For remote access, replace with actual server IP/hostname
API_BASE_URL="http://192.168.1.100:8000"
```
### Authentication
**⚠️ IMPORTANT: No authentication is currently required.**
- All endpoints are publicly accessible
- No API keys or tokens needed
- CORS is enabled for web browser integration
## 📋 Step-by-Step Integration Workflow
### Step 1: Verify System Connectivity
```bash
# Test basic connectivity
curl -f "${API_BASE_URL}/health" || echo "❌ System not accessible"
# Check system status
curl "${API_BASE_URL}/system/status"
```
**Expected Response (`/health`):**
```json
{
"status": "healthy",
"timestamp": "2025-08-05T10:30:00Z"
}
```
### Step 2: List Available Videos
```bash
# Get all videos with metadata
curl "${API_BASE_URL}/videos/?include_metadata=true&limit=50"
# Filter by specific camera
curl "${API_BASE_URL}/videos/?camera_name=camera1&include_metadata=true"
# Filter by date range
curl "${API_BASE_URL}/videos/?start_date=2025-08-04T00:00:00&end_date=2025-08-05T23:59:59"
```
**Response Structure:**
```json
{
"videos": [
{
"file_id": "camera1_auto_blower_separator_20250804_143022.mp4",
"camera_name": "camera1",
"filename": "camera1_auto_blower_separator_20250804_143022.mp4",
"file_size_bytes": 31457280,
"format": "mp4",
"status": "completed",
"created_at": "2025-08-04T14:30:22",
"start_time": "2025-08-04T14:30:22",
"end_time": "2025-08-04T14:32:22",
"machine_trigger": "blower_separator",
"is_streamable": true,
"needs_conversion": false,
"metadata": {
"duration_seconds": 120.5,
"width": 1920,
"height": 1080,
"fps": 30.0,
"codec": "mp4v",
"bitrate": 5000000,
"aspect_ratio": 1.777
}
}
],
"total_count": 1
}
```
### Step 3: Select and Validate Video
```bash
# Get detailed video information
FILE_ID="camera1_auto_blower_separator_20250804_143022.mp4"
curl "${API_BASE_URL}/videos/${FILE_ID}"
# Validate video is playable
curl -X POST "${API_BASE_URL}/videos/${FILE_ID}/validate"
# Get streaming technical details
curl "${API_BASE_URL}/videos/${FILE_ID}/info"
```
### Step 4: Generate Video Thumbnail
```bash
# Generate thumbnail at 5 seconds, 320x240 resolution
curl "${API_BASE_URL}/videos/${FILE_ID}/thumbnail?timestamp=5.0&width=320&height=240" \
--output "thumbnail_${FILE_ID}.jpg"
# Generate multiple thumbnails for preview
for timestamp in 1 30 60 90; do
curl "${API_BASE_URL}/videos/${FILE_ID}/thumbnail?timestamp=${timestamp}&width=160&height=120" \
--output "preview_${timestamp}s.jpg"
done
```
### Step 5: Stream Video Content
```bash
# Stream entire video
curl "${API_BASE_URL}/videos/${FILE_ID}/stream" --output "video.mp4"
# Stream specific byte range (for seeking)
curl -H "Range: bytes=0-1048575" \
"${API_BASE_URL}/videos/${FILE_ID}/stream" \
--output "video_chunk.mp4"
# Test range request support
curl -I -H "Range: bytes=0-1023" \
"${API_BASE_URL}/videos/${FILE_ID}/stream"
```
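Range requests also let a client walk a large file in fixed-size chunks. A sketch, assuming the server returns a standard `Content-Range: bytes start-end/total` header on 206 responses (which the range test above exercises):

```python
import requests

def download_in_chunks(base_url: str, file_id: str, chunk_mb: int = 1) -> bytes:
    """Fetch a video via sequential Range requests (resumable-friendly)."""
    url = f"{base_url}/videos/{file_id}/stream"
    # Probe with a 1-byte range to learn the total size from Content-Range
    probe = requests.get(url, headers={"Range": "bytes=0-0"})
    probe.raise_for_status()
    total = int(probe.headers["Content-Range"].rsplit("/", 1)[-1])  # "bytes 0-0/N"
    chunk = chunk_mb * 1024 * 1024
    data = bytearray()
    for start in range(0, total, chunk):
        end = min(start + chunk - 1, total - 1)
        part = requests.get(url, headers={"Range": f"bytes={start}-{end}"})
        part.raise_for_status()
        data.extend(part.content)
    return bytes(data)
```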
## 🔧 Programming Language Examples
### Python Integration
```python
import requests
import json
from typing import List, Dict, Optional
class USDAVideoClient:
def __init__(self, base_url: str = "http://localhost:8000"):
self.base_url = base_url.rstrip('/')
self.session = requests.Session()
def list_videos(self, camera_name: Optional[str] = None,
include_metadata: bool = True, limit: int = 50) -> Dict:
"""List available videos with optional filtering."""
params = {
'include_metadata': include_metadata,
'limit': limit
}
if camera_name:
params['camera_name'] = camera_name
response = self.session.get(f"{self.base_url}/videos/", params=params)
response.raise_for_status()
return response.json()
def get_video_info(self, file_id: str) -> Dict:
"""Get detailed video information."""
response = self.session.get(f"{self.base_url}/videos/{file_id}")
response.raise_for_status()
return response.json()
def get_thumbnail(self, file_id: str, timestamp: float = 1.0,
width: int = 320, height: int = 240) -> bytes:
"""Generate and download video thumbnail."""
params = {
'timestamp': timestamp,
'width': width,
'height': height
}
response = self.session.get(
f"{self.base_url}/videos/{file_id}/thumbnail",
params=params
)
response.raise_for_status()
return response.content
def stream_video_range(self, file_id: str, start_byte: int,
end_byte: int) -> bytes:
"""Stream specific byte range of video."""
headers = {'Range': f'bytes={start_byte}-{end_byte}'}
response = self.session.get(
f"{self.base_url}/videos/{file_id}/stream",
headers=headers
)
response.raise_for_status()
return response.content
def validate_video(self, file_id: str) -> bool:
"""Validate that video is accessible and playable."""
response = self.session.post(f"{self.base_url}/videos/{file_id}/validate")
response.raise_for_status()
return response.json().get('is_valid', False)
# Usage example
client = USDAVideoClient("http://192.168.1.100:8000")
# List videos from camera1
videos = client.list_videos(camera_name="camera1")
print(f"Found {videos['total_count']} videos")
# Select first video
if videos['videos']:
video = videos['videos'][0]
file_id = video['file_id']
# Validate video
if client.validate_video(file_id):
print(f"✅ Video {file_id} is valid")
# Get thumbnail
thumbnail = client.get_thumbnail(file_id, timestamp=5.0)
with open(f"thumbnail_{file_id}.jpg", "wb") as f:
f.write(thumbnail)
# Stream first 1MB
chunk = client.stream_video_range(file_id, 0, 1048575)
print(f"Downloaded {len(chunk)} bytes")
```
### JavaScript/Node.js Integration
```javascript
class USDAVideoClient {
constructor(baseUrl = 'http://localhost:8000') {
this.baseUrl = baseUrl.replace(/\/$/, '');
}
async listVideos(options = {}) {
const params = new URLSearchParams({
include_metadata: options.includeMetadata ?? true,
limit: options.limit || 50
});
if (options.cameraName) {
params.append('camera_name', options.cameraName);
}
const response = await fetch(`${this.baseUrl}/videos/?${params}`);
if (!response.ok) throw new Error(`HTTP ${response.status}`);
return response.json();
}
async getVideoInfo(fileId) {
const response = await fetch(`${this.baseUrl}/videos/${fileId}`);
if (!response.ok) throw new Error(`HTTP ${response.status}`);
return response.json();
}
async getThumbnail(fileId, options = {}) {
const params = new URLSearchParams({
timestamp: options.timestamp || 1.0,
width: options.width || 320,
height: options.height || 240
});
const response = await fetch(
`${this.baseUrl}/videos/${fileId}/thumbnail?${params}`
);
if (!response.ok) throw new Error(`HTTP ${response.status}`);
return response.blob();
}
async validateVideo(fileId) {
const response = await fetch(
`${this.baseUrl}/videos/${fileId}/validate`,
{ method: 'POST' }
);
if (!response.ok) throw new Error(`HTTP ${response.status}`);
const result = await response.json();
return result.is_valid;
}
getStreamUrl(fileId) {
return `${this.baseUrl}/videos/${fileId}/stream`;
}
}
// Usage example
const client = new USDAVideoClient('http://192.168.1.100:8000');
async function integrateWithVideos() {
try {
// List videos
const videos = await client.listVideos({ cameraName: 'camera1' });
console.log(`Found ${videos.total_count} videos`);
if (videos.videos.length > 0) {
const video = videos.videos[0];
const fileId = video.file_id;
// Validate video
const isValid = await client.validateVideo(fileId);
if (isValid) {
console.log(`✅ Video ${fileId} is valid`);
// Get thumbnail
const thumbnail = await client.getThumbnail(fileId, {
timestamp: 5.0,
width: 320,
height: 240
});
// Create video element for playback
const videoElement = document.createElement('video');
videoElement.controls = true;
videoElement.src = client.getStreamUrl(fileId);
document.body.appendChild(videoElement);
}
}
} catch (error) {
console.error('Integration error:', error);
}
}
```
## 🚨 Error Handling
### Common HTTP Status Codes
```bash
# Success responses
200 # OK - Request successful
206 # Partial Content - Range request successful
# Client error responses
400 # Bad Request - Invalid parameters
404 # Not Found - Video file doesn't exist
416 # Range Not Satisfiable - Invalid range request
# Server error responses
500 # Internal Server Error - Failed to process video
503 # Service Unavailable - Video module not available
```
### Error Response Format
```json
{
"detail": "Video camera1_recording_20250804_143022.avi not found"
}
```
### Robust Error Handling Example
```python
def safe_video_operation(client, file_id):
try:
# Validate video first
if not client.validate_video(file_id):
return {"error": "Video is not valid or accessible"}
# Get video info
video_info = client.get_video_info(file_id)
# Check if streamable
if not video_info.get('is_streamable', False):
return {"error": "Video is not streamable"}
return {"success": True, "video_info": video_info}
except requests.exceptions.HTTPError as e:
if e.response.status_code == 404:
return {"error": "Video not found"}
elif e.response.status_code == 416:
return {"error": "Invalid range request"}
else:
return {"error": f"HTTP error: {e.response.status_code}"}
except requests.exceptions.ConnectionError:
return {"error": "Cannot connect to video server"}
except Exception as e:
return {"error": f"Unexpected error: {str(e)}"}
```
## ✅ Integration Checklist
### Pre-Integration
- [ ] Verify network connectivity to USDA Vision Camera System
- [ ] Test basic API endpoints (`/health`, `/system/status`)
- [ ] Understand video file naming conventions
- [ ] Plan error handling strategy
### Video Selection
- [ ] Implement video listing with appropriate filters
- [ ] Add video validation before processing
- [ ] Handle pagination for large video collections
- [ ] Implement caching for video metadata
### Video Playback
- [ ] Test video streaming with range requests
- [ ] Implement thumbnail generation for previews
- [ ] Add progress tracking for video playback
- [ ] Handle different video formats (MP4, AVI)
### Error Handling
- [ ] Handle network connectivity issues
- [ ] Manage video not found scenarios
- [ ] Deal with invalid range requests
- [ ] Implement retry logic for transient failures (see the sketch after this checklist)
### Performance
- [ ] Use range requests for efficient seeking
- [ ] Implement client-side caching where appropriate
- [ ] Monitor bandwidth usage for video streaming
- [ ] Consider thumbnail caching for better UX
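For the retry item above, a minimal backoff wrapper is usually enough (a sketch assuming the `requests`-based client from earlier; tune attempts and delays to your network):

```python
import time
import requests

def with_retries(fn, attempts: int = 3, base_delay: float = 1.0):
    """Call fn(), retrying on transient network errors with exponential backoff."""
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except (requests.ConnectionError, requests.Timeout):
            if attempt == attempts:
                raise  # out of attempts: surface the error to the caller
            time.sleep(base_delay * 2 ** (attempt - 1))  # 1s, 2s, 4s, ...

# Example: videos = with_retries(lambda: client.list_videos(camera_name="camera1"))
```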
## 🎯 Next Steps
1. **Test Integration**: Use the provided examples to test basic connectivity
2. **Implement Error Handling**: Add robust error handling for production use
3. **Optimize Performance**: Implement caching and efficient streaming
4. **Monitor Usage**: Track API usage and performance metrics
5. **Security Review**: Consider authentication if exposing externally
This guide provides everything needed for successful integration with the USDA Vision Camera System's video streaming functionality. The system is designed to be simple and reliable for AI agents and external systems to consume video content efficiently.


@@ -0,0 +1,207 @@
# API Changes Summary: Camera Settings and Video Format Updates
## Overview
This document tracks major API changes including camera settings enhancements and the MP4 video format update.
## 🎥 Latest Update: MP4 Video Format (v2.1)
**Date**: August 2025
**Major Changes**:
- **Video Format**: Changed from AVI/XVID to MP4/MPEG-4 format
- **File Extensions**: New recordings use `.mp4` instead of `.avi`
- **File Size**: ~40% reduction in file sizes
- **Streaming**: Better web browser compatibility
**New Configuration Fields**:
```json
{
"video_format": "mp4", // File format: "mp4" or "avi"
"video_codec": "mp4v", // Video codec: "mp4v", "XVID", "MJPG"
"video_quality": 95 // Quality: 0-100 (higher = better)
}
```
**Frontend Impact**:
- ✅ Better streaming performance and browser support
- ✅ Smaller file sizes for faster transfers
- ✅ Universal HTML5 video player compatibility
- ✅ Backward compatible with existing AVI files
**Documentation**: See [MP4 Format Update Guide](MP4_FORMAT_UPDATE.md)
---
## Previous Changes: Camera Settings and Filename Handling
Enhanced the `POST /cameras/{camera_name}/start-recording` API endpoint to accept optional camera settings (shutter speed/exposure, gain, and fps) and ensure all filenames have datetime prefixes.
## Changes Made
### 1. API Models (`usda_vision_system/api/models.py`)
- **Enhanced `StartRecordingRequest`** to include optional parameters:
- `exposure_ms: Optional[float]` - Exposure time in milliseconds
- `gain: Optional[float]` - Camera gain value
- `fps: Optional[float]` - Target frames per second
### 2. Camera Recorder (`usda_vision_system/camera/recorder.py`)
- **Added `update_camera_settings()` method** to dynamically update camera settings:
- Updates exposure time using `mvsdk.CameraSetExposureTime()`
- Updates gain using `mvsdk.CameraSetAnalogGain()`
- Updates target FPS in camera configuration
- Logs all setting changes
- Returns boolean indicating success/failure
### 3. Camera Manager (`usda_vision_system/camera/manager.py`)
- **Enhanced `manual_start_recording()` method** to accept new parameters:
- Added optional `exposure_ms`, `gain`, and `fps` parameters
- Calls `update_camera_settings()` if any settings are provided
- **Automatic datetime prefix**: Always prepends timestamp to filename
- If custom filename provided: `{timestamp}_{custom_filename}`
- If no filename provided: `{camera_name}_manual_{timestamp}.avi`
### 4. API Server (`usda_vision_system/api/server.py`)
- **Updated start-recording endpoint** to:
- Pass new camera settings to camera manager
- Handle filename response with datetime prefix
- Maintain backward compatibility with existing requests
### 5. API Tests (`api-tests.http`)
- **Added comprehensive test examples**:
- Basic recording (existing functionality)
- Recording with camera settings
- Recording with settings only (no filename)
- Different parameter combinations
## Usage Examples
### Basic Recording (unchanged)
```http
POST http://localhost:8000/cameras/camera1/start-recording
Content-Type: application/json
{
"camera_name": "camera1",
"filename": "test.avi"
}
```
**Result**: File saved as `20241223_143022_test.avi`
### Recording with Camera Settings
```http
POST http://localhost:8000/cameras/camera1/start-recording
Content-Type: application/json
{
"camera_name": "camera1",
"filename": "high_quality.avi",
"exposure_ms": 2.0,
"gain": 4.0,
"fps": 5.0
}
```
**Result**:
- Camera settings updated before recording
- File saved as `20241223_143022_high_quality.avi`
### Maximum FPS Recording
```http
POST http://localhost:8000/cameras/camera1/start-recording
Content-Type: application/json
{
"camera_name": "camera1",
"filename": "max_speed.avi",
"exposure_ms": 0.1,
"gain": 1.0,
"fps": 0
}
```
**Result**:
- Camera captures at maximum possible speed (no delay between frames)
- Video file saved with 30 FPS metadata for proper playback
- Actual capture rate depends on camera hardware and exposure settings
### Settings Only (no filename)
```http
POST http://localhost:8000/cameras/camera1/start-recording
Content-Type: application/json
{
"camera_name": "camera1",
"exposure_ms": 1.5,
"gain": 3.0,
"fps": 7.0
}
```
**Result**:
- Camera settings updated
- File saved as `camera1_manual_20241223_143022.avi`
## Key Features
### 1. **Backward Compatibility**
- All existing API calls continue to work unchanged
- New parameters are optional
- Default behavior preserved when no settings provided
### 2. **Automatic Datetime Prefix**
- **ALL filenames now have datetime prefix** regardless of what's sent
- Format: `YYYYMMDD_HHMMSS_` (Atlanta timezone)
- Ensures unique filenames and chronological ordering
### 3. **Dynamic Camera Settings**
- Settings can be changed per recording without restarting system
- Based on proven implementation from `old tests/camera_video_recorder.py`
- Proper error handling and logging
### 4. **Maximum FPS Capture**
- **`fps: 0`** = Capture at maximum possible speed (no delay between frames)
- **`fps > 0`** = Capture at specified frame rate with controlled timing
- **`fps` omitted** = Uses camera config default (usually 3.0 fps)
- Video files saved with 30 FPS metadata when fps=0 for proper playback
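A behavioral sketch of this pacing (not the recorder's actual code; `grab_frame` and `write_frame` are stand-ins for the SDK capture call and the video writer):

```python
import time

def capture_loop(grab_frame, write_frame, fps: float, duration_s: float) -> None:
    """fps <= 0: free-run with no delay; fps > 0: hold each frame to its time slot."""
    frame_interval = 1.0 / fps if fps > 0 else 0.0
    end_time = time.monotonic() + duration_s
    while time.monotonic() < end_time:
        started = time.monotonic()
        write_frame(grab_frame())
        # Sleep only for whatever remains of this frame's slot
        remaining = frame_interval - (time.monotonic() - started)
        if remaining > 0:
            time.sleep(remaining)
```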
### 5. **Parameter Validation**
- Uses Pydantic models for automatic validation
- Optional parameters with proper type checking
- Descriptive field documentation
## Testing
Run the test script to verify functionality:
```bash
# Start the system first
python main.py
# In another terminal, run tests
python test_api_changes.py
```
The test script verifies:
- Basic recording functionality
- Camera settings application
- Filename datetime prefix handling
- API response accuracy
## Implementation Notes
### Camera Settings Mapping
- **Exposure**: Converted from milliseconds to microseconds for SDK
- **Gain**: Converted to camera units (multiplied by 100)
- **FPS**: Stored in camera config, used by recording loop
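In code, the mapping amounts to two unit conversions before the SDK calls (a sketch; `hCamera` is the handle returned by camera initialization elsewhere in the system):

```python
import mvsdk  # GigE SDK wrapper from camera_sdk/

def apply_recording_settings(hCamera, exposure_ms: float, gain: float) -> None:
    """Convert API units to SDK units as described above."""
    mvsdk.CameraSetExposureTime(hCamera, exposure_ms * 1000)  # milliseconds -> microseconds
    mvsdk.CameraSetAnalogGain(hCamera, int(gain * 100))       # camera units = gain * 100
```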
### Error Handling
- Settings update failures are logged but don't prevent recording
- Invalid camera names return appropriate HTTP errors
- Camera initialization failures are handled gracefully
### Filename Generation
- Uses `format_filename_timestamp()` from timezone utilities
- Ensures Atlanta timezone consistency
- Handles both custom and auto-generated filenames
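The resulting behavior can be summarized in a few lines (a sketch; the real implementation is `format_filename_timestamp()` in the timezone utilities, and the extension follows the configured video format):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

def prefixed_filename(camera_name: str, custom: str | None = None, ext: str = "mp4") -> str:
    """Every filename gets a YYYYMMDD_HHMMSS_ prefix in Atlanta time."""
    stamp = datetime.now(ZoneInfo("America/New_York")).strftime("%Y%m%d_%H%M%S")
    return f"{stamp}_{custom}" if custom else f"{camera_name}_manual_{stamp}.{ext}"
```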
## Similar to Old Implementation
The camera settings functionality mirrors the proven approach in `old tests/camera_video_recorder.py`:
- Same parameter names and ranges
- Same SDK function calls
- Same conversion factors
- Proven to work with the camera hardware


@@ -0,0 +1,824 @@
# 🚀 USDA Vision Camera System - Complete API Documentation
This document provides comprehensive documentation for all API endpoints in the USDA Vision Camera System, including recent enhancements and new features.
## 📋 Table of Contents
- [🔧 System Status & Health](#-system-status--health)
- [📷 Camera Management](#-camera-management)
- [🎥 Recording Control](#-recording-control)
- [🤖 Auto-Recording Management](#-auto-recording-management)
- [🎛️ Camera Configuration](#-camera-configuration)
- [📡 MQTT & Machine Status](#-mqtt--machine-status)
- [💾 Storage & File Management](#-storage--file-management)
- [🔄 Camera Recovery & Diagnostics](#-camera-recovery--diagnostics)
- [📺 Live Streaming](#-live-streaming)
- [🎬 Video Streaming & Playback](#-video-streaming--playback)
- [🌐 WebSocket Real-time Updates](#-websocket-real-time-updates)
## 🔧 System Status & Health
### Get System Status
```http
GET /system/status
```
**Response**: `SystemStatusResponse`
```json
{
"system_started": true,
"mqtt_connected": true,
"last_mqtt_message": "2024-01-15T10:30:00Z",
"machines": {
"vibratory_conveyor": {
"name": "vibratory_conveyor",
"state": "ON",
"last_updated": "2024-01-15T10:30:00Z"
}
},
"cameras": {
"camera1": {
"name": "camera1",
"status": "ACTIVE",
"is_recording": false,
"auto_recording_enabled": true
}
},
"active_recordings": 0,
"total_recordings": 15,
"uptime_seconds": 3600.5
}
```
### Health Check
```http
GET /health
```
**Response**: Simple health status
```json
{
"status": "healthy",
"timestamp": "2024-01-15T10:30:00Z"
}
```
## 📷 Camera Management
### Get All Cameras
```http
GET /cameras
```
**Response**: `Dict[str, CameraStatusResponse]`
### Get Specific Camera Status
```http
GET /cameras/{camera_name}/status
```
**Response**: `CameraStatusResponse`
```json
{
"name": "camera1",
"status": "ACTIVE",
"is_recording": false,
"last_checked": "2024-01-15T10:30:00Z",
"last_error": null,
"device_info": {
"model": "GigE Camera",
"serial": "12345"
},
"current_recording_file": null,
"recording_start_time": null,
"auto_recording_enabled": true,
"auto_recording_active": false,
"auto_recording_failure_count": 0,
"auto_recording_last_attempt": null,
"auto_recording_last_error": null
}
```
## 🎥 Recording Control
### Start Recording
```http
POST /cameras/{camera_name}/start-recording
Content-Type: application/json
{
"filename": "test_recording.avi",
"exposure_ms": 2.0,
"gain": 4.0,
"fps": 5.0
}
```
**Request Model**: `StartRecordingRequest`
- `filename` (optional): Custom filename (datetime prefix will be added automatically)
- `exposure_ms` (optional): Exposure time in milliseconds
- `gain` (optional): Camera gain value
- `fps` (optional): Target frames per second
**Response**: `StartRecordingResponse`
```json
{
"success": true,
"message": "Recording started for camera1",
"filename": "20240115_103000_test_recording.avi"
}
```
**Key Features**:
- ✅ **Automatic datetime prefix**: All filenames get `YYYYMMDD_HHMMSS_` prefix
- ✅ **Dynamic camera settings**: Adjust exposure, gain, and FPS per recording
- ✅ **Backward compatibility**: All existing API calls work unchanged
### Stop Recording
```http
POST /cameras/{camera_name}/stop-recording
```
**Response**: `StopRecordingResponse`
```json
{
"success": true,
"message": "Recording stopped for camera1",
"duration_seconds": 45.2
}
```
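For programmatic control, a minimal Python sketch of the same start/stop flow using the third-party `requests` package (the base URL assumes the default development server):
```python
import requests

BASE = "http://localhost:8000"

def start_recording(camera: str, **settings) -> str:
    # settings may include filename, exposure_ms, gain, fps
    r = requests.post(f"{BASE}/cameras/{camera}/start-recording", json=settings)
    r.raise_for_status()
    return r.json()["filename"]  # includes the datetime prefix

def stop_recording(camera: str) -> float:
    r = requests.post(f"{BASE}/cameras/{camera}/stop-recording")
    r.raise_for_status()
    return r.json()["duration_seconds"]

# filename = start_recording("camera1", exposure_ms=2.0, gain=4.0, fps=5.0)
```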
## 🤖 Auto-Recording Management
### Enable Auto-Recording for Camera
```http
POST /cameras/{camera_name}/auto-recording/enable
```
**Response**: `AutoRecordingConfigResponse`
```json
{
"success": true,
"message": "Auto-recording enabled for camera1",
"camera_name": "camera1",
"enabled": true
}
```
### Disable Auto-Recording for Camera
```http
POST /cameras/{camera_name}/auto-recording/disable
```
**Response**: `AutoRecordingConfigResponse`
### Get Auto-Recording Status
```http
GET /auto-recording/status
```
**Response**: `AutoRecordingStatusResponse`
```json
{
"running": true,
"auto_recording_enabled": true,
"retry_queue": {},
"enabled_cameras": ["camera1", "camera2"]
}
```
**Auto-Recording Features**:
- 🤖 **MQTT-triggered recording**: Automatically starts/stops based on machine state
- 🔄 **Retry logic**: Failed recordings are retried with configurable delays
- 📊 **Per-camera control**: Enable/disable auto-recording individually
- 📈 **Status tracking**: Monitor failure counts and last attempts
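A schematic of the retry behavior described above (the parameter names mirror the per-camera config fields; the control flow is illustrative, not the system's actual implementation):
```python
import time

def start_with_retries(start_fn, max_retries: int = 3, delay_s: float = 2.0) -> bool:
    """Attempt to start a recording, retrying on failure with a fixed delay."""
    for attempt in range(1, max_retries + 1):
        if start_fn():
            return True
        if attempt < max_retries:
            time.sleep(delay_s)  # auto_recording_retry_delay_seconds
    return False
```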
## 🎛️ Camera Configuration
### Get Camera Configuration
```http
GET /cameras/{camera_name}/config
```
**Response**: `CameraConfigResponse`
```json
{
"name": "camera1",
"machine_topic": "blower_separator",
"storage_path": "/storage/camera1",
"exposure_ms": 0.3,
"gain": 4.0,
"target_fps": 0,
"enabled": true,
"video_format": "mp4",
"video_codec": "mp4v",
"video_quality": 95,
"auto_start_recording_enabled": true,
"auto_recording_max_retries": 3,
"auto_recording_retry_delay_seconds": 2,
"contrast": 100,
"saturation": 100,
"gamma": 100,
"noise_filter_enabled": false,
"denoise_3d_enabled": false,
"auto_white_balance": false,
"color_temperature_preset": 0,
"wb_red_gain": 0.94,
"wb_green_gain": 1.0,
"wb_blue_gain": 0.87,
"anti_flicker_enabled": false,
"light_frequency": 0,
"bit_depth": 8,
"hdr_enabled": false,
"hdr_gain_mode": 2
}
```
### Update Camera Configuration
```http
PUT /cameras/{camera_name}/config
Content-Type: application/json
{
"exposure_ms": 2.0,
"gain": 4.0,
"target_fps": 5.0,
"sharpness": 130
}
```
### Apply Configuration (Restart Required)
```http
POST /cameras/{camera_name}/apply-config
```
**Configuration Categories**:
- ✅ **Real-time**: `exposure_ms`, `gain`, `target_fps`, `sharpness`, `contrast`, etc.
- ⚠️ **Restart required**: `noise_filter_enabled`, `denoise_3d_enabled`, `bit_depth`, `video_format`, `video_codec`, `video_quality`
For detailed configuration options, see [Camera Configuration API Guide](api/CAMERA_CONFIG_API.md).
## 📡 MQTT & Machine Status
### Get All Machines
```http
GET /machines
```
**Response**: `Dict[str, MachineStatusResponse]`
### Get MQTT Status
```http
GET /mqtt/status
```
**Response**: `MQTTStatusResponse`
```json
{
"connected": true,
"broker_host": "192.168.1.110",
"broker_port": 1883,
"subscribed_topics": ["vibratory_conveyor", "blower_separator"],
"last_message_time": "2024-01-15T10:30:00Z",
"message_count": 1250,
"error_count": 2,
"uptime_seconds": 3600.5
}
```
### Get MQTT Events History
```http
GET /mqtt/events?limit=10
```
**Response**: `MQTTEventsHistoryResponse`
```json
{
"events": [
{
"machine_name": "vibratory_conveyor",
"topic": "vibratory_conveyor",
"payload": "ON",
"normalized_state": "ON",
"timestamp": "2024-01-15T10:30:00Z",
"message_number": 1250
}
],
"total_events": 1250,
"last_updated": "2024-01-15T10:30:00Z"
}
```
## 💾 Storage & File Management
### Get Storage Statistics
```http
GET /storage/stats
```
**Response**: `StorageStatsResponse`
```json
{
"base_path": "/storage",
"total_files": 150,
"total_size_bytes": 5368709120,
"cameras": {
"camera1": {
"file_count": 75,
"total_size_bytes": 2684354560
},
"camera2": {
"file_count": 75,
"total_size_bytes": 2684354560
}
},
"disk_usage": {
"total_bytes": 107374182400,
"used_bytes": 53687091200,
"free_bytes": 53687091200,
"usage_percent": 50.0
}
}
```
### Get File List
```http
POST /storage/files
Content-Type: application/json
{
"camera_name": "camera1",
"start_date": "2024-01-15",
"end_date": "2024-01-16",
"limit": 50
}
```
**Response**: `FileListResponse`
```json
{
"files": [
{
"filename": "20240115_103000_test_recording.avi",
"camera_name": "camera1",
"size_bytes": 52428800,
"created_time": "2024-01-15T10:30:00Z",
"duration_seconds": 30.5
}
],
"total_count": 1
}
```
### Cleanup Old Files
```http
POST /storage/cleanup
Content-Type: application/json
{
"max_age_days": 30
}
```
**Response**: `CleanupResponse`
```json
{
"files_removed": 25,
"bytes_freed": 1073741824,
"errors": []
}
```
## 🔄 Camera Recovery & Diagnostics
### Test Camera Connection
```http
POST /cameras/{camera_name}/test-connection
```
**Response**: `CameraTestResponse`
### Reconnect Camera
```http
POST /cameras/{camera_name}/reconnect
```
**Response**: `CameraRecoveryResponse`
### Restart Camera Grab Process
```http
POST /cameras/{camera_name}/restart-grab
```
**Response**: `CameraRecoveryResponse`
### Reset Camera Timestamp
```http
POST /cameras/{camera_name}/reset-timestamp
```
**Response**: `CameraRecoveryResponse`
### Full Camera Reset
```http
POST /cameras/{camera_name}/full-reset
```
**Response**: `CameraRecoveryResponse`
### Reinitialize Camera
```http
POST /cameras/{camera_name}/reinitialize
```
**Response**: `CameraRecoveryResponse`
**Recovery Response Example**:
```json
{
"success": true,
"message": "Camera camera1 reconnected successfully",
"camera_name": "camera1",
"operation": "reconnect",
"timestamp": "2024-01-15T10:30:00Z"
}
```
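The recovery endpoints above compose naturally into an escalation ladder; a hedged Python sketch (the ordering is a suggestion, not a documented requirement):
```python
import requests

BASE = "http://localhost:8000"

def recover_camera(camera: str) -> str:
    """Try the least disruptive recovery operation first, escalating on failure."""
    for op in ("test-connection", "reconnect", "restart-grab", "full-reset"):
        r = requests.post(f"{BASE}/cameras/{camera}/{op}")
        if r.ok and r.json().get("success"):
            return op  # first operation that reports success
    raise RuntimeError(f"All recovery operations failed for {camera}")
```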
## 📺 Live Streaming
### Get Live MJPEG Stream
```http
GET /cameras/{camera_name}/stream
```
**Response**: MJPEG video stream (multipart/x-mixed-replace)
### Start Camera Stream
```http
POST /cameras/{camera_name}/start-stream
```
### Stop Camera Stream
```http
POST /cameras/{camera_name}/stop-stream
```
**Streaming Features**:
- 📺 **MJPEG format**: Compatible with web browsers and React apps
- 🔄 **Concurrent operation**: Stream while recording simultaneously
- ⚡ **Low latency**: Real-time preview for monitoring
For detailed streaming integration, see [Streaming Guide](guides/STREAMING_GUIDE.md).
## 🎬 Video Streaming & Playback
The system includes a comprehensive video streaming module that provides YouTube-like video playback capabilities with HTTP range request support, thumbnail generation, and intelligent caching.
### List Videos
```http
GET /videos/
```
**Query Parameters:**
- `camera_name` (optional): Filter by camera name
- `start_date` (optional): Filter videos created after this date (ISO format)
- `end_date` (optional): Filter videos created before this date (ISO format)
- `limit` (optional): Maximum number of results (default: 50, max: 1000)
- `include_metadata` (optional): Include video metadata (default: false)
**Response**: `VideoListResponse`
```json
{
"videos": [
{
"file_id": "camera1_auto_blower_separator_20250804_143022.mp4",
"camera_name": "camera1",
"filename": "camera1_auto_blower_separator_20250804_143022.mp4",
"file_size_bytes": 31457280,
"format": "mp4",
"status": "completed",
"created_at": "2025-08-04T14:30:22",
"start_time": "2025-08-04T14:30:22",
"end_time": "2025-08-04T14:32:22",
"machine_trigger": "blower_separator",
"is_streamable": true,
"needs_conversion": false,
"metadata": {
"duration_seconds": 120.5,
"width": 1920,
"height": 1080,
"fps": 30.0,
"codec": "mp4v",
"bitrate": 5000000,
"aspect_ratio": 1.777
}
}
],
"total_count": 1
}
```
### Get Video Information
```http
GET /videos/{file_id}
```
**Response**: `VideoInfoResponse` with detailed video information including metadata.
### Stream Video
```http
GET /videos/{file_id}/stream
```
**Headers:**
- `Range: bytes=0-1023` (optional): Request specific byte range for seeking
**Features:**
- ✅ **HTTP Range Requests**: Enables video seeking and progressive download
- ✅ **Partial Content**: Returns 206 status for range requests
- ✅ **Format Conversion**: Automatic AVI to MP4 conversion for web compatibility
- ✅ **Intelligent Caching**: Optimized performance with byte-range caching
- ✅ **CORS Enabled**: Ready for web browser integration
**Response Headers:**
- `Accept-Ranges: bytes`
- `Content-Length: {size}`
- `Content-Range: bytes {start}-{end}/{total}` (for range requests)
- `Cache-Control: public, max-age=3600`
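A short Python illustration of a range request against this endpoint (the `file_id` is a placeholder):
```python
import requests

url = "http://localhost:8000/videos/camera1_recording_20250804_143022.avi/stream"
resp = requests.get(url, headers={"Range": "bytes=0-1023"})

assert resp.status_code == 206           # Partial Content
print(resp.headers["Content-Range"])     # e.g. bytes 0-1023/52428800
print(len(resp.content))                 # 1024 bytes
```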
### Get Video Thumbnail
```http
GET /videos/{file_id}/thumbnail?timestamp=5.0&width=320&height=240
```
**Query Parameters:**
- `timestamp` (optional): Time position in seconds (default: 1.0)
- `width` (optional): Thumbnail width in pixels (default: 320)
- `height` (optional): Thumbnail height in pixels (default: 240)
**Response**: JPEG image data with caching headers
### Get Streaming Information
```http
GET /videos/{file_id}/info
```
**Response**: `StreamingInfoResponse`
```json
{
"file_id": "camera1_recording_20250804_143022.avi",
"file_size_bytes": 52428800,
"content_type": "video/mp4",
"supports_range_requests": true,
"chunk_size_bytes": 262144
}
```
### Video Validation
```http
POST /videos/{file_id}/validate
```
**Response**: Validation status and accessibility check
```json
{
"file_id": "camera1_recording_20250804_143022.avi",
"is_valid": true
}
```
### Cache Management
```http
POST /videos/{file_id}/cache/invalidate
```
**Response**: Cache invalidation status
```json
{
"file_id": "camera1_recording_20250804_143022.avi",
"cache_invalidated": true
}
```
### Admin: Cache Cleanup
```http
POST /admin/videos/cache/cleanup?max_size_mb=100
```
**Response**: Cache cleanup results
```json
{
"cache_cleaned": true,
"entries_removed": 15,
"max_size_mb": 100
}
```
**Video Streaming Features**:
- 🎥 **Multiple Formats**: Native MP4 support with AVI conversion
- 📱 **Web Compatible**: Direct integration with HTML5 video elements
- ⚡ **High Performance**: Intelligent caching and adaptive chunking
- 🖼️ **Thumbnail Generation**: Extract preview images at any timestamp
- 🔄 **Range Requests**: Efficient seeking and progressive download
## 🌐 WebSocket Real-time Updates
### Connect to WebSocket
```javascript
const ws = new WebSocket('ws://localhost:8000/ws');
ws.onmessage = (event) => {
const update = JSON.parse(event.data);
console.log('Real-time update:', update);
};
```
**WebSocket Message Types**:
- `system_status`: System status changes
- `camera_status`: Camera status updates
- `recording_started`: Recording start events
- `recording_stopped`: Recording stop events
- `mqtt_message`: MQTT message received
- `auto_recording_event`: Auto-recording status changes
**Example WebSocket Message**:
```json
{
"type": "recording_started",
"data": {
"camera_name": "camera1",
"filename": "20240115_103000_auto_recording.avi",
"timestamp": "2024-01-15T10:30:00Z"
},
"timestamp": "2024-01-15T10:30:00Z"
}
```
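Outside the browser, the same feed can be consumed from Python; a sketch using the third-party `websockets` package (an assumption, not a system dependency):
```python
import asyncio
import json

import websockets  # pip install websockets

async def watch_updates():
    async with websockets.connect("ws://localhost:8000/ws") as ws:
        async for raw in ws:
            update = json.loads(raw)
            if update["type"] in ("recording_started", "recording_stopped"):
                print(update["type"], update["data"]["camera_name"])

# asyncio.run(watch_updates())
```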
## 🚀 Quick Start Examples
### Basic System Monitoring
```bash
# Check system health
curl http://localhost:8000/health
# Get overall system status
curl http://localhost:8000/system/status
# Get all camera statuses
curl http://localhost:8000/cameras
```
### Manual Recording Control
```bash
# Start recording with default settings
curl -X POST http://localhost:8000/cameras/camera1/start-recording \
-H "Content-Type: application/json" \
-d '{"filename": "manual_test.avi"}'
# Start recording with custom camera settings
curl -X POST http://localhost:8000/cameras/camera1/start-recording \
-H "Content-Type: application/json" \
-d '{
"filename": "high_quality.avi",
"exposure_ms": 2.0,
"gain": 4.0,
"fps": 5.0
}'
# Stop recording
curl -X POST http://localhost:8000/cameras/camera1/stop-recording
```
### Auto-Recording Management
```bash
# Enable auto-recording for camera1
curl -X POST http://localhost:8000/cameras/camera1/auto-recording/enable
# Check auto-recording status
curl http://localhost:8000/auto-recording/status
# Disable auto-recording for camera1
curl -X POST http://localhost:8000/cameras/camera1/auto-recording/disable
```
### Video Streaming Operations
```bash
# List all videos
curl http://localhost:8000/videos/
# List videos from specific camera with metadata
curl "http://localhost:8000/videos/?camera_name=camera1&include_metadata=true"
# Get video information
curl http://localhost:8000/videos/camera1_recording_20250804_143022.avi
# Get video thumbnail
curl "http://localhost:8000/videos/camera1_recording_20250804_143022.avi/thumbnail?timestamp=5.0&width=320&height=240" \
--output thumbnail.jpg
# Get streaming info
curl http://localhost:8000/videos/camera1_recording_20250804_143022.avi/info
# Stream video with range request
curl -H "Range: bytes=0-1023" \
http://localhost:8000/videos/camera1_recording_20250804_143022.avi/stream
# Validate video file
curl -X POST http://localhost:8000/videos/camera1_recording_20250804_143022.avi/validate
# Clean up video cache (admin)
curl -X POST "http://localhost:8000/admin/videos/cache/cleanup?max_size_mb=100"
```
### Camera Configuration
```bash
# Get current camera configuration
curl http://localhost:8000/cameras/camera1/config
# Update camera settings (real-time)
curl -X PUT http://localhost:8000/cameras/camera1/config \
-H "Content-Type: application/json" \
-d '{
"exposure_ms": 1.5,
"gain": 3.0,
"sharpness": 130,
"contrast": 120
}'
```
## 📈 Recent API Changes & Enhancements
### ✨ New in Latest Version
#### 1. Enhanced Recording API
- **Dynamic camera settings**: Set exposure, gain, and FPS per recording
- **Automatic datetime prefixes**: All filenames get timestamp prefixes
- **Backward compatibility**: Existing API calls work unchanged
#### 2. Auto-Recording Feature
- **Per-camera control**: Enable/disable auto-recording individually
- **MQTT integration**: Automatic recording based on machine states
- **Retry logic**: Failed recordings are automatically retried
- **Status tracking**: Monitor auto-recording attempts and failures
#### 3. Advanced Camera Configuration
- **Real-time settings**: Update exposure, gain, image quality without restart
- **Image enhancement**: Sharpness, contrast, saturation, gamma controls
- **Noise reduction**: Configurable noise filtering and 3D denoising
- **HDR support**: High Dynamic Range imaging capabilities
#### 4. Live Streaming
- **MJPEG streaming**: Real-time camera preview
- **Concurrent operation**: Stream while recording simultaneously
- **Web-compatible**: Direct integration with React/HTML video elements
#### 5. Enhanced Monitoring
- **MQTT event history**: Track machine state changes over time
- **Storage statistics**: Monitor disk usage and file counts
- **WebSocket updates**: Real-time system status notifications
#### 6. Video Streaming Module
- **HTTP Range Requests**: Efficient video seeking and progressive download
- **Thumbnail Generation**: Extract preview images from videos at any timestamp
- **Format Conversion**: Automatic AVI to MP4 conversion for web compatibility
- **Intelligent Caching**: Byte-range caching for optimal streaming performance
- **Admin Tools**: Cache management and video validation endpoints
### 🔄 Migration Notes
#### From Previous Versions
1. **Recording API**: All existing calls work, but now return filenames with datetime prefixes
2. **Configuration**: New camera settings are optional and backward compatible
3. **Auto-recording**: New feature; must be enabled globally in `config.json` and per camera
#### Configuration Updates
```json
{
"cameras": [
{
"name": "camera1",
"auto_start_recording_enabled": true, // NEW: Enable auto-recording
"sharpness": 120, // NEW: Image quality settings
"contrast": 110,
"saturation": 100,
"gamma": 100,
"noise_filter_enabled": true,
"hdr_enabled": false
}
],
"system": {
"auto_recording_enabled": true // NEW: Global auto-recording toggle
}
}
```
## 🔗 Related Documentation
- [📷 Camera Configuration API Guide](api/CAMERA_CONFIG_API.md) - Detailed camera settings
- [🤖 Auto-Recording Feature Guide](features/AUTO_RECORDING_FEATURE_GUIDE.md) - React integration
- [📺 Streaming Guide](guides/STREAMING_GUIDE.md) - Live video streaming
- [🎬 Video Streaming Guide](VIDEO_STREAMING.md) - Video playback and streaming
- [🤖 AI Agent Video Integration Guide](AI_AGENT_VIDEO_INTEGRATION_GUIDE.md) - Complete integration guide for AI agents
- [🔧 Camera Recovery Guide](guides/CAMERA_RECOVERY_GUIDE.md) - Troubleshooting
- [📡 MQTT Logging Guide](guides/MQTT_LOGGING_GUIDE.md) - MQTT configuration
## 📞 Support & Integration
### API Base URL
- **Development**: `http://localhost:8000`
- **Production**: Configure in `config.json` under `system.api_host` and `system.api_port`
### Error Handling
All endpoints return standard HTTP status codes:
- `200`: Success
- `206`: Partial Content (for video range requests)
- `400`: Bad Request (invalid parameters)
- `404`: Resource not found (camera, file, video, etc.)
- `416`: Range Not Satisfiable (invalid video range request)
- `500`: Internal server error
- `503`: Service unavailable (camera manager, MQTT, etc.)
**Video Streaming Specific Errors:**
- `404`: Video file not found or not streamable
- `416`: Invalid range request (malformed Range header)
- `500`: Failed to read video data or generate thumbnail
### Rate Limiting
- No rate limiting currently implemented
- WebSocket connections are limited to reasonable concurrent connections
### CORS Support
- CORS is enabled for web dashboard integration
- Configure allowed origins in the API server settings

api/docs/API_QUICK_REFERENCE.md Normal file

@@ -0,0 +1,195 @@
# 🚀 USDA Vision Camera System - API Quick Reference
Quick reference for the most commonly used API endpoints. For complete documentation, see [API_DOCUMENTATION.md](API_DOCUMENTATION.md).
## 🔧 System Status
```bash
# Health check
curl http://localhost:8000/health
# System overview
curl http://localhost:8000/system/status
# All cameras
curl http://localhost:8000/cameras
# All machines
curl http://localhost:8000/machines
```
## 🎥 Recording Control
### Start Recording (Basic)
```bash
curl -X POST http://localhost:8000/cameras/camera1/start-recording \
-H "Content-Type: application/json" \
-d '{"filename": "test.avi"}'
```
### Start Recording (With Settings)
```bash
curl -X POST http://localhost:8000/cameras/camera1/start-recording \
-H "Content-Type: application/json" \
-d '{
"filename": "high_quality.avi",
"exposure_ms": 2.0,
"gain": 4.0,
"fps": 5.0
}'
```
### Stop Recording
```bash
curl -X POST http://localhost:8000/cameras/camera1/stop-recording
```
## 🤖 Auto-Recording
```bash
# Enable auto-recording
curl -X POST http://localhost:8000/cameras/camera1/auto-recording/enable
# Disable auto-recording
curl -X POST http://localhost:8000/cameras/camera1/auto-recording/disable
# Check auto-recording status
curl http://localhost:8000/auto-recording/status
```
## 🎛️ Camera Configuration
```bash
# Get camera config
curl http://localhost:8000/cameras/camera1/config
# Update camera settings
curl -X PUT http://localhost:8000/cameras/camera1/config \
-H "Content-Type: application/json" \
-d '{
"exposure_ms": 1.5,
"gain": 3.0,
"sharpness": 130
}'
```
## 📺 Live Streaming
```bash
# Start streaming
curl -X POST http://localhost:8000/cameras/camera1/start-stream
# Get MJPEG stream (use in browser/video element)
# http://localhost:8000/cameras/camera1/stream
# Stop streaming
curl -X POST http://localhost:8000/cameras/camera1/stop-stream
```
## 🔄 Camera Recovery
```bash
# Test connection
curl -X POST http://localhost:8000/cameras/camera1/test-connection
# Reconnect camera
curl -X POST http://localhost:8000/cameras/camera1/reconnect
# Full reset
curl -X POST http://localhost:8000/cameras/camera1/full-reset
```
## 💾 Storage Management
```bash
# Storage statistics
curl http://localhost:8000/storage/stats
# List files
curl -X POST http://localhost:8000/storage/files \
-H "Content-Type: application/json" \
-d '{"camera_name": "camera1", "limit": 10}'
# Cleanup old files
curl -X POST http://localhost:8000/storage/cleanup \
-H "Content-Type: application/json" \
-d '{"max_age_days": 30}'
```
## 📡 MQTT Monitoring
```bash
# MQTT status
curl http://localhost:8000/mqtt/status
# Recent MQTT events
curl http://localhost:8000/mqtt/events?limit=10
```
## 🌐 WebSocket Connection
```javascript
// Connect to real-time updates
const ws = new WebSocket('ws://localhost:8000/ws');
ws.onmessage = (event) => {
const update = JSON.parse(event.data);
console.log('Update:', update);
};
```
## 📊 Response Examples
### System Status Response
```json
{
"system_started": true,
"mqtt_connected": true,
"cameras": {
"camera1": {
"name": "camera1",
"status": "ACTIVE",
"is_recording": false,
"auto_recording_enabled": true
}
},
"active_recordings": 0,
"total_recordings": 15
}
```
### Recording Start Response
```json
{
"success": true,
"message": "Recording started for camera1",
"filename": "20240115_103000_test.avi"
}
```
### Camera Status Response
```json
{
"name": "camera1",
"status": "ACTIVE",
"is_recording": false,
"auto_recording_enabled": true,
"auto_recording_active": false,
"auto_recording_failure_count": 0
}
```
## 🔗 Related Documentation
- [📚 Complete API Documentation](API_DOCUMENTATION.md)
- [🎛️ Camera Configuration Guide](api/CAMERA_CONFIG_API.md)
- [🤖 Auto-Recording Feature Guide](features/AUTO_RECORDING_FEATURE_GUIDE.md)
- [📺 Streaming Guide](guides/STREAMING_GUIDE.md)
## 💡 Tips
- All filenames automatically get datetime prefixes: `YYYYMMDD_HHMMSS_`
- Camera settings can be updated in real-time during recording
- Auto-recording is controlled per camera and globally
- WebSocket provides real-time updates for dashboard integration
- CORS is enabled for web application integration

api/docs/CURRENT_CONFIGURATION.md Normal file

@@ -0,0 +1,217 @@
# 📋 Current System Configuration Reference
## Overview
This document shows the exact current configuration structure of the USDA Vision Camera System, including all fields and their current values.
## 🔧 Complete Configuration Structure
### System Configuration (`config.json`)
```json
{
"mqtt": {
"broker_host": "192.168.1.110",
"broker_port": 1883,
"username": null,
"password": null,
"topics": {
"vibratory_conveyor": "vision/vibratory_conveyor/state",
"blower_separator": "vision/blower_separator/state"
}
},
"storage": {
"base_path": "/storage",
"max_file_size_mb": 1000,
"max_recording_duration_minutes": 60,
"cleanup_older_than_days": 30
},
"system": {
"camera_check_interval_seconds": 2,
"log_level": "DEBUG",
"log_file": "usda_vision_system.log",
"api_host": "0.0.0.0",
"api_port": 8000,
"enable_api": true,
"timezone": "America/New_York",
"auto_recording_enabled": true
},
"cameras": [
{
"name": "camera1",
"machine_topic": "blower_separator",
"storage_path": "/storage/camera1",
"exposure_ms": 0.3,
"gain": 4.0,
"target_fps": 0,
"enabled": true,
"video_format": "mp4",
"video_codec": "mp4v",
"video_quality": 95,
"auto_start_recording_enabled": true,
"auto_recording_max_retries": 3,
"auto_recording_retry_delay_seconds": 2,
"sharpness": 0,
"contrast": 100,
"saturation": 100,
"gamma": 100,
"noise_filter_enabled": false,
"denoise_3d_enabled": false,
"auto_white_balance": false,
"color_temperature_preset": 0,
"wb_red_gain": 0.94,
"wb_green_gain": 1.0,
"wb_blue_gain": 0.87,
"anti_flicker_enabled": false,
"light_frequency": 0,
"bit_depth": 8,
"hdr_enabled": false,
"hdr_gain_mode": 2
},
{
"name": "camera2",
"machine_topic": "vibratory_conveyor",
"storage_path": "/storage/camera2",
"exposure_ms": 0.2,
"gain": 2.0,
"target_fps": 0,
"enabled": true,
"video_format": "mp4",
"video_codec": "mp4v",
"video_quality": 95,
"auto_start_recording_enabled": true,
"auto_recording_max_retries": 3,
"auto_recording_retry_delay_seconds": 2,
"sharpness": 0,
"contrast": 100,
"saturation": 100,
"gamma": 100,
"noise_filter_enabled": false,
"denoise_3d_enabled": false,
"auto_white_balance": false,
"color_temperature_preset": 0,
"wb_red_gain": 1.01,
"wb_green_gain": 1.0,
"wb_blue_gain": 0.87,
"anti_flicker_enabled": false,
"light_frequency": 0,
"bit_depth": 8,
"hdr_enabled": false,
"hdr_gain_mode": 0
}
]
}
```
## 📊 Configuration Field Reference
### MQTT Settings
| Field | Value | Description |
|-------|-------|-------------|
| `broker_host` | `"192.168.1.110"` | MQTT broker IP address |
| `broker_port` | `1883` | MQTT broker port |
| `username` | `null` | MQTT authentication (not used) |
| `password` | `null` | MQTT authentication (not used) |
### MQTT Topics
| Machine | Topic | Camera |
|---------|-------|--------|
| Vibratory Conveyor | `vision/vibratory_conveyor/state` | camera2 |
| Blower Separator | `vision/blower_separator/state` | camera1 |
### Storage Settings
| Field | Value | Description |
|-------|-------|-------------|
| `base_path` | `"/storage"` | Root storage directory |
| `max_file_size_mb` | `1000` | Maximum file size (1GB) |
| `max_recording_duration_minutes` | `60` | Maximum recording duration |
| `cleanup_older_than_days` | `30` | Auto-cleanup threshold |
### System Settings
| Field | Value | Description |
|-------|-------|-------------|
| `camera_check_interval_seconds` | `2` | Camera health check interval |
| `log_level` | `"DEBUG"` | Logging verbosity |
| `api_host` | `"0.0.0.0"` | API server bind address |
| `api_port` | `8000` | API server port |
| `timezone` | `"America/New_York"` | System timezone |
| `auto_recording_enabled` | `true` | Enable MQTT-triggered recording |
## 🎥 Camera Configuration Details
### Camera 1 (Blower Separator)
| Setting | Value | Description |
|---------|-------|-------------|
| **Basic Settings** | | |
| `name` | `"camera1"` | Camera identifier |
| `machine_topic` | `"blower_separator"` | MQTT topic to monitor |
| `storage_path` | `"/storage/camera1"` | Video storage location |
| `exposure_ms` | `0.3` | Exposure time (milliseconds) |
| `gain` | `4.0` | Camera gain multiplier |
| `target_fps` | `0` | Target FPS (0 = unlimited) |
| **Video Recording** | | |
| `video_format` | `"mp4"` | Video file format |
| `video_codec` | `"mp4v"` | Video codec (MPEG-4) |
| `video_quality` | `95` | Video quality (0-100) |
| **Auto Recording** | | |
| `auto_start_recording_enabled` | `true` | Enable auto-recording |
| `auto_recording_max_retries` | `3` | Max retry attempts |
| `auto_recording_retry_delay_seconds` | `2` | Delay between retries |
| **Image Quality** | | |
| `sharpness` | `0` | Sharpness adjustment |
| `contrast` | `100` | Contrast level |
| `saturation` | `100` | Color saturation |
| `gamma` | `100` | Gamma correction |
| **White Balance** | | |
| `auto_white_balance` | `false` | Auto white balance disabled |
| `wb_red_gain` | `0.94` | Red channel gain |
| `wb_green_gain` | `1.0` | Green channel gain |
| `wb_blue_gain` | `0.87` | Blue channel gain |
| **Advanced** | | |
| `bit_depth` | `8` | Color bit depth |
| `hdr_enabled` | `false` | HDR disabled |
| `hdr_gain_mode` | `2` | HDR gain mode |
### Camera 2 (Vibratory Conveyor)
| Setting | Value | Difference from Camera 1 |
|---------|-------|--------------------------|
| `name` | `"camera2"` | Different identifier |
| `machine_topic` | `"vibratory_conveyor"` | Different MQTT topic |
| `storage_path` | `"/storage/camera2"` | Different storage path |
| `exposure_ms` | `0.2` | Faster exposure (0.2 vs 0.3) |
| `gain` | `2.0` | Lower gain (2.0 vs 4.0) |
| `wb_red_gain` | `1.01` | Different red balance (1.01 vs 0.94) |
| `hdr_gain_mode` | `0` | Different HDR mode (0 vs 2) |
*All other settings are identical to Camera 1*
## 🔄 Recent Changes
### MP4 Format Update
- **Added**: `video_format`, `video_codec`, `video_quality` fields
- **Changed**: Default recording format from AVI to MP4
- **Impact**: Requires service restart to take effect
### Current Status
- ✅ Configuration updated with MP4 settings
- ⚠️ Service restart required to apply changes
- 📁 Existing AVI files remain accessible
## 📝 Notes
1. **Target FPS = 0**: Both cameras use unlimited frame rate for maximum capture speed
2. **Auto Recording**: Both cameras automatically start recording when their respective machines turn on
3. **White Balance**: Manual white balance settings optimized for each camera's environment
4. **Storage**: Each camera has its own dedicated storage directory
5. **Video Quality**: Set to 95/100 for high-quality recordings with MP4 compression benefits
## 🔧 Configuration Management
To modify these settings:
1. Edit `config.json` file
2. Restart the camera service: `sudo ./start_system.sh`
3. Verify changes via API: `GET /cameras/{camera_name}/config`
For real-time settings (exposure, gain, fps), use the API without restart:
```bash
curl -X PUT http://localhost:8000/cameras/camera1/config \
-H "Content-Type: application/json" \
-d '{"exposure_ms": 1.5, "gain": 3.0, "target_fps": 5.0}'
```

api/docs/MP4_FORMAT_UPDATE.md Normal file

@@ -0,0 +1,211 @@
# 🎥 MP4 Video Format Update - Frontend Integration Guide
## Overview
The USDA Vision Camera System has been updated to record videos in **MP4 format** instead of AVI format for better streaming compatibility and smaller file sizes.
## 🔄 What Changed
### Video Format
- **Before**: AVI files with XVID codec (`.avi` extension)
- **After**: MP4 files with MPEG-4 codec (`.mp4` extension)
### File Extensions
- All new video recordings now use `.mp4` extension
- Existing `.avi` files remain accessible and functional
- File size reduction: ~40% smaller than equivalent AVI files
### API Response Updates
New fields added to camera configuration responses:
```json
{
"video_format": "mp4", // File format: "mp4" or "avi"
"video_codec": "mp4v", // Video codec: "mp4v", "XVID", "MJPG"
"video_quality": 95 // Quality: 0-100 (higher = better)
}
```
## 🌐 Frontend Impact
### 1. Video Player Compatibility
**✅ Better Browser Support**
- MP4 format has native support in all modern browsers
- No need for additional codecs or plugins
- Better mobile device compatibility (iOS/Android)
### 2. File Handling Updates
**File Extension Handling**
```javascript
// Update file extension checks
const isVideoFile = (filename) => {
return filename.endsWith('.mp4') || filename.endsWith('.avi');
};
// Video MIME type detection
const getVideoMimeType = (filename) => {
if (filename.endsWith('.mp4')) return 'video/mp4';
if (filename.endsWith('.avi')) return 'video/x-msvideo';
return 'video/mp4'; // default
};
```
### 3. Video Streaming
**Improved Streaming Performance**
```javascript
// MP4 files can be streamed directly without conversion
const videoUrl = `/api/videos/${videoId}/stream`;
// For HTML5 video element
<video controls>
<source src={videoUrl} type="video/mp4" />
Your browser does not support the video tag.
</video>
```
### 4. File Size Display
**Updated Size Expectations**
- MP4 files are ~40% smaller than equivalent AVI files
- Update any file size warnings or storage calculations
- Better compression means faster downloads and uploads
## 📡 API Changes
### Camera Configuration Endpoint
**GET** `/cameras/{camera_name}/config`
**New Response Fields:**
```json
{
"name": "camera1",
"machine_topic": "blower_separator",
"storage_path": "/storage/camera1",
"exposure_ms": 0.3,
"gain": 4.0,
"target_fps": 0,
"enabled": true,
"video_format": "mp4",
"video_codec": "mp4v",
"video_quality": 95,
"auto_start_recording_enabled": true,
"auto_recording_max_retries": 3,
"auto_recording_retry_delay_seconds": 2,
// ... other existing fields
}
```
### Video Listing Endpoints
**File Extension Updates**
- Video files in responses will now have `.mp4` extensions
- Existing `.avi` files will still appear in listings
- Filter by both extensions when needed
## 🔧 Configuration Options
### Video Format Settings
```json
{
"video_format": "mp4", // Options: "mp4", "avi"
"video_codec": "mp4v", // Options: "mp4v", "XVID", "MJPG"
"video_quality": 95 // Range: 0-100 (higher = better quality)
}
```
### Recommended Settings
- **Production**: `"mp4"` format, `"mp4v"` codec, `95` quality
- **Storage Optimized**: `"mp4"` format, `"mp4v"` codec, `85` quality
- **Legacy Mode**: `"avi"` format, `"XVID"` codec, `95` quality
## 🎯 Frontend Implementation Checklist
### ✅ Video Player Updates
- [ ] Verify HTML5 video player works with MP4 files
- [ ] Update video MIME type handling
- [ ] Test streaming performance with new format
### ✅ File Management
- [ ] Update file extension filters to include `.mp4`
- [ ] Modify file type detection logic
- [ ] Update download/upload handling for MP4 files
### ✅ UI/UX Updates
- [ ] Update file size expectations in UI
- [ ] Modify any format-specific icons or indicators
- [ ] Update help text or tooltips mentioning video formats
### ✅ Configuration Interface
- [ ] Add video format settings to camera config UI
- [ ] Include video quality slider/selector
- [ ] Add restart warning for video format changes
### ✅ Testing
- [ ] Test video playback with new MP4 files
- [ ] Verify backward compatibility with existing AVI files
- [ ] Test streaming performance and loading times
## 🔄 Backward Compatibility
### Existing AVI Files
- All existing `.avi` files remain fully functional
- No conversion or migration required
- Video player should handle both formats
### API Compatibility
- All existing API endpoints continue to work
- New fields are additive (won't break existing code)
- Default values provided for new configuration fields
## 📊 Performance Benefits
### File Size Reduction
```
Example 5-minute recording at 1280x1024:
- AVI/XVID: ~180 MB
- MP4/MPEG-4: ~108 MB (40% reduction)
```
### Streaming Improvements
- Faster initial load times
- Better progressive download support
- Reduced bandwidth usage
- Native browser optimization
### Storage Efficiency
- More recordings fit in same storage space
- Faster backup and transfer operations
- Reduced storage costs over time
## 🚨 Important Notes
### Restart Required
- Video format changes require camera service restart
- Mark video format settings as "restart required" in UI
- Provide clear user feedback about restart necessity
### Browser Compatibility
- MP4 format supported in all modern browsers
- Better mobile device support than AVI
- No additional plugins or codecs needed
### Quality Assurance
- Video quality maintained at 95/100 setting
- No visual degradation compared to AVI
- High bitrate ensures professional quality
## 🔗 Related Documentation
- [API Documentation](API_DOCUMENTATION.md) - Complete API reference
- [Camera Configuration API](api/CAMERA_CONFIG_API.md) - Detailed config options
- [Video Streaming Guide](VIDEO_STREAMING.md) - Streaming implementation
- [MP4 Conversion Summary](../MP4_CONVERSION_SUMMARY.md) - Technical details
## 📞 Support
If you encounter any issues with the MP4 format update:
1. **Video Playback Issues**: Check browser console for codec errors
2. **File Size Concerns**: Verify quality settings in camera config
3. **Streaming Problems**: Test with both MP4 and AVI files for comparison
4. **API Integration**: Refer to updated API documentation
The MP4 format provides better web compatibility and performance while maintaining the same high video quality required for the USDA vision system.

api/docs/PROJECT_COMPLETE.md Normal file

@@ -0,0 +1,212 @@
# 🎉 USDA Vision Camera System - PROJECT COMPLETE!
## ✅ Final Status: READY FOR PRODUCTION
The USDA Vision Camera System has been successfully implemented, tested, and documented. All requirements have been met and the system is production-ready.
## 📋 Completed Requirements
### ✅ Core Functionality
- **MQTT Integration**: Dual topic listening for machine states
- **Automatic Recording**: Camera recording triggered by machine on/off states
- **GigE Camera Support**: Full integration with camera SDK library
- **Multi-threading**: Concurrent MQTT + camera monitoring + recording
- **File Management**: Timestamp-based naming in organized directories
### ✅ Advanced Features
- **REST API**: Complete FastAPI server with all endpoints
- **WebSocket Support**: Real-time updates for dashboard integration
- **Time Synchronization**: Atlanta, Georgia timezone with NTP sync
- **Storage Management**: File indexing, cleanup, and statistics
- **Comprehensive Logging**: Rotating logs with error tracking
- **Configuration System**: JSON-based configuration management
### ✅ Documentation & Testing
- **Complete README**: Installation, usage, API docs, troubleshooting
- **Test Suite**: Comprehensive system testing (`test_system.py`)
- **Time Verification**: Timezone and sync testing (`check_time.py`)
- **Startup Scripts**: Easy deployment with `start_system.sh`
- **Clean Repository**: Organized structure with proper .gitignore
## 🏗️ Final Project Structure
```
USDA-Vision-Cameras/
├── README.md # Complete documentation
├── main.py # System entry point
├── config.json # System configuration
├── requirements.txt # Python dependencies
├── pyproject.toml # UV package configuration
├── .gitignore # Git ignore rules
├── start_system.sh # Startup script
├── setup_timezone.sh # Time sync setup
├── test_system.py # System test suite
├── check_time.py # Time verification
├── test_timezone.py # Timezone testing
├── usda_vision_system/ # Main application
│ ├── core/ # Core functionality
│ ├── mqtt/ # MQTT integration
│ ├── camera/ # Camera management
│ ├── storage/ # File management
│ ├── api/ # REST API server
│ └── main.py # Application coordinator
├── camera_sdk/ # GigE camera SDK library
├── demos/ # Demo and example code
│ ├── cv_grab*.py # Camera SDK usage examples
│ └── mqtt_*.py # MQTT demo scripts
├── storage/ # Recording storage
│ ├── camera1/ # Camera 1 recordings
│ └── camera2/ # Camera 2 recordings
├── tests/ # Test files and legacy tests
├── notebooks/ # Jupyter notebooks
└── docs/ # Documentation files
```
## 🚀 How to Deploy
### 1. Clone and Setup
```bash
git clone https://github.com/your-username/USDA-Vision-Cameras.git
cd USDA-Vision-Cameras
uv sync
```
### 2. Configure System
```bash
# Edit config.json for your environment
# Set MQTT broker, camera settings, storage paths
```
### 3. Setup Time Sync
```bash
./setup_timezone.sh
```
### 4. Test System
```bash
python test_system.py
```
### 5. Start System
```bash
./start_system.sh
```
## 🌐 API Integration
### Dashboard Integration
```javascript
// React component example
const systemStatus = await fetch('http://localhost:8000/system/status');
const cameras = await fetch('http://localhost:8000/cameras');
// WebSocket for real-time updates
const ws = new WebSocket('ws://localhost:8000/ws');
ws.onmessage = (event) => {
const update = JSON.parse(event.data);
// Handle real-time system updates
};
```
### Manual Control
```bash
# Start recording manually
curl -X POST http://localhost:8000/cameras/camera1/start-recording
# Stop recording manually
curl -X POST http://localhost:8000/cameras/camera1/stop-recording
# Get system status
curl http://localhost:8000/system/status
```
## 📊 System Capabilities
### Discovered Hardware
- **2 GigE Cameras**: Blower-Yield-Cam, Cracker-Cam
- **Network Ready**: Cameras accessible at 192.168.1.165, 192.168.1.167
- **MQTT Ready**: Configured for broker at 192.168.1.110
### Recording Features
- **Automatic Start/Stop**: Based on MQTT machine states
- **Timezone Aware**: Atlanta time timestamps (EST/EDT)
- **Organized Storage**: Separate directories per camera
- **File Naming**: `camera1_recording_20250725_213000.avi`
- **Manual Control**: API endpoints for manual recording
### Monitoring Features
- **Real-time Status**: Camera and machine state monitoring
- **Health Checks**: Automatic system health verification
- **Performance Tracking**: Recording metrics and system stats
- **Error Handling**: Comprehensive error tracking and recovery
## 🔧 Maintenance
### Regular Tasks
- **Log Monitoring**: Check `usda_vision_system.log`
- **Storage Cleanup**: Automatic cleanup of old recordings
- **Time Sync**: Automatic NTP synchronization
- **Health Checks**: Built-in system monitoring
### Troubleshooting
- **Test Suite**: `python test_system.py`
- **Time Check**: `python check_time.py`
- **API Health**: `curl http://localhost:8000/health`
- **Debug Mode**: `python main.py --log-level DEBUG`
## 🎯 Production Readiness
### ✅ All Tests Passing
- System initialization: ✅
- Camera discovery: ✅ (2 cameras found)
- MQTT configuration: ✅
- Storage setup: ✅
- Time synchronization: ✅
- API endpoints: ✅
### ✅ Documentation Complete
- Installation guide: ✅
- Configuration reference: ✅
- API documentation: ✅
- Troubleshooting guide: ✅
- Integration examples: ✅
### ✅ Production Features
- Error handling: ✅
- Logging system: ✅
- Time synchronization: ✅
- Storage management: ✅
- API security: ✅
- Performance monitoring: ✅
## 🚀 Next Steps
The system is now ready for:
1. **Production Deployment**: Deploy on target hardware
2. **Dashboard Integration**: Connect to React + Supabase dashboard
3. **MQTT Configuration**: Connect to production MQTT broker
4. **Camera Calibration**: Fine-tune camera settings for production
5. **Monitoring Setup**: Configure production monitoring and alerts
## 📞 Support
For ongoing support:
- **Documentation**: Complete README.md with troubleshooting
- **Test Suite**: Comprehensive diagnostic tools
- **Logging**: Detailed system logs for debugging
- **API Health**: Built-in health check endpoints
---
**🎊 PROJECT STATUS: COMPLETE AND PRODUCTION-READY! 🎊**
The USDA Vision Camera System is fully implemented, tested, and documented. All original requirements have been met, and the system is ready for production deployment with your React dashboard integration.
**Key Achievements:**
- ✅ Dual MQTT topic monitoring
- ✅ Automatic camera recording
- ✅ Atlanta timezone synchronization
- ✅ Complete REST API
- ✅ Comprehensive documentation
- ✅ Production-ready deployment

api/docs/REACT_INTEGRATION_GUIDE.md Normal file

@@ -0,0 +1,276 @@
# 🚀 React Frontend Integration Guide - MP4 Update
## 🎯 Quick Summary for React Team
The camera system now records in **MP4 format** instead of AVI. This provides better web compatibility and smaller file sizes.
## 🔄 What You Need to Update
### 1. File Extension Handling
```javascript
// OLD: Only checked for .avi
const isVideoFile = (filename) => filename.endsWith('.avi');
// NEW: Check for both formats
const isVideoFile = (filename) => {
return filename.endsWith('.mp4') || filename.endsWith('.avi');
};
// Video MIME types
const getVideoMimeType = (filename) => {
if (filename.endsWith('.mp4')) return 'video/mp4';
if (filename.endsWith('.avi')) return 'video/x-msvideo';
return 'video/mp4'; // default for new files
};
```
### 2. Video Player Component
```jsx
// MP4 files work better with HTML5 video
const VideoPlayer = ({ videoUrl, filename }) => {
const mimeType = getVideoMimeType(filename);
return (
<video controls width="100%" height="auto">
<source src={videoUrl} type={mimeType} />
Your browser does not support the video tag.
</video>
);
};
```
### 3. Camera Configuration Interface
Add these new fields to your camera config forms:
```jsx
const CameraConfigForm = () => {
const [config, setConfig] = useState({
// ... existing fields
video_format: 'mp4', // 'mp4' or 'avi'
video_codec: 'mp4v', // 'mp4v', 'XVID', 'MJPG'
video_quality: 95 // 0-100
});
return (
<form>
{/* ... existing fields */}
<div className="video-settings">
<h3>Video Recording Settings</h3>
<select
value={config.video_format}
onChange={(e) => setConfig({...config, video_format: e.target.value})}
>
<option value="mp4">MP4 (Recommended)</option>
<option value="avi">AVI (Legacy)</option>
</select>
<select
value={config.video_codec}
onChange={(e) => setConfig({...config, video_codec: e.target.value})}
>
<option value="mp4v">MPEG-4 (mp4v)</option>
<option value="XVID">Xvid</option>
<option value="MJPG">Motion JPEG</option>
</select>
<input
type="range"
min="50"
max="100"
value={config.video_quality}
onChange={(e) => setConfig({...config, video_quality: parseInt(e.target.value)})}
/>
<label>Quality: {config.video_quality}%</label>
<div className="warning">
Video format changes require camera restart
</div>
</div>
</form>
);
};
```
## 📡 API Response Changes
### Camera Configuration Response
```json
{
"name": "camera1",
"machine_topic": "blower_separator",
"storage_path": "/storage/camera1",
"exposure_ms": 0.3,
"gain": 4.0,
"target_fps": 0,
"enabled": true,
"video_format": "mp4",
"video_codec": "mp4v",
"video_quality": 95,
"auto_start_recording_enabled": true,
"auto_recording_max_retries": 3,
"auto_recording_retry_delay_seconds": 2,
// ... other existing fields
}
```
### Video File Listings
```json
{
"videos": [
{
"file_id": "camera1_recording_20250804_143022.mp4",
"filename": "camera1_recording_20250804_143022.mp4",
"format": "mp4",
"file_size_bytes": 31457280,
"created_at": "2025-08-04T14:30:22"
}
]
}
```
## 🎨 UI/UX Improvements
### File Size Display
```javascript
// MP4 files are ~40% smaller
const formatFileSize = (bytes) => {
const mb = bytes / (1024 * 1024);
return `${mb.toFixed(1)} MB`;
};
// Show format in file listings
const FileListItem = ({ video }) => (
<div className="file-item">
<span className="filename">{video.filename}</span>
<span className={`format ${video.format}`}>
{video.format.toUpperCase()}
</span>
<span className="size">{formatFileSize(video.file_size_bytes)}</span>
</div>
);
```
### Format Indicators
```css
.format.mp4 {
background: #4CAF50;
color: white;
padding: 2px 6px;
border-radius: 3px;
font-size: 0.8em;
}
.format.avi {
background: #FF9800;
color: white;
padding: 2px 6px;
border-radius: 3px;
font-size: 0.8em;
}
```
## ⚡ Performance Benefits
### Streaming Improvements
- **Faster Loading**: MP4 files start playing sooner
- **Better Seeking**: More responsive video scrubbing
- **Mobile Friendly**: Better iOS/Android compatibility
- **Bandwidth Savings**: 40% smaller files = faster transfers
### Implementation Tips
```javascript
// Preload video metadata for better UX
const VideoThumbnail = ({ videoUrl, thumbnailUrl }) => (
<video
preload="metadata"
poster={thumbnailUrl} // e.g. the /videos/{file_id}/thumbnail endpoint (poster must be an image URL)
onLoadedMetadata={(e) => {
console.log('Duration:', e.target.duration);
}}
>
<source src={videoUrl} type="video/mp4" />
</video>
);
```
## 🔧 Configuration Management
### Restart Warning Component
```jsx
const RestartWarning = ({ show, onRestart }) => {
if (!show) return null;
return (
<div className="alert alert-warning">
<strong>⚠️ Restart Required</strong>
<p>Video format changes require a camera service restart to take effect.</p>
<button onClick={onRestart}>Restart Camera Service</button>
</div>
);
};
```
### Settings Validation
```javascript
const validateVideoSettings = (settings) => {
const errors = {};
if (!['mp4', 'avi'].includes(settings.video_format)) {
errors.video_format = 'Must be mp4 or avi';
}
if (!['mp4v', 'XVID', 'MJPG'].includes(settings.video_codec)) {
errors.video_codec = 'Invalid codec';
}
if (settings.video_quality < 50 || settings.video_quality > 100) {
errors.video_quality = 'Quality must be between 50-100';
}
return errors;
};
```
## 📱 Mobile Considerations
### Responsive Video Player
```jsx
const ResponsiveVideoPlayer = ({ videoUrl, filename }) => (
<div className="video-container">
<video
controls
playsInline // Important for iOS
preload="metadata"
style={{ width: '100%', height: 'auto' }}
>
<source src={videoUrl} type={getVideoMimeType(filename)} />
<p>Your browser doesn't support HTML5 video.</p>
</video>
</div>
);
```
## 🧪 Testing Checklist
- [ ] Video playback works with new MP4 files
- [ ] File extension filtering includes both .mp4 and .avi
- [ ] Camera configuration UI shows video format options
- [ ] Restart warning appears for video format changes
- [ ] File size displays are updated for smaller MP4 files
- [ ] Mobile video playback works correctly
- [ ] Video streaming performance is improved
- [ ] Backward compatibility with existing AVI files
## 📞 Support
If you encounter issues:
1. **Video won't play**: Check browser console for codec errors
2. **File size unexpected**: Verify quality settings in camera config
3. **Streaming slow**: Compare MP4 vs AVI performance
4. **Mobile issues**: Ensure `playsInline` attribute is set
The MP4 update provides significant improvements in web compatibility and performance while maintaining full backward compatibility with existing AVI files.

100
api/docs/README.md Normal file

@@ -0,0 +1,100 @@
# USDA Vision Camera System - Documentation
This directory contains detailed documentation for the USDA Vision Camera System.
## Documentation Files
### 🚀 [API_DOCUMENTATION.md](API_DOCUMENTATION.md) **⭐ NEW**
**Complete API reference documentation** covering all endpoints, features, and recent enhancements:
- System status and health monitoring
- Camera management and configuration
- Recording control with dynamic settings
- Auto-recording management
- MQTT and machine status
- Storage and file management
- Camera recovery and diagnostics
- Live streaming capabilities
- WebSocket real-time updates
- Quick start examples and migration notes
### ⚡ [API_QUICK_REFERENCE.md](API_QUICK_REFERENCE.md) **⭐ NEW**
**Quick reference card** for the most commonly used API endpoints with curl examples and response formats.
### 📋 [PROJECT_COMPLETE.md](PROJECT_COMPLETE.md)
Complete project overview and final status documentation. Contains:
- Project completion status
- Final system architecture
- Deployment instructions
- Production readiness checklist
### 🎥 [MP4_FORMAT_UPDATE.md](MP4_FORMAT_UPDATE.md) **⭐ NEW**
**Frontend integration guide** for the MP4 video format update:
- Video format changes from AVI to MP4
- Frontend implementation checklist
- API response updates
- Performance benefits and browser compatibility
### 🚀 [REACT_INTEGRATION_GUIDE.md](REACT_INTEGRATION_GUIDE.md) **⭐ NEW**
**Quick reference for React developers** implementing the MP4 format changes:
- Code examples and components
- File handling updates
- Configuration interface
- Testing checklist
### 📋 [CURRENT_CONFIGURATION.md](CURRENT_CONFIGURATION.md) **⭐ NEW**
**Complete current system configuration reference**:
- Exact config.json structure with all current values
- Field-by-field documentation
- Camera-specific settings comparison
- MQTT topics and machine mappings
### 🎬 [VIDEO_STREAMING.md](VIDEO_STREAMING.md) **⭐ UPDATED**
**Complete video streaming module documentation**:
- Comprehensive API endpoint documentation
- Authentication and security information
- Error handling and troubleshooting
- Performance optimization guidelines
### 🤖 [AI_AGENT_VIDEO_INTEGRATION_GUIDE.md](AI_AGENT_VIDEO_INTEGRATION_GUIDE.md) **⭐ NEW**
**Complete integration guide for AI agents and external systems**:
- Step-by-step integration workflow
- Programming language examples (Python, JavaScript)
- Error handling and debugging strategies
- Performance optimization recommendations
### 🔧 [API_CHANGES_SUMMARY.md](API_CHANGES_SUMMARY.md)
Summary of API changes and enhancements made to the system.
### 📷 [CAMERA_RECOVERY_GUIDE.md](CAMERA_RECOVERY_GUIDE.md)
Guide for camera recovery procedures and troubleshooting camera-related issues.
### 📡 [MQTT_LOGGING_GUIDE.md](MQTT_LOGGING_GUIDE.md)
Comprehensive guide for MQTT logging configuration and troubleshooting.
## Main Documentation
The main system documentation is located in the root directory:
- **[../README.md](../README.md)** - Primary system documentation with installation, configuration, and usage instructions
## Additional Resources
### Demo Code
- **[../demos/](../demos/)** - Demo scripts and camera SDK examples
### Test Files
- **[../tests/](../tests/)** - Test scripts and legacy test files
### Jupyter Notebooks
- **[../notebooks/](../notebooks/)** - Interactive notebooks for system exploration and testing
## Quick Links
- [System Installation](../README.md#installation)
- [Configuration Guide](../README.md#configuration)
- [API Documentation](../README.md#api-reference)
- [Troubleshooting](../README.md#troubleshooting)
- [Camera SDK Examples](../demos/camera_sdk_examples/)
## Support
For technical support and questions, refer to the main [README.md](../README.md) troubleshooting section or check the system logs.

601
api/docs/VIDEO_STREAMING.md Normal file

@@ -0,0 +1,601 @@
# 🎬 Video Streaming Module
The USDA Vision Camera System now includes a modular video streaming system that provides YouTube-like video playback capabilities for your React web application.
## 🌟 Features
- **Progressive Streaming** - True chunked streaming for web browsers (no download required)
- **HTTP Range Request Support** - Enables seeking and progressive download with 206 Partial Content
- **Native MP4 Support** - Direct streaming of MP4 files optimized for web playback
- **Memory Efficient** - 8KB chunked delivery, no large file loading into memory
- **Browser Compatible** - Works with HTML5 `<video>` tag in all modern browsers
- **Intelligent Caching** - Optimized streaming performance with byte-range caching
- **Thumbnail Generation** - Extract preview images from videos
- **Modular Architecture** - Clean separation of concerns
- **No Authentication Required** - Open access for internal network use
- **CORS Enabled** - Ready for web browser integration
## 🏗️ Architecture
The video module follows clean architecture principles:
```
usda_vision_system/video/
├── domain/ # Business logic (pure Python)
├── infrastructure/ # External dependencies (OpenCV, FFmpeg)
├── application/ # Use cases and orchestration
├── presentation/ # HTTP controllers and API routes
└── integration.py # Dependency injection and composition
```
## 🚀 API Endpoints
### List Videos
```http
GET /videos/
```
**Query Parameters:**
- `camera_name` (optional): Filter by camera name
- `start_date` (optional): Filter videos created after this date (ISO format: 2025-08-04T14:30:22)
- `end_date` (optional): Filter videos created before this date (ISO format: 2025-08-04T14:30:22)
- `limit` (optional): Maximum results (default: 50, max: 1000)
- `include_metadata` (optional): Include video metadata (default: false)
**Example Request:**
```bash
curl "http://localhost:8000/videos/?camera_name=camera1&include_metadata=true&limit=10"
```
**Response:**
```json
{
"videos": [
{
"file_id": "camera1_auto_blower_separator_20250804_143022.mp4",
"camera_name": "camera1",
"filename": "camera1_auto_blower_separator_20250804_143022.mp4",
"file_size_bytes": 31457280,
"format": "mp4",
"status": "completed",
"created_at": "2025-08-04T14:30:22",
"start_time": "2025-08-04T14:30:22",
"end_time": "2025-08-04T14:32:22",
"machine_trigger": "blower_separator",
"is_streamable": true,
"needs_conversion": false,
"metadata": {
"duration_seconds": 120.5,
"width": 1920,
"height": 1080,
"fps": 30.0,
"codec": "mp4v",
"bitrate": 5000000,
"aspect_ratio": 1.777
}
}
],
"total_count": 1
}
```
### Stream Video
```http
GET /videos/{file_id}/stream
```
**Headers:**
- `Range: bytes=0-1023` (optional): Request specific byte range for seeking
**Example Requests:**
```bash
# Stream entire video (progressive streaming)
curl http://localhost:8000/videos/camera1_auto_blower_separator_20250805_123329.mp4/stream
# Stream specific byte range (for seeking)
curl -H "Range: bytes=0-1023" \
http://localhost:8000/videos/camera1_auto_blower_separator_20250805_123329.mp4/stream
```
**Response Headers:**
- `Accept-Ranges: bytes`
- `Content-Length: {size}`
- `Content-Range: bytes {start}-{end}/{total}` (for range requests)
- `Cache-Control: public, max-age=3600`
- `Content-Type: video/mp4`
**Streaming Implementation:**
- ✅ **Progressive Streaming**: Uses FastAPI `StreamingResponse` with 8KB chunks
- ✅ **HTTP Range Requests**: Returns 206 Partial Content for seeking
- ✅ **Memory Efficient**: No large file loading, streams directly from disk
- ✅ **Browser Compatible**: Works with HTML5 `<video>` tag playback
- ✅ **Chunked Delivery**: Optimal 8KB chunk size for smooth playback
- ✅ **CORS Enabled**: Ready for web browser integration
**Response Status Codes:**
- `200 OK`: Full video streaming (progressive chunks)
- `206 Partial Content`: Range request successful
- `404 Not Found`: Video not found or not streamable
- `416 Range Not Satisfiable`: Invalid range request
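For reference, the progressive, range-aware delivery described above can be sketched in a few lines of FastAPI. This is a simplified illustration rather than the system's actual implementation; the storage path resolution is an assumption:

```python
import os
from fastapi import FastAPI, Header, HTTPException
from fastapi.responses import StreamingResponse

app = FastAPI()
CHUNK_SIZE = 8 * 1024  # 8KB chunks, as described above


def iter_range(path: str, start: int, end: int):
    """Yield bytes [start, end] of the file in fixed-size chunks."""
    with open(path, "rb") as f:
        f.seek(start)
        remaining = end - start + 1
        while remaining > 0:
            chunk = f.read(min(CHUNK_SIZE, remaining))
            if not chunk:
                break
            remaining -= len(chunk)
            yield chunk


@app.get("/videos/{file_id}/stream")
def stream_video(file_id: str, range: str | None = Header(default=None)):
    path = f"/storage/{file_id}"  # illustrative: real resolution goes through the storage manager
    file_size = os.path.getsize(path)
    start, end, status = 0, file_size - 1, 200
    headers = {"Accept-Ranges": "bytes", "Cache-Control": "public, max-age=3600"}
    if range:  # e.g. "Range: bytes=0-1023"
        try:
            start_s, _, end_s = range.removeprefix("bytes=").partition("-")
            start = int(start_s)
            end = min(int(end_s), file_size - 1) if end_s else file_size - 1
        except ValueError:
            raise HTTPException(status_code=416, detail="Malformed range header")
        if start >= file_size or start > end:
            raise HTTPException(status_code=416, detail="Range exceeds file size")
        status = 206
        headers["Content-Range"] = f"bytes {start}-{end}/{file_size}"
    headers["Content-Length"] = str(end - start + 1)
    return StreamingResponse(iter_range(path, start, end), status_code=status, media_type="video/mp4", headers=headers)
```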
### Get Video Info
```http
GET /videos/{file_id}
```
**Example Request:**
```bash
curl http://localhost:8000/videos/camera1_recording_20250804_143022.avi
```
**Response includes complete metadata:**
```json
{
"file_id": "camera1_recording_20250804_143022.avi",
"camera_name": "camera1",
"filename": "camera1_recording_20250804_143022.avi",
"file_size_bytes": 52428800,
"format": "avi",
"status": "completed",
"created_at": "2025-08-04T14:30:22",
"start_time": "2025-08-04T14:30:22",
"end_time": "2025-08-04T14:32:22",
"machine_trigger": "vibratory_conveyor",
"is_streamable": true,
"needs_conversion": true,
"metadata": {
"duration_seconds": 120.5,
"width": 1920,
"height": 1080,
"fps": 30.0,
"codec": "XVID",
"bitrate": 5000000,
"aspect_ratio": 1.777
}
}
```
### Get Thumbnail
```http
GET /videos/{file_id}/thumbnail?timestamp=5.0&width=320&height=240
```
**Query Parameters:**
- `timestamp` (optional): Time position in seconds to extract thumbnail from (default: 1.0)
- `width` (optional): Thumbnail width in pixels (default: 320)
- `height` (optional): Thumbnail height in pixels (default: 240)
**Example Request:**
```bash
curl "http://localhost:8000/videos/camera1_recording_20250804_143022.avi/thumbnail?timestamp=5.0&width=320&height=240" \
--output thumbnail.jpg
```
**Response**: JPEG image data with caching headers
- `Content-Type: image/jpeg`
- `Cache-Control: public, max-age=3600`
### Streaming Info
```http
GET /videos/{file_id}/info
```
**Example Request:**
```bash
curl http://localhost:8000/videos/camera1_recording_20250804_143022.avi/info
```
**Response**: Technical streaming details
```json
{
"file_id": "camera1_recording_20250804_143022.avi",
"file_size_bytes": 52428800,
"content_type": "video/x-msvideo",
"supports_range_requests": true,
"chunk_size_bytes": 262144
}
```
### Video Validation
```http
POST /videos/{file_id}/validate
```
**Example Request:**
```bash
curl -X POST http://localhost:8000/videos/camera1_recording_20250804_143022.avi/validate
```
**Response**: Validation status
```json
{
"file_id": "camera1_recording_20250804_143022.avi",
"is_valid": true
}
```
### Cache Management
```http
POST /videos/{file_id}/cache/invalidate
```
**Example Request:**
```bash
curl -X POST http://localhost:8000/videos/camera1_recording_20250804_143022.avi/cache/invalidate
```
**Response**: Cache invalidation status
```json
{
"file_id": "camera1_recording_20250804_143022.avi",
"cache_invalidated": true
}
```
### Admin: Cache Cleanup
```http
POST /admin/videos/cache/cleanup?max_size_mb=100
```
**Example Request:**
```bash
curl -X POST "http://localhost:8000/admin/videos/cache/cleanup?max_size_mb=100"
```
**Response**: Cache cleanup results
```json
{
"cache_cleaned": true,
"entries_removed": 15,
"max_size_mb": 100
}
```
## 🌐 React Integration
### Basic Video Player
```jsx
function VideoPlayer({ fileId }) {
return (
<video
controls
width="100%"
preload="metadata"
style={{ maxWidth: '800px' }}
>
<source
src={`${API_BASE_URL}/videos/${fileId}/stream`}
type="video/mp4"
/>
Your browser does not support video playback.
</video>
);
}
```
### Advanced Player with Thumbnail
```jsx
function VideoPlayerWithThumbnail({ fileId }) {
const [thumbnail, setThumbnail] = useState(null);
useEffect(() => {
fetch(`${API_BASE_URL}/videos/${fileId}/thumbnail`)
.then(response => response.blob())
.then(blob => setThumbnail(URL.createObjectURL(blob)));
}, [fileId]);
return (
<video controls width="100%" poster={thumbnail}>
<source
src={`${API_BASE_URL}/videos/${fileId}/stream`}
type="video/mp4"
/>
</video>
);
}
```
### Video List Component
```jsx
function VideoList({ cameraName }) {
const [videos, setVideos] = useState([]);
useEffect(() => {
const params = new URLSearchParams();
if (cameraName) params.append('camera_name', cameraName);
params.append('include_metadata', 'true');
fetch(`${API_BASE_URL}/videos/?${params}`)
.then(response => response.json())
.then(data => setVideos(data.videos));
}, [cameraName]);
return (
<div className="grid grid-cols-1 md:grid-cols-2 lg:grid-cols-3 gap-4">
{videos.map(video => (
<VideoCard key={video.file_id} video={video} />
))}
</div>
);
}
```
## 🔧 Configuration
The video module is automatically initialized when the API server starts. Configuration options:
```python
# In your API server initialization
video_module = create_video_module(
config=config,
storage_manager=storage_manager,
enable_caching=True, # Enable streaming cache
enable_conversion=True # Enable format conversion
)
```
### Configuration Parameters
- **`enable_caching`**: Enable/disable intelligent byte-range caching (default: True)
- **`cache_size_mb`**: Maximum cache size in MB (default: 100)
- **`cache_max_age_minutes`**: Cache entry expiration time (default: 30)
- **`enable_conversion`**: Enable/disable automatic AVI to MP4 conversion (default: True)
- **`conversion_quality`**: Video conversion quality: "low", "medium", "high" (default: "medium")
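If the remaining parameters listed above are also accepted by `create_video_module` as keyword arguments (an assumption; only `enable_caching` and `enable_conversion` appear in the snippet), a fully specified initialization might look like:

```python
# Hypothetical: assumes create_video_module accepts the documented
# cache/conversion parameters as keyword arguments.
video_module = create_video_module(
    config=config,
    storage_manager=storage_manager,
    enable_caching=True,
    cache_size_mb=100,            # documented default
    cache_max_age_minutes=30,     # documented default
    enable_conversion=True,
    conversion_quality="medium",  # "low" | "medium" | "high"
)
```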
### System Requirements
- **OpenCV**: Required for thumbnail generation and metadata extraction
- **FFmpeg**: Optional, for video format conversion (graceful fallback if not available)
- **Storage**: Sufficient disk space for video files and cache
- **Memory**: Recommended 2GB+ RAM for caching and video processing
## 🔐 Authentication & Security
### Current Security Model
**⚠️ IMPORTANT: No authentication is currently implemented.**
- **Open Access**: All video streaming endpoints are publicly accessible
- **CORS Policy**: Currently set to allow all origins (`allow_origins=["*"]`)
- **Network Security**: Designed for internal network use only
- **No API Keys**: No authentication tokens or API keys required
- **No Rate Limiting**: No request rate limiting currently implemented
### Security Considerations for Production
#### For Internal Network Deployment
```bash
# Current configuration is suitable for:
# - Internal corporate networks
# - Isolated network segments
# - Development and testing environments
```
#### For External Access (Recommendations)
If you need to expose the video streaming API externally, consider implementing:
1. **Authentication Layer**
```python
# Example: Add JWT authentication
from fastapi import Depends, HTTPException
from fastapi.security import HTTPBearer, HTTPAuthorizationCredentials

security = HTTPBearer()

async def verify_token(credentials: HTTPAuthorizationCredentials = Depends(security)):
    # Implement token verification logic (decode and validate credentials.credentials)
    pass
```
2. **CORS Configuration**
```python
# Restrict CORS to specific domains
from fastapi.middleware.cors import CORSMiddleware

app.add_middleware(
CORSMiddleware,
allow_origins=["https://yourdomain.com"],
allow_credentials=True,
allow_methods=["GET", "POST"],
allow_headers=["*"]
)
```
3. **Rate Limiting**
```python
# Example: Add rate limiting
from fastapi import Request
from slowapi import Limiter
from slowapi.util import get_remote_address

limiter = Limiter(key_func=get_remote_address)

@app.get("/videos/")
@limiter.limit("10/minute")
async def list_videos(request: Request):
    pass
```
4. **Network Security**
- Use HTTPS/TLS for encrypted communication
- Implement firewall rules to restrict access
- Consider VPN access for remote users
- Use reverse proxy (nginx) for additional security
### Access Control Summary
```
┌─────────────────────────────────────────────────────────────┐
│ Current Access Model │
├─────────────────────────────────────────────────────────────┤
│ Authentication: ❌ None │
│ Authorization: ❌ None │
│ CORS: ✅ Enabled (all origins) │
│ Rate Limiting: ❌ None │
│ HTTPS: ⚠️ Depends on deployment │
│ Network Security: ⚠️ Firewall/VPN recommended │
└─────────────────────────────────────────────────────────────┘
```
## 📊 Performance
- **Caching**: Intelligent byte-range caching reduces disk I/O
- **Adaptive Chunking**: Optimal chunk sizes based on file size
- **Range Requests**: Only download needed portions
- **Format Conversion**: Automatic conversion to web-compatible formats
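The exact chunking policy is internal to the module; as an illustration of what adaptive chunking can mean in practice, here is a sketch with assumed thresholds (note the 256 KB value matching the `/info` example above):

```python
def pick_chunk_size(file_size_bytes: int) -> int:
    """Illustrative policy: larger files get larger chunks to cut per-chunk overhead."""
    if file_size_bytes < 10 * 1024 * 1024:       # < 10 MB
        return 8 * 1024                          # 8 KB
    if file_size_bytes < 500 * 1024 * 1024:      # < 500 MB
        return 64 * 1024                         # 64 KB
    return 256 * 1024                            # 256 KB (as in the /info example above)
```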
## 🛠️ Service Management
### Restart Service
```bash
sudo systemctl restart usda-vision-camera
```
### Check Status
```bash
# Check video module status
curl http://localhost:8000/system/video-module
# Check available videos
curl http://localhost:8000/videos/
```
### Logs
```bash
sudo journalctl -u usda-vision-camera -f
```
## 🧪 Testing
Run the video module tests:
```bash
cd /home/alireza/USDA-vision-cameras
PYTHONPATH=/home/alireza/USDA-vision-cameras python tests/test_video_module.py
```
## 🔍 Troubleshooting
### Video Not Playing
1. **Check if file exists**: `GET /videos/{file_id}`
```bash
curl http://localhost:8000/videos/camera1_recording_20250804_143022.avi
```
2. **Verify streaming info**: `GET /videos/{file_id}/info`
```bash
curl http://localhost:8000/videos/camera1_recording_20250804_143022.avi/info
```
3. **Test direct stream**: `GET /videos/{file_id}/stream`
```bash
curl -I http://localhost:8000/videos/camera1_recording_20250804_143022.avi/stream
```
4. **Validate video file**: `POST /videos/{file_id}/validate`
```bash
curl -X POST http://localhost:8000/videos/camera1_recording_20250804_143022.avi/validate
```
### Performance Issues
1. **Check cache status**: Clean up cache if needed
```bash
curl -X POST "http://localhost:8000/admin/videos/cache/cleanup?max_size_mb=100"
```
2. **Monitor system resources**: Check CPU, memory, and disk usage
3. **Adjust cache size**: Modify configuration parameters
4. **Invalidate specific cache**: For updated files
```bash
curl -X POST http://localhost:8000/videos/{file_id}/cache/invalidate
```
### Format Issues
- **AVI files**: Automatically converted to MP4 for web compatibility
- **Conversion requires FFmpeg**: Optional dependency with graceful fallback
- **Supported formats**: AVI (with conversion), MP4 (native), WebM (native)
### Common HTTP Status Codes
- **200**: Success - Video streamed successfully
- **206**: Partial Content - Range request successful
- **404**: Not Found - Video file doesn't exist or isn't streamable
- **416**: Range Not Satisfiable - Invalid range request
- **500**: Internal Server Error - Failed to read video data or generate thumbnail
### Browser Compatibility
- **Chrome/Chromium**: Full support for MP4 and range requests
- **Firefox**: Full support for MP4 and range requests
- **Safari**: Full support for MP4 and range requests
- **Edge**: Full support for MP4 and range requests
- **Mobile browsers**: Generally good support for MP4 streaming
### Error Scenarios and Solutions
#### Video File Issues
```bash
# Problem: Video not found (404)
curl http://localhost:8000/videos/nonexistent_video.mp4
# Response: {"detail": "Video nonexistent_video.mp4 not found"}
# Solution: Verify file_id exists using list endpoint
# Problem: Video not streamable
curl http://localhost:8000/videos/corrupted_video.avi/stream
# Response: {"detail": "Video corrupted_video.avi not found or not streamable"}
# Solution: Use validation endpoint to check file integrity
```
#### Range Request Issues
```bash
# Problem: Invalid range request (416)
curl -H "Range: bytes=999999999-" http://localhost:8000/videos/small_video.mp4/stream
# Response: {"detail": "Invalid range request: Range exceeds file size"}
# Solution: Check file size first using /info endpoint
# Problem: Malformed range header
curl -H "Range: invalid-range" http://localhost:8000/videos/video.mp4/stream
# Response: {"detail": "Invalid range request: Malformed range header"}
# Solution: Use proper range format: "bytes=start-end"
```
#### Thumbnail Generation Issues
```bash
# Problem: Thumbnail generation failed (404)
curl http://localhost:8000/videos/audio_only.mp4/thumbnail
# Response: {"detail": "Could not generate thumbnail for audio_only.mp4"}
# Solution: Verify video has visual content and is not audio-only
# Problem: Invalid timestamp
curl "http://localhost:8000/videos/short_video.mp4/thumbnail?timestamp=999"
# Response: Returns thumbnail from last available frame
# Solution: Check video duration first using metadata
```
#### System Resource Issues
```bash
# Problem: Cache full or system overloaded (500)
curl http://localhost:8000/videos/large_video.mp4/stream
# Response: {"detail": "Failed to read video data"}
# Solution: Clean cache or wait for system resources
curl -X POST "http://localhost:8000/admin/videos/cache/cleanup?max_size_mb=50"
```
### Debugging Workflow
```bash
# Step 1: Check system health
curl http://localhost:8000/health
# Step 2: Verify video exists and get info
curl http://localhost:8000/videos/your_video_id
# Step 3: Check streaming capabilities
curl http://localhost:8000/videos/your_video_id/info
# Step 4: Validate video file
curl -X POST http://localhost:8000/videos/your_video_id/validate
# Step 5: Test basic streaming
curl -I http://localhost:8000/videos/your_video_id/stream
# Step 6: Test range request
curl -I -H "Range: bytes=0-1023" http://localhost:8000/videos/your_video_id/stream
```
### Performance Monitoring
```bash
# Monitor cache usage
curl -X POST "http://localhost:8000/admin/videos/cache/cleanup?max_size_mb=100"
# Check system resources
curl http://localhost:8000/system/status
# Monitor video module status
curl http://localhost:8000/videos/ | jq '.total_count'
```
## 🎯 Next Steps
1. **Restart the usda-vision-camera service** to enable video streaming
2. **Test the endpoints** using curl or your browser
3. **Integrate with your React app** using the provided examples
4. **Monitor performance** and adjust caching as needed
The video streaming system is now ready for production use! 🚀

View File

@@ -0,0 +1,302 @@
# 🤖 Web AI Agent - Video Integration Guide
This guide provides the essential information for integrating USDA Vision Camera video streaming into your web application.
## 🎯 Quick Start
### Video Streaming Status: ✅ READY
- **Progressive streaming implemented** - Videos play in browsers (no download)
- **86 MP4 files available** - All properly indexed and streamable
- **HTTP range requests supported** - Seeking and progressive playback work
- **Memory efficient** - 8KB chunked delivery
## 🚀 API Endpoints
### Base URL
```
http://localhost:8000
```
### 1. List Available Videos
```http
GET /videos/?camera_name={camera}&limit={limit}
```
**Example:**
```bash
curl "http://localhost:8000/videos/?camera_name=camera1&limit=10"
```
**Response:**
```json
{
"videos": [
{
"file_id": "camera1_auto_blower_separator_20250805_123329.mp4",
"camera_name": "camera1",
"file_size_bytes": 1072014489,
"format": "mp4",
"status": "completed",
"is_streamable": true,
"created_at": "2025-08-05T12:43:12.631210"
}
],
"total_count": 1
}
```
### 2. Stream Video (Progressive)
```http
GET /videos/{file_id}/stream
```
**Example:**
```bash
curl "http://localhost:8000/videos/camera1_auto_blower_separator_20250805_123329.mp4/stream"
```
**Features:**
- ✅ Progressive streaming (8KB chunks)
- ✅ HTTP range requests (206 Partial Content)
- ✅ Browser compatible (HTML5 video)
- ✅ Seeking support
- ✅ No authentication required
### 3. Get Video Thumbnail
```http
GET /videos/{file_id}/thumbnail?timestamp={seconds}&width={px}&height={px}
```
**Example:**
```bash
curl "http://localhost:8000/videos/camera1_auto_blower_separator_20250805_123329.mp4/thumbnail?timestamp=5.0&width=320&height=240"
```
## 🌐 Web Integration
### HTML5 Video Player
```html
<video controls width="100%" preload="metadata">
<source src="http://localhost:8000/videos/{file_id}/stream" type="video/mp4">
Your browser does not support video playback.
</video>
```
### React Component
```jsx
function VideoPlayer({ fileId, width = "100%" }) {
const streamUrl = `http://localhost:8000/videos/${fileId}/stream`;
const thumbnailUrl = `http://localhost:8000/videos/${fileId}/thumbnail`;
return (
<video
controls
width={width}
preload="metadata"
poster={thumbnailUrl}
style={{ maxWidth: '800px', borderRadius: '8px' }}
>
<source src={streamUrl} type="video/mp4" />
Your browser does not support video playback.
</video>
);
}
```
### Video List Component
```jsx
function VideoList({ cameraName = null, limit = 20 }) {
const [videos, setVideos] = useState([]);
const [loading, setLoading] = useState(true);
useEffect(() => {
const params = new URLSearchParams();
if (cameraName) params.append('camera_name', cameraName);
params.append('limit', limit.toString());
fetch(`http://localhost:8000/videos/?${params}`)
.then(response => response.json())
.then(data => {
// Filter only streamable MP4 videos
const streamableVideos = data.videos.filter(
v => v.format === 'mp4' && v.is_streamable
);
setVideos(streamableVideos);
setLoading(false);
})
.catch(error => {
console.error('Error loading videos:', error);
setLoading(false);
});
}, [cameraName, limit]);
if (loading) return <div>Loading videos...</div>;
return (
<div className="video-grid">
{videos.map(video => (
<div key={video.file_id} className="video-card">
<h3>{video.file_id}</h3>
<p>Camera: {video.camera_name}</p>
<p>Size: {(video.file_size_bytes / 1024 / 1024).toFixed(1)} MB</p>
<VideoPlayer fileId={video.file_id} width="100%" />
</div>
))}
</div>
);
}
```
## 📊 Available Data
### Current Video Inventory
- **Total Videos**: 161 files
- **MP4 Files**: 86 (all streamable ✅)
- **AVI Files**: 75 (legacy format, not prioritized)
- **Cameras**: camera1, camera2
- **Date Range**: July 29 - August 5, 2025
### Video File Naming Convention
```
{camera}_{trigger}_{machine}_{YYYYMMDD}_{HHMMSS}.mp4
```
**Examples:**
- `camera1_auto_blower_separator_20250805_123329.mp4`
- `camera2_auto_vibratory_conveyor_20250805_123042.mp4`
- `20250804_161305_manual_camera1_2025-08-04T20-13-09-634Z.mp4` (manual recordings use a timestamp-first pattern rather than the convention above)
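For programmatic use, the auto-recording pattern can be parsed with a small helper. This is a sketch covering only the `{camera}_{trigger}_{machine}` pattern; manual recordings are skipped:

```python
import re
from datetime import datetime

AUTO_PATTERN = re.compile(
    r"^(?P<camera>camera\d+)_(?P<trigger>auto)_(?P<machine>[a-z_]+)_"
    r"(?P<date>\d{8})_(?P<time>\d{6})\.mp4$"
)

def parse_auto_filename(filename: str) -> dict | None:
    m = AUTO_PATTERN.match(filename)
    if not m:
        return None  # e.g. manual recordings use a different pattern
    return {
        "camera": m.group("camera"),
        "machine": m.group("machine"),
        "recorded_at": datetime.strptime(m.group("date") + m.group("time"), "%Y%m%d%H%M%S"),
    }

print(parse_auto_filename("camera1_auto_blower_separator_20250805_123329.mp4"))
```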
### Machine Triggers
- `auto_blower_separator` - Automatic recording triggered by blower separator
- `auto_vibratory_conveyor` - Automatic recording triggered by vibratory conveyor
- `manual` - Manual recording initiated by user
## 🔧 Technical Details
### Streaming Implementation
- **Method**: FastAPI `StreamingResponse` with async generators
- **Chunk Size**: 8KB for optimal performance
- **Range Requests**: Full HTTP/1.1 range request support
- **Status Codes**: 200 (full), 206 (partial), 404 (not found)
- **CORS**: Enabled for all origins
- **Caching**: Server-side byte-range caching
### Browser Compatibility
- ✅ Chrome/Chromium
- ✅ Firefox
- ✅ Safari
- ✅ Edge
- ✅ Mobile browsers
### Performance Characteristics
- **Memory Usage**: Low (8KB chunks, no large file loading)
- **Seeking**: Instant (HTTP range requests)
- **Startup Time**: Fast (metadata preload)
- **Bandwidth**: Adaptive (only downloads viewed portions)
## 🛠️ Error Handling
### Common Scenarios
```javascript
// Check if video is streamable
const checkVideo = async (fileId) => {
try {
const response = await fetch(`http://localhost:8000/videos/${fileId}`);
const video = await response.json();
if (!video.is_streamable) {
console.warn(`Video ${fileId} is not streamable`);
return false;
}
return true;
} catch (error) {
console.error(`Error checking video ${fileId}:`, error);
return false;
}
};
// Handle video loading errors
const VideoPlayerWithErrorHandling = ({ fileId }) => {
const [error, setError] = useState(null);
const handleError = (e) => {
console.error('Video playback error:', e);
setError('Failed to load video. Please try again.');
};
if (error) {
return <div className="error">{error}</div>;
}
return (
<video
controls
onError={handleError}
src={`http://localhost:8000/videos/${fileId}/stream`}
/>
);
};
```
### HTTP Status Codes
- `200 OK` - Video streaming successfully
- `206 Partial Content` - Range request successful
- `404 Not Found` - Video not found or not streamable
- `416 Range Not Satisfiable` - Invalid range request
- `500 Internal Server Error` - Server error reading video
## 🔐 Security Notes
### Current Configuration
- **Authentication**: None (open access)
- **CORS**: Enabled for all origins
- **Network**: Designed for internal use
- **HTTPS**: Not required (HTTP works)
### For Production Use
Consider implementing:
- Authentication/authorization
- Rate limiting
- HTTPS/TLS encryption
- Network access controls
## 🧪 Testing
### Quick Test
```bash
# Test video listing
curl "http://localhost:8000/videos/?limit=5"
# Test video streaming
curl -I "http://localhost:8000/videos/camera1_auto_blower_separator_20250805_123329.mp4/stream"
# Test range request
curl -H "Range: bytes=0-1023" "http://localhost:8000/videos/camera1_auto_blower_separator_20250805_123329.mp4/stream" -o test_chunk.mp4
```
### Browser Test
Open: `file:///home/alireza/USDA-vision-cameras/test_video_streaming.html`
## 📞 Support
### Service Management
```bash
# Restart video service
sudo systemctl restart usda-vision-camera
# Check service status
sudo systemctl status usda-vision-camera
# View logs
sudo journalctl -u usda-vision-camera -f
```
### Health Check
```bash
curl http://localhost:8000/health
```
---
**✅ Ready for Integration**: The video streaming system is fully operational and ready for web application integration. All MP4 files are streamable with progressive playback support.

View File

@@ -0,0 +1,521 @@
# 🎛️ Camera Configuration API Guide
This guide explains how to configure camera settings via API endpoints, including all the advanced settings from your config.json.
> **Note**: This document is part of the comprehensive [USDA Vision Camera System API Documentation](../API_DOCUMENTATION.md). For complete API reference, see the main documentation.
## 📋 Configuration Categories
### ✅ **Real-time Configurable (No Restart Required)**
These settings can be changed while the camera is active:
- **Basic**: `exposure_ms`, `gain`, `target_fps`
- **Image Quality**: `sharpness`, `contrast`, `saturation`, `gamma`
- **Color**: `auto_white_balance`, `color_temperature_preset`
- **White Balance**: `wb_red_gain`, `wb_green_gain`, `wb_blue_gain`
- **Advanced**: `anti_flicker_enabled`, `light_frequency`
- **HDR**: `hdr_enabled`, `hdr_gain_mode`
### ⚠️ **Restart Required**
These settings require camera restart to take effect:
- **Noise Reduction**: `noise_filter_enabled`, `denoise_3d_enabled`
- **Video Recording**: `video_format`, `video_codec`, `video_quality`
- **System**: `machine_topic`, `storage_path`, `enabled`, `bit_depth`
### 🔒 **Read-Only Fields**
These fields are returned in the response but cannot be modified via the API:
- **System Info**: `name`, `machine_topic`, `storage_path`, `enabled`
- **Auto-Recording**: `auto_start_recording_enabled`, `auto_recording_max_retries`, `auto_recording_retry_delay_seconds`
## 🔌 API Endpoints
### 1. Get Camera Configuration
```http
GET /cameras/{camera_name}/config
```
**Response:**
```json
{
"name": "camera1",
"machine_topic": "blower_separator",
"storage_path": "/storage/camera1",
"exposure_ms": 0.3,
"gain": 4.0,
"target_fps": 0,
"enabled": true,
"video_format": "mp4",
"video_codec": "mp4v",
"video_quality": 95,
"auto_start_recording_enabled": true,
"auto_recording_max_retries": 3,
"auto_recording_retry_delay_seconds": 2,
"contrast": 100,
"saturation": 100,
"gamma": 100,
"noise_filter_enabled": false,
"denoise_3d_enabled": false,
"auto_white_balance": false,
"color_temperature_preset": 0,
"wb_red_gain": 0.94,
"wb_green_gain": 1.0,
"wb_blue_gain": 0.87,
"anti_flicker_enabled": false,
"light_frequency": 0,
"bit_depth": 8,
"hdr_enabled": false,
"hdr_gain_mode": 2
}
```
### 2. Update Camera Configuration
```http
PUT /cameras/{camera_name}/config
Content-Type: application/json
```
**Request Body (all fields optional):**
```json
{
"exposure_ms": 2.0,
"gain": 4.0,
"target_fps": 10.0,
"sharpness": 150,
"contrast": 120,
"saturation": 110,
"gamma": 90,
"noise_filter_enabled": true,
"denoise_3d_enabled": false,
"auto_white_balance": false,
"color_temperature_preset": 1,
"wb_red_gain": 1.2,
"wb_green_gain": 1.0,
"wb_blue_gain": 0.8,
"anti_flicker_enabled": true,
"light_frequency": 1,
"hdr_enabled": false,
"hdr_gain_mode": 0
}
```
**Response:**
```json
{
"success": true,
"message": "Camera camera1 configuration updated",
"updated_settings": ["exposure_ms", "gain", "sharpness", "wb_red_gain"]
}
```
### 3. Apply Configuration (Restart Camera)
```http
POST /cameras/{camera_name}/apply-config
```
**Response:**
```json
{
"success": true,
"message": "Configuration applied to camera camera1"
}
```
## 📊 Setting Ranges and Descriptions
### System Settings
| Setting | Values | Default | Description |
|---------|--------|---------|-------------|
| `name` | string | - | Camera identifier (read-only) |
| `machine_topic` | string | - | MQTT topic for machine state (read-only) |
| `storage_path` | string | - | Video storage directory (read-only) |
| `enabled` | true/false | true | Camera enabled status (read-only) |
### Auto-Recording Settings
| Setting | Range | Default | Description |
|---------|-------|---------|-------------|
| `auto_start_recording_enabled` | true/false | true | Enable automatic recording on machine state changes (read-only) |
| `auto_recording_max_retries` | 1-10 | 3 | Maximum retry attempts for failed recordings (read-only) |
| `auto_recording_retry_delay_seconds` | 1-30 | 2 | Delay between retry attempts in seconds (read-only) |
### Basic Settings
| Setting | Range | Default | Description |
|---------|-------|---------|-------------|
| `exposure_ms` | 0.1 - 1000.0 | 1.0 | Exposure time in milliseconds |
| `gain` | 0.0 - 20.0 | 3.5 | Camera gain multiplier |
| `target_fps` | 0.0 - 120.0 | 0 | Target FPS (0 = maximum) |
### Image Quality Settings
| Setting | Range | Default | Description |
|---------|-------|---------|-------------|
| `sharpness` | 0 - 200 | 100 | Image sharpness (100 = no sharpening) |
| `contrast` | 0 - 200 | 100 | Image contrast (100 = normal) |
| `saturation` | 0 - 200 | 100 | Color saturation (color cameras only) |
| `gamma` | 0 - 300 | 100 | Gamma correction (100 = normal) |
### Color Settings
| Setting | Values | Default | Description |
|---------|--------|---------|-------------|
| `auto_white_balance` | true/false | true | Automatic white balance |
| `color_temperature_preset` | 0-10 | 0 | Color temperature preset (0=auto) |
### Manual White Balance RGB Gains
| Setting | Range | Default | Description |
|---------|-------|---------|-------------|
| `wb_red_gain` | 0.0 - 3.99 | 1.0 | Red channel gain for manual white balance |
| `wb_green_gain` | 0.0 - 3.99 | 1.0 | Green channel gain for manual white balance |
| `wb_blue_gain` | 0.0 - 3.99 | 1.0 | Blue channel gain for manual white balance |
### Advanced Settings
| Setting | Values | Default | Description |
|---------|--------|---------|-------------|
| `anti_flicker_enabled` | true/false | true | Reduce artificial lighting flicker |
| `light_frequency` | 0/1 | 1 | Light frequency (0=50Hz, 1=60Hz) |
| `noise_filter_enabled` | true/false | true | Basic noise filtering |
| `denoise_3d_enabled` | true/false | false | Advanced 3D denoising |
### HDR Settings
| Setting | Values | Default | Description |
|---------|--------|---------|-------------|
| `hdr_enabled` | true/false | false | High Dynamic Range |
| `hdr_gain_mode` | 0-3 | 0 | HDR processing mode |
## 🚀 Usage Examples
### Example 1: Adjust Exposure and Gain
```bash
curl -X PUT http://localhost:8000/cameras/camera1/config \
-H "Content-Type: application/json" \
-d '{
"exposure_ms": 1.5,
"gain": 4.0
}'
```
### Example 2: Improve Image Quality
```bash
curl -X PUT http://localhost:8000/cameras/camera1/config \
-H "Content-Type: application/json" \
-d '{
"sharpness": 150,
"contrast": 120,
"gamma": 90
}'
```
### Example 3: Configure for Indoor Lighting
```bash
curl -X PUT http://localhost:8000/cameras/camera1/config \
-H "Content-Type: application/json" \
-d '{
"anti_flicker_enabled": true,
"light_frequency": 1,
"auto_white_balance": false,
"color_temperature_preset": 2
}'
```
### Example 4: Enable HDR Mode
```bash
curl -X PUT http://localhost:8000/cameras/camera1/config \
-H "Content-Type: application/json" \
-d '{
"hdr_enabled": true,
"hdr_gain_mode": 1
}'
```
## ⚛️ React Integration Examples
### Camera Configuration Component
```jsx
import React, { useState, useEffect } from 'react';
const CameraConfig = ({ cameraName, apiBaseUrl = 'http://localhost:8000' }) => {
const [config, setConfig] = useState(null);
const [loading, setLoading] = useState(false);
const [error, setError] = useState(null);
// Load current configuration
useEffect(() => {
fetchConfig();
}, [cameraName]);
const fetchConfig = async () => {
try {
const response = await fetch(`${apiBaseUrl}/cameras/${cameraName}/config`);
if (response.ok) {
const data = await response.json();
setConfig(data);
} else {
setError('Failed to load configuration');
}
} catch (err) {
setError(`Error: ${err.message}`);
}
};
const updateConfig = async (updates) => {
setLoading(true);
try {
const response = await fetch(`${apiBaseUrl}/cameras/${cameraName}/config`, {
method: 'PUT',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify(updates)
});
if (response.ok) {
const result = await response.json();
console.log('Updated settings:', result.updated_settings);
await fetchConfig(); // Reload configuration
} else {
const error = await response.json();
setError(error.detail || 'Update failed');
}
} catch (err) {
setError(`Error: ${err.message}`);
} finally {
setLoading(false);
}
};
const handleSliderChange = (setting, value) => {
updateConfig({ [setting]: value });
};
if (!config) return <div>Loading configuration...</div>;
return (
<div className="camera-config">
<h3>Camera Configuration: {cameraName}</h3>
{/* System Information (Read-Only) */}
<div className="config-section">
<h4>System Information</h4>
<div className="info-grid">
<div><strong>Name:</strong> {config.name}</div>
<div><strong>Machine Topic:</strong> {config.machine_topic}</div>
<div><strong>Storage Path:</strong> {config.storage_path}</div>
<div><strong>Enabled:</strong> {config.enabled ? 'Yes' : 'No'}</div>
<div><strong>Auto Recording:</strong> {config.auto_start_recording_enabled ? 'Enabled' : 'Disabled'}</div>
<div><strong>Max Retries:</strong> {config.auto_recording_max_retries}</div>
<div><strong>Retry Delay:</strong> {config.auto_recording_retry_delay_seconds}s</div>
</div>
</div>
{/* Basic Settings */}
<div className="config-section">
<h4>Basic Settings</h4>
<div className="setting">
<label>Exposure (ms): {config.exposure_ms}</label>
<input
type="range"
min="0.1"
max="10"
step="0.1"
value={config.exposure_ms}
onChange={(e) => handleSliderChange('exposure_ms', parseFloat(e.target.value))}
/>
</div>
<div className="setting">
<label>Gain: {config.gain}</label>
<input
type="range"
min="0"
max="10"
step="0.1"
value={config.gain}
onChange={(e) => handleSliderChange('gain', parseFloat(e.target.value))}
/>
</div>
<div className="setting">
<label>Target FPS: {config.target_fps}</label>
<input
type="range"
min="0"
max="30"
step="1"
value={config.target_fps}
onChange={(e) => handleSliderChange('target_fps', parseInt(e.target.value))}
/>
</div>
</div>
{/* Image Quality Settings */}
<div className="config-section">
<h4>Image Quality</h4>
<div className="setting">
<label>Sharpness: {config.sharpness}</label>
<input
type="range"
min="0"
max="200"
value={config.sharpness}
onChange={(e) => handleSliderChange('sharpness', parseInt(e.target.value))}
/>
</div>
<div className="setting">
<label>Contrast: {config.contrast}</label>
<input
type="range"
min="0"
max="200"
value={config.contrast}
onChange={(e) => handleSliderChange('contrast', parseInt(e.target.value))}
/>
</div>
<div className="setting">
<label>Gamma: {config.gamma}</label>
<input
type="range"
min="0"
max="300"
value={config.gamma}
onChange={(e) => handleSliderChange('gamma', parseInt(e.target.value))}
/>
</div>
</div>
{/* White Balance RGB Gains */}
<div className="config-section">
<h4>White Balance RGB Gains</h4>
<div className="setting">
<label>Red Gain: {config.wb_red_gain}</label>
<input
type="range"
min="0"
max="3.99"
step="0.01"
value={config.wb_red_gain}
onChange={(e) => handleSliderChange('wb_red_gain', parseFloat(e.target.value))}
/>
</div>
<div className="setting">
<label>Green Gain: {config.wb_green_gain}</label>
<input
type="range"
min="0"
max="3.99"
step="0.01"
value={config.wb_green_gain}
onChange={(e) => handleSliderChange('wb_green_gain', parseFloat(e.target.value))}
/>
</div>
<div className="setting">
<label>Blue Gain: {config.wb_blue_gain}</label>
<input
type="range"
min="0"
max="3.99"
step="0.01"
value={config.wb_blue_gain}
onChange={(e) => handleSliderChange('wb_blue_gain', parseFloat(e.target.value))}
/>
</div>
</div>
{/* Advanced Settings */}
<div className="config-section">
<h4>Advanced Settings</h4>
<div className="setting">
<label>
<input
type="checkbox"
checked={config.anti_flicker_enabled}
onChange={(e) => updateConfig({ anti_flicker_enabled: e.target.checked })}
/>
Anti-flicker Enabled
</label>
</div>
<div className="setting">
<label>
<input
type="checkbox"
checked={config.auto_white_balance}
onChange={(e) => updateConfig({ auto_white_balance: e.target.checked })}
/>
Auto White Balance
</label>
</div>
<div className="setting">
<label>
<input
type="checkbox"
checked={config.hdr_enabled}
onChange={(e) => updateConfig({ hdr_enabled: e.target.checked })}
/>
HDR Enabled
</label>
</div>
</div>
{error && (
<div className="error" style={{ color: 'red', marginTop: '10px' }}>
{error}
</div>
)}
{loading && <div>Updating configuration...</div>}
</div>
);
};
export default CameraConfig;
```
## 🔄 Configuration Workflow
### 1. Real-time Adjustments
For settings that don't require restart:
```bash
# Update settings
curl -X PUT http://localhost:8000/cameras/camera1/config \
  -H "Content-Type: application/json" \
  -d '{"exposure_ms": 2.0}'
# Settings take effect immediately
# Continue recording/streaming without interruption
```
### 2. Settings Requiring Restart
For noise reduction and system settings:
```bash
# Update settings
curl -X PUT http://localhost:8000/cameras/camera1/config \
  -H "Content-Type: application/json" \
  -d '{"noise_filter_enabled": false}'

# Apply configuration (restarts camera)
curl -X POST http://localhost:8000/cameras/camera1/apply-config
# Camera reinitializes with new settings
```
## 🚨 Important Notes
### Camera State During Updates
- **Real-time settings**: Applied immediately, no interruption
- **Restart-required settings**: Saved to config, applied on next restart
- **Recording**: Continues during real-time updates
- **Streaming**: Continues during real-time updates
### Error Handling
- Invalid ranges return HTTP 422 with validation errors
- Camera not found returns HTTP 404
- SDK errors are logged and return HTTP 500
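For context, the 422 responses come from request-model validation. A minimal sketch of how such bounds can be declared with Pydantic (the bounds mirror the ranges tables above; the model name is illustrative):

```python
from pydantic import BaseModel, Field

class CameraConfigUpdate(BaseModel):
    # All fields optional; bounds mirror the documented setting ranges.
    exposure_ms: float | None = Field(default=None, ge=0.1, le=1000.0)
    gain: float | None = Field(default=None, ge=0.0, le=20.0)
    sharpness: int | None = Field(default=None, ge=0, le=200)
    wb_red_gain: float | None = Field(default=None, ge=0.0, le=3.99)

# FastAPI returns HTTP 422 automatically when a request body violates these bounds,
# e.g. {"exposure_ms": -1} fails the ge=0.1 constraint.
```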
### Performance Impact
- **Image quality settings**: Minimal performance impact
- **Noise reduction**: May reduce FPS when enabled
- **HDR**: Significant processing overhead when enabled
This comprehensive API allows you to control all camera settings programmatically, making it perfect for integration with React dashboards or automated optimization systems!

View File

@@ -0,0 +1,127 @@
# Blower Camera (Camera1) Configuration
This document describes the default configuration for the blower camera (Camera1) based on the GigE camera settings from the dedicated software.
## Camera Identification
- **Camera Name**: camera1 (Blower-Yield-Cam)
- **Machine Topic**: blower_separator
- **Purpose**: Monitors the blower separator machine
## Configuration Summary
Based on the camera settings screenshots, the following configuration has been applied to Camera1:
### Exposure Settings
- **Mode**: Manual (not Auto)
- **Exposure Time**: 1.0ms (1000μs)
- **Gain**: 3.5x (350 in camera units)
- **Anti-Flicker**: Enabled (the screenshots show 50Hz mode; the config sets 60Hz to match the local power frequency, see Notes)
### Color Processing Settings
- **White Balance Mode**: Manual (not Auto)
- **Color Temperature**: D65 (6500K)
- **RGB Gain Values**:
- Red Gain: 1.00
- Green Gain: 1.00
- Blue Gain: 1.00
- **Saturation**: 100 (normal)
### LUT (Look-Up Table) Settings
- **Mode**: Dynamically generated (not Preset or Custom)
- **Gamma**: 1.00 (100 in config units)
- **Contrast**: 100 (normal)
### Advanced Settings
- **Anti-Flicker**: Enabled
- **Light Frequency**: 60Hz (1 in config)
- **Bit Depth**: 8-bit
- **HDR**: Disabled
## Configuration Mapping
The screenshots show these key settings that have been mapped to the config.json:
| Screenshot Setting | Config Parameter | Value | Notes |
|-------------------|------------------|-------|-------|
| Manual Exposure | auto_exposure | false | Exposure mode set to manual |
| Time(ms): 1.0000 | exposure_ms | 1.0 | Exposure time in milliseconds |
| Gain(multiple): 3.500 | gain | 3.5 | Analog gain multiplier |
| Manual White Balance | auto_white_balance | false | Manual WB mode |
| Color Temperature: D65 | color_temperature_preset | 6500 | D65 = 6500K |
| Red Gain: 1.00 | wb_red_gain | 1.0 | Manual RGB gain |
| Green Gain: 1.00 | wb_green_gain | 1.0 | Manual RGB gain |
| Blue Gain: 1.00 | wb_blue_gain | 1.0 | Manual RGB gain |
| Saturation: 100 | saturation | 100 | Color saturation |
| Gamma: 1.00 | gamma | 100 | Gamma correction |
| Contrast: 100 | contrast | 100 | Image contrast |
| 50HZ Anti-Flicker | anti_flicker_enabled | true | Flicker reduction |
| 60Hz frequency | light_frequency | 1 | Power frequency |
## Current Configuration
The current config.json for camera1 includes:
```json
{
"name": "camera1",
"machine_topic": "blower_separator",
"storage_path": "/storage/camera1",
"exposure_ms": 1.0,
"gain": 3.5,
"target_fps": 0,
"enabled": true,
"auto_start_recording_enabled": true,
"auto_recording_max_retries": 3,
"auto_recording_retry_delay_seconds": 2,
"sharpness": 100,
"contrast": 100,
"saturation": 100,
"gamma": 100,
"noise_filter_enabled": false,
"denoise_3d_enabled": false,
"auto_white_balance": false,
"color_temperature_preset": 6500,
"anti_flicker_enabled": true,
"light_frequency": 1,
"bit_depth": 8,
"hdr_enabled": false,
"hdr_gain_mode": 0
}
```
## Camera Preview Enhancement
**Important Update**: The camera preview/streaming functionality has been enhanced to apply all default configuration settings from config.json, ensuring that preview images match the quality and appearance of recorded videos.
### What This Means for Camera1
When you view the camera preview, you'll now see:
- **Manual exposure** (1.0ms) and **high gain** (3.5x) applied
- **50Hz anti-flicker** filtering active
- **Manual white balance** with balanced RGB gains (1.0, 1.0, 1.0)
- **Standard image processing** (sharpness: 100, contrast: 100, gamma: 100, saturation: 100)
- **D65 color temperature** (6500K) applied
This ensures the preview accurately represents what will be recorded.
## Notes
1. **Machine Topic Correction**: The machine topic has been corrected from "vibratory_conveyor" to "blower_separator" to match the camera's actual monitoring purpose.
2. **Manual White Balance**: The camera is configured for manual white balance with D65 color temperature, which is appropriate for daylight conditions.
3. **RGB Gain Support**: The current configuration system needs to be extended to support individual RGB gain values for manual white balance fine-tuning.
4. **Anti-Flicker**: Enabled to reduce artificial lighting interference, set to 60Hz to match North American power frequency.
5. **LUT Mode**: The camera uses dynamically generated LUT with gamma=1.00 and contrast=100, which provides linear response.
## Future Enhancements
To fully support all settings shown in the screenshots, the following parameters should be added to the configuration system:
- `wb_red_gain`: Red channel gain for manual white balance (0.0-3.99)
- `wb_green_gain`: Green channel gain for manual white balance (0.0-3.99)
- `wb_blue_gain`: Blue channel gain for manual white balance (0.0-3.99)
- `lut_mode`: LUT generation mode (0=dynamic, 1=preset, 2=custom)
- `lut_preset`: Preset LUT selection when using preset mode

View File

@@ -0,0 +1,150 @@
# Conveyor Camera (Camera2) Configuration
This document describes the default configuration for the conveyor camera (Camera2) based on the GigE camera settings from the dedicated software.
## Camera Identification
- **Camera Name**: camera2 (Cracker-Cam)
- **Machine Topic**: vibratory_conveyor
- **Purpose**: Monitors the vibratory conveyor/cracker machine
## Configuration Summary
Based on the camera settings screenshots, the following configuration has been applied to Camera2:
### Color Processing Settings
- **White Balance Mode**: Manual (not Auto)
- **Color Temperature**: D65 (6500K)
- **RGB Gain Values**:
- Red Gain: 1.01
- Green Gain: 1.00
- Blue Gain: 0.87
- **Saturation**: 100 (normal)
### LUT (Look-Up Table) Settings
- **Mode**: Dynamically generated (not Preset or Custom)
- **Gamma**: 1.00 (100 in config units)
- **Contrast**: 100 (normal)
### Graphic Processing Settings
- **Sharpness Level**: 0 (no sharpening applied)
- **Noise Reduction**:
- Denoise2D: Disabled
- Denoise3D: Disabled
- **Rotation**: Disabled
- **Lens Distortion Correction**: Disabled
- **Dead Pixel Correction**: Enabled
- **Flat Fielding Correction**: Disabled
## Configuration Mapping
The screenshots show these key settings that have been mapped to the config.json:
| Screenshot Setting | Config Parameter | Value | Notes |
|-------------------|------------------|-------|-------|
| Manual White Balance | auto_white_balance | false | Manual WB mode |
| Color Temperature: D65 | color_temperature_preset | 6500 | D65 = 6500K |
| Red Gain: 1.01 | wb_red_gain | 1.01 | Manual RGB gain |
| Green Gain: 1.00 | wb_green_gain | 1.0 | Manual RGB gain |
| Blue Gain: 0.87 | wb_blue_gain | 0.87 | Manual RGB gain |
| Saturation: 100 | saturation | 100 | Color saturation |
| Gamma: 1.00 | gamma | 100 | Gamma correction |
| Contrast: 100 | contrast | 100 | Image contrast |
| Sharpen Level: 0 | sharpness | 0 | No sharpening |
| Denoise2D: Disabled | noise_filter_enabled | false | Basic noise filter off |
| Denoise3D: Disabled | denoise_3d_enabled | false | Advanced denoising off |
## Current Configuration
The current config.json for camera2 includes:
```json
{
"name": "camera2",
"machine_topic": "vibratory_conveyor",
"storage_path": "/storage/camera2",
"exposure_ms": 0.5,
"gain": 0.3,
"target_fps": 0,
"enabled": true,
"auto_start_recording_enabled": true,
"auto_recording_max_retries": 3,
"auto_recording_retry_delay_seconds": 2,
"sharpness": 0,
"contrast": 100,
"saturation": 100,
"gamma": 100,
"noise_filter_enabled": false,
"denoise_3d_enabled": false,
"auto_white_balance": false,
"color_temperature_preset": 6500,
"wb_red_gain": 1.01,
"wb_green_gain": 1.0,
"wb_blue_gain": 0.87,
"anti_flicker_enabled": false,
"light_frequency": 1,
"bit_depth": 8,
"hdr_enabled": false,
"hdr_gain_mode": 0
}
```
## Key Differences from Camera1 (Blower Camera)
1. **RGB Gain Tuning**: Camera2 has custom RGB gains (R:1.01, G:1.00, B:0.87) vs Camera1's balanced gains (all 1.0)
2. **Sharpness**: Camera2 has sharpness disabled (0) vs Camera1's normal sharpness (100)
3. **Exposure/Gain**: Camera2 uses lower exposure (0.5ms) and gain (0.3x) vs Camera1's higher values (1.0ms, 3.5x)
4. **Anti-Flicker**: Camera2 has anti-flicker disabled vs Camera1's enabled anti-flicker
## Notes
1. **Custom White Balance**: Camera2 uses manual white balance with custom RGB gains, suggesting specific lighting conditions or color correction requirements for the conveyor monitoring.
2. **No Sharpening**: Sharpness is set to 0, indicating the raw image quality is preferred without artificial enhancement.
3. **Minimal Noise Reduction**: Both 2D and 3D denoising are disabled, prioritizing image authenticity over noise reduction.
4. **Dead Pixel Correction**: Enabled to handle any defective pixels on the sensor.
5. **Lower Sensitivity**: The lower exposure and gain settings suggest better lighting conditions or different monitoring requirements compared to the blower camera.
## Camera Preview Enhancement
**Important Update**: The camera preview/streaming functionality has been enhanced to apply all default configuration settings from config.json, ensuring that preview images match the quality and appearance of recorded videos.
### What Changed
Previously, camera preview only applied basic settings (exposure, gain, trigger mode). Now, the preview applies the complete configuration including:
- **Image Quality**: Sharpness, contrast, gamma, saturation
- **Color Processing**: White balance mode, color temperature, RGB gains
- **Advanced Settings**: Anti-flicker, light frequency, HDR settings
- **Noise Reduction**: Filter and 3D denoising settings (where supported)
### Benefits
1. **WYSIWYG Preview**: What you see in the preview is exactly what gets recorded
2. **Accurate Color Representation**: Manual white balance and RGB gains are applied to preview
3. **Consistent Image Quality**: Sharpness, contrast, and gamma settings match recording
4. **Proper Exposure**: Anti-flicker and lighting frequency settings are applied
### Technical Implementation
The `CameraStreamer` class now includes the same comprehensive configuration methods as `CameraRecorder`:
- `_configure_image_quality()`: Applies sharpness, contrast, gamma, saturation
- `_configure_color_settings()`: Applies white balance mode, color temperature, RGB gains
- `_configure_advanced_settings()`: Applies anti-flicker, light frequency, HDR
- `_configure_noise_reduction()`: Applies noise filter settings
These methods are called during camera initialization for streaming, ensuring all config.json settings are applied.
## Future Enhancements
Additional parameters that could be added to support all graphic processing features:
- `rotation_angle`: Image rotation (0, 90, 180, 270 degrees)
- `lens_distortion_correction`: Enable/disable lens distortion correction
- `dead_pixel_correction`: Enable/disable dead pixel correction
- `flat_fielding_correction`: Enable/disable flat fielding correction
- `mirror_horizontal`: Horizontal mirroring
- `mirror_vertical`: Vertical mirroring

View File

@@ -0,0 +1,159 @@
# Camera Preview Enhancement
## Overview
The camera preview/streaming functionality has been significantly enhanced to apply all default configuration settings from `config.json`, ensuring that preview images accurately represent what will be recorded.
## Problem Solved
Previously, camera preview only applied basic settings (exposure, gain, trigger mode, frame rate), while recording applied the full configuration. This meant:
- Preview images looked different from recorded videos
- Color balance, sharpness, and other image quality settings were not visible in preview
- Users couldn't accurately assess the final recording quality from the preview
## Solution Implemented
The `CameraStreamer` class has been enhanced with comprehensive configuration methods that mirror those in `CameraRecorder`:
### New Configuration Methods Added
1. **`_configure_image_quality()`**
- Applies sharpness settings (0-200)
- Applies contrast settings (0-200)
- Applies gamma correction (0-300)
- Applies saturation for color cameras (0-200)
2. **`_configure_color_settings()`**
- Sets white balance mode (auto/manual)
- Applies color temperature presets
- Sets manual RGB gains for precise color tuning
3. **`_configure_advanced_settings()`**
- Enables/disables anti-flicker filtering
- Sets light frequency (50Hz/60Hz)
- Configures HDR settings when available
4. **`_configure_noise_reduction()`**
- Configures noise filter settings
- Configures 3D denoising settings
### Enhanced Main Configuration Method
The `_configure_streaming_settings()` method now calls all configuration methods:
```python
def _configure_streaming_settings(self):
"""Configure camera settings from config.json for streaming"""
try:
# Basic settings (existing)
mvsdk.CameraSetTriggerMode(self.hCamera, 0)
mvsdk.CameraSetAeState(self.hCamera, 0)
exposure_us = int(self.camera_config.exposure_ms * 1000)
mvsdk.CameraSetExposureTime(self.hCamera, exposure_us)
gain_value = int(self.camera_config.gain * 100)
mvsdk.CameraSetAnalogGain(self.hCamera, gain_value)
# Comprehensive configuration (new)
self._configure_image_quality()
self._configure_noise_reduction()
if not self.monoCamera:
self._configure_color_settings()
self._configure_advanced_settings()
except Exception as e:
self.logger.warning(f"Could not configure some streaming settings: {e}")
```
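As an illustration, here is a sketch of what `_configure_color_settings()` can look like. The SDK calls (`CameraSetWbMode`, `CameraSetGain`) and the 1.0 → 100 gain scaling are assumptions based on the vendor's python demo, not a copy of the actual implementation:

```python
def _configure_color_settings(self):
    """Sketch: apply manual white balance from config.json to the streaming camera.

    Assumes the vendor SDK exposes CameraSetWbMode and CameraSetGain as in the
    MindVision python demo, with RGB gains scaled so that 1.0 -> 100.
    """
    try:
        # TRUE = auto white balance, FALSE = manual (assumed SDK convention)
        mvsdk.CameraSetWbMode(self.hCamera, self.camera_config.auto_white_balance)
        if not self.camera_config.auto_white_balance:
            mvsdk.CameraSetGain(
                self.hCamera,
                int(self.camera_config.wb_red_gain * 100),
                int(self.camera_config.wb_green_gain * 100),
                int(self.camera_config.wb_blue_gain * 100),
            )
    except Exception as e:
        self.logger.warning(f"Could not configure color settings: {e}")
```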
## Benefits
### 1. WYSIWYG Preview
- **What You See Is What You Get**: Preview now accurately represents final recording quality
- **Real-time Assessment**: Users can evaluate recording quality before starting actual recording
- **Consistent Experience**: No surprises when comparing preview to recorded footage
### 2. Accurate Color Representation
- **Manual White Balance**: RGB gains are applied to preview for accurate color reproduction
- **Color Temperature**: D65 or other presets are applied consistently
- **Saturation**: Color intensity matches recording settings
### 3. Proper Image Quality
- **Sharpness**: Edge enhancement settings are visible in preview
- **Contrast**: Dynamic range adjustments are applied
- **Gamma**: Brightness curve corrections are active
### 4. Environmental Adaptation
- **Anti-Flicker**: Artificial lighting interference is filtered in preview
- **Light Frequency**: 50Hz/60Hz settings match local power grid
- **HDR**: High dynamic range processing when enabled
## Camera-Specific Impact
### Camera1 (Blower Separator)
Preview now shows:
- Manual exposure (1.0ms) and high gain (3.5x)
- 50Hz anti-flicker filtering
- Manual white balance with balanced RGB gains (1.0, 1.0, 1.0)
- Standard image processing (sharpness: 100, contrast: 100, gamma: 100, saturation: 100)
- D65 color temperature (6500K)
### Camera2 (Conveyor/Cracker)
Preview now shows:
- Manual exposure (0.5ms) and lower gain (0.3x)
- Custom RGB color tuning (R:1.01, G:1.00, B:0.87)
- No image sharpening (sharpness: 0)
- Standard saturation (100) and gamma (100)
- D65 color temperature with manual white balance
## Technical Implementation Details
### Error Handling
- All configuration methods include try-catch blocks
- Warnings are logged for unsupported features
- Graceful degradation when SDK functions are unavailable
- Streaming continues even if some settings fail to apply
### SDK Compatibility
- Checks for function availability before calling
- Handles different SDK versions gracefully
- Logs informational messages for unavailable features
### Performance Considerations
- Configuration is applied once during camera initialization
- No performance impact on streaming frame rate
- Separate camera instance for streaming (doesn't interfere with recording)
## Usage
No changes required for users - the enhancement is automatic:
1. **Start Preview**: Use existing preview endpoints
2. **View Stream**: Camera automatically applies all config.json settings
3. **Compare**: Preview now matches recording quality exactly
### API Endpoints (unchanged)
- `GET /cameras/{camera_name}/stream` - Get live MJPEG stream
- `POST /cameras/{camera_name}/start-stream` - Start streaming
- `POST /cameras/{camera_name}/stop-stream` - Stop streaming
## Future Enhancements
Additional settings that could be added to further improve preview accuracy:
1. **Geometric Corrections**
- Lens distortion correction
- Dead pixel correction
- Flat fielding correction
2. **Image Transformations**
- Rotation (90°, 180°, 270°)
- Horizontal/vertical mirroring
3. **Advanced Processing**
- Custom LUT (Look-Up Table) support
- Advanced noise reduction algorithms
- Real-time image enhancement filters
## Conclusion
This enhancement significantly improves the user experience by providing accurate, real-time preview of camera output with all configuration settings applied. Users can now confidently assess recording quality, adjust settings, and ensure optimal camera performance before starting critical recordings.

View File

@@ -0,0 +1,262 @@
# Auto-Recording Feature Implementation Guide
## 🎯 Overview for React App Development
This document provides a comprehensive guide for updating the React application to support the new auto-recording feature that was added to the USDA Vision Camera System.
> **📚 For complete API reference**: See the [USDA Vision Camera System API Documentation](../API_DOCUMENTATION.md) for detailed endpoint specifications and examples.
## 📋 What Changed in the Backend
### New API Endpoints Added
1. **Enable Auto-Recording**
```http
POST /cameras/{camera_name}/auto-recording/enable
Response: AutoRecordingConfigResponse
```
2. **Disable Auto-Recording**
```http
POST /cameras/{camera_name}/auto-recording/disable
Response: AutoRecordingConfigResponse
```
3. **Get Auto-Recording Status**
```http
GET /auto-recording/status
Response: AutoRecordingStatusResponse
```
### Updated API Responses
#### CameraStatusResponse (Updated)
```typescript
interface CameraStatusResponse {
name: string;
status: string;
is_recording: boolean;
last_checked: string;
last_error?: string;
device_info?: any;
current_recording_file?: string;
recording_start_time?: string;
// NEW AUTO-RECORDING FIELDS
auto_recording_enabled: boolean;
auto_recording_active: boolean;
auto_recording_failure_count: number;
auto_recording_last_attempt?: string;
auto_recording_last_error?: string;
}
```
#### CameraConfigResponse (Updated)
```typescript
interface CameraConfigResponse {
name: string;
machine_topic: string;
storage_path: string;
enabled: boolean;
// NEW AUTO-RECORDING CONFIG FIELDS
auto_start_recording_enabled: boolean;
auto_recording_max_retries: number;
auto_recording_retry_delay_seconds: number;
// ... existing fields (exposure_ms, gain, etc.)
}
```
#### New Response Types
```typescript
interface AutoRecordingConfigResponse {
success: boolean;
message: string;
camera_name: string;
enabled: boolean;
}
interface AutoRecordingStatusResponse {
running: boolean;
auto_recording_enabled: boolean;
retry_queue: Record<string, any>;
enabled_cameras: string[];
}
```
## 🎨 React App UI Requirements
### 1. Camera Status Display Updates
**Add to Camera Cards/Components:**
- Auto-recording enabled/disabled indicator
- Auto-recording active status (when machine is ON and auto-recording)
- Failure count display (if > 0)
- Last auto-recording error (if any)
- Visual distinction between manual and auto-recording
**Example UI Elements:**
```jsx
// Auto-recording status badge
{camera.auto_recording_enabled && (
<Badge variant={camera.auto_recording_active ? "success" : "secondary"}>
Auto-Recording {camera.auto_recording_active ? "Active" : "Enabled"}
</Badge>
)}
// Failure indicator
{camera.auto_recording_failure_count > 0 && (
<Alert variant="warning">
Auto-recording failures: {camera.auto_recording_failure_count}
</Alert>
)}
```
### 2. Auto-Recording Controls
**Add Toggle Controls:**
- Enable/Disable auto-recording per camera
- Global auto-recording status display
- Retry queue monitoring
**Example Control Component:**
```jsx
const AutoRecordingToggle = ({ camera, onToggle }) => {
const handleToggle = async () => {
const endpoint = camera.auto_recording_enabled ? 'disable' : 'enable';
await fetch(`/cameras/${camera.name}/auto-recording/${endpoint}`, {
method: 'POST'
});
onToggle();
};
return (
<Switch
checked={camera.auto_recording_enabled}
onChange={handleToggle}
label="Auto-Recording"
/>
);
};
```
### 3. Machine State Integration
**Display Machine Status:**
- Show which machine each camera monitors
- Display current machine state (ON/OFF)
- Show correlation between machine state and recording status
**Camera-Machine Mapping:**
- Camera 1 → Blower Separator (blower/yield cam)
- Camera 2 → Vibratory Conveyor (conveyor/cracker cam)
### 4. Auto-Recording Dashboard
**Create New Dashboard Section:**
- Overall auto-recording system status
- List of cameras with auto-recording enabled
- Active retry queue display
- Recent auto-recording events/logs
## 🔧 Implementation Steps for React App
### Step 1: Update TypeScript Interfaces
```typescript
// Update existing interfaces in your types file
// Add new interfaces for auto-recording responses
```
### Step 2: Update API Service Functions
```typescript
// Add new API calls
export const enableAutoRecording = (cameraName: string) =>
fetch(`/cameras/${cameraName}/auto-recording/enable`, { method: 'POST' });
export const disableAutoRecording = (cameraName: string) =>
fetch(`/cameras/${cameraName}/auto-recording/disable`, { method: 'POST' });
export const getAutoRecordingStatus = () =>
fetch('/auto-recording/status').then(res => res.json());
```
### Step 3: Update Camera Components
- Add auto-recording status indicators
- Add enable/disable controls
- Update recording status display to distinguish auto vs manual
### Step 4: Create Auto-Recording Management Panel
- System-wide auto-recording status
- Per-camera auto-recording controls
- Retry queue monitoring
- Error reporting and alerts
### Step 5: Update State Management
```typescript
// Add auto-recording state to your store/context
interface AppState {
cameras: CameraStatusResponse[];
autoRecordingStatus: AutoRecordingStatusResponse;
// ... existing state
}
```
## 🎯 Key User Experience Considerations
### Visual Indicators
1. **Recording Status Hierarchy:**
- Manual Recording (highest priority - red/prominent)
- Auto-Recording Active (green/secondary)
- Auto-Recording Enabled but Inactive (blue/subtle)
- Auto-Recording Disabled (gray/muted)
2. **Machine State Correlation:**
- Show machine ON/OFF status next to camera
- Indicate when auto-recording should be active
- Alert if machine is ON but auto-recording failed
3. **Error Handling:**
- Clear error messages for auto-recording failures
- Retry count display
- Last attempt timestamp
- Quick retry/reset options
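The status hierarchy in point 1 above can be encoded as a small helper so every component renders recording states consistently. A sketch, assuming `is_recording` is the field name for an active recording session:
```typescript
// Map a camera's recording state to a badge style, following the
// priority order above. A recording session without auto-recording
// active is treated as manual.
type BadgeVariant = "danger" | "success" | "info" | "muted";

function recordingBadge(camera: {
  is_recording?: boolean; // assumed field name
  auto_recording_enabled: boolean;
  auto_recording_active: boolean;
}): { label: string; variant: BadgeVariant } {
  if (camera.is_recording && !camera.auto_recording_active)
    return { label: "Recording (Manual)", variant: "danger" };
  if (camera.auto_recording_active)
    return { label: "Auto-Recording Active", variant: "success" };
  if (camera.auto_recording_enabled)
    return { label: "Auto-Recording Enabled", variant: "info" };
  return { label: "Auto-Recording Disabled", variant: "muted" };
}
```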
### User Controls
1. **Quick Actions:**
- Toggle auto-recording per camera
- Force retry failed auto-recording
- Override auto-recording (manual control)
2. **Configuration:**
- Adjust retry settings
- Change machine-camera mappings
- Set recording parameters for auto-recording
## 🚨 Important Notes
### Behavior Rules
1. **Manual Override:** Manual recording always takes precedence over auto-recording
2. **Non-Blocking:** Auto-recording status checks don't interfere with camera operation
3. **Machine Correlation:** Auto-recording only activates when the associated machine turns ON
4. **Failure Handling:** Failed auto-recording attempts are retried automatically with exponential backoff
### API Polling Recommendations
- Poll camera status every 2-3 seconds for real-time updates
- Poll auto-recording status every 5-10 seconds
- Use WebSocket connections if available for real-time machine state updates
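A minimal polling setup following these intervals (endpoints match the ones listed in this guide; error handling omitted for brevity):
```typescript
// Poll camera and auto-recording status at the recommended intervals.
// Returns a cleanup function that stops both timers.
function startStatusPolling(
  onCameras: (cameras: unknown) => void,
  onAutoRecording: (status: unknown) => void
): () => void {
  const cameraTimer = setInterval(async () => {
    onCameras(await (await fetch("/cameras")).json());
  }, 3000); // every 2-3 seconds

  const autoTimer = setInterval(async () => {
    onAutoRecording(await (await fetch("/auto-recording/status")).json());
  }, 10000); // every 5-10 seconds

  return () => {
    clearInterval(cameraTimer);
    clearInterval(autoTimer);
  };
}
```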
## 📱 Mobile Considerations
- Auto-recording controls should be easily accessible on mobile
- Status indicators should be clear and readable on small screens
- Consider collapsible sections for detailed auto-recording information
## 🔍 Testing Checklist
- [ ] Auto-recording toggle works for each camera
- [ ] Status updates reflect machine state changes
- [ ] Error states are clearly displayed
- [ ] Manual recording overrides auto-recording
- [ ] Retry mechanism is visible to users
- [ ] Mobile interface is functional
This guide provides everything needed to update the React app to fully support the new auto-recording feature!

View File

@@ -0,0 +1,158 @@
# Camera Recovery and Diagnostics Guide
This guide explains the new camera recovery functionality implemented in the USDA Vision Camera System API.
## Overview
The system now includes comprehensive camera recovery capabilities to handle connection issues, initialization failures, and other camera-related problems. These features use the underlying mvsdk (python demo) library functions to perform various recovery operations.
## Available Recovery Operations
### 1. Connection Test (`/cameras/{camera_name}/test-connection`)
- **Purpose**: Test if the camera connection is working
- **SDK Function**: `CameraConnectTest()`
- **Use Case**: Diagnose connection issues
- **HTTP Method**: POST
- **Response**: `CameraTestResponse`
### 2. Reconnect (`/cameras/{camera_name}/reconnect`)
- **Purpose**: Soft reconnection to the camera
- **SDK Function**: `CameraReConnect()`
- **Use Case**: Most common fix for connection issues
- **HTTP Method**: POST
- **Response**: `CameraRecoveryResponse`
### 3. Restart Grab (`/cameras/{camera_name}/restart-grab`)
- **Purpose**: Restart the camera grab process
- **SDK Function**: `CameraRestartGrab()`
- **Use Case**: Fix issues with image capture
- **HTTP Method**: POST
- **Response**: `CameraRecoveryResponse`
### 4. Reset Timestamp (`/cameras/{camera_name}/reset-timestamp`)
- **Purpose**: Reset camera timestamp
- **SDK Function**: `CameraRstTimeStamp()`
- **Use Case**: Fix timing-related issues
- **HTTP Method**: POST
- **Response**: `CameraRecoveryResponse`
### 5. Full Reset (`/cameras/{camera_name}/full-reset`)
- **Purpose**: Complete camera reset (uninitialize and reinitialize)
- **SDK Functions**: `CameraUnInit()` + `CameraInit()`
- **Use Case**: Hard reset for persistent issues
- **HTTP Method**: POST
- **Response**: `CameraRecoveryResponse`
### 6. Reinitialize (`/cameras/{camera_name}/reinitialize`)
- **Purpose**: Reinitialize cameras that failed initial setup
- **SDK Functions**: Complete recorder recreation
- **Use Case**: Cameras that never initialized properly
- **HTTP Method**: POST
- **Response**: `CameraRecoveryResponse`
## Recommended Troubleshooting Workflow
When a camera has issues, follow this order:
1. **Test Connection** - Diagnose the problem
```http
POST http://localhost:8000/cameras/camera1/test-connection
```
2. **Try Reconnect** - Most common fix
```http
POST http://localhost:8000/cameras/camera1/reconnect
```
3. **Restart Grab** - If reconnect doesn't work
```http
POST http://localhost:8000/cameras/camera1/restart-grab
```
4. **Full Reset** - For persistent issues
```http
POST http://localhost:8000/cameras/camera1/full-reset
```
5. **Reinitialize** - For cameras that never worked
```http
POST http://localhost:8000/cameras/camera1/reinitialize
```
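The same escalation can be scripted against the API. A minimal TypeScript sketch (assumes Node 18+ for built-in `fetch`), stopping at the first operation that reports success:
```typescript
// Walk the recovery ladder for one camera, in the recommended order.
// A passing connection test means no further recovery is needed.
const BASE = "http://localhost:8000";

async function recoverCamera(camera: string): Promise<string | null> {
  const steps = ["test-connection", "reconnect", "restart-grab", "full-reset", "reinitialize"];
  for (const step of steps) {
    const res = await fetch(`${BASE}/cameras/${camera}/${step}`, { method: "POST" });
    const body: { success: boolean; message: string } = await res.json();
    console.log(`${step}: ${body.message}`);
    if (body.success) return step; // recovered (or connection was fine)
  }
  return null; // every operation failed
}

recoverCamera("camera1").then((s) => console.log(s ? `OK after ${s}` : "Recovery failed"));
```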
## Response Format
All recovery operations return structured responses:
### CameraTestResponse
```json
{
"success": true,
"message": "Camera camera1 connection test passed",
"camera_name": "camera1",
"timestamp": "2024-01-01T12:00:00"
}
```
### CameraRecoveryResponse
```json
{
"success": true,
"message": "Camera camera1 reconnected successfully",
"camera_name": "camera1",
"operation": "reconnect",
"timestamp": "2024-01-01T12:00:00"
}
```
## Implementation Details
### CameraRecorder Methods
- `test_connection()`: Tests camera connection
- `reconnect()`: Performs soft reconnection
- `restart_grab()`: Restarts grab process
- `reset_timestamp()`: Resets timestamp
- `full_reset()`: Complete reset with cleanup and reinitialization
### CameraManager Methods
- `test_camera_connection(camera_name)`: Test specific camera
- `reconnect_camera(camera_name)`: Reconnect specific camera
- `restart_camera_grab(camera_name)`: Restart grab for specific camera
- `reset_camera_timestamp(camera_name)`: Reset timestamp for specific camera
- `full_reset_camera(camera_name)`: Full reset for specific camera
- `reinitialize_failed_camera(camera_name)`: Reinitialize failed camera
### State Management
All recovery operations automatically update the camera status in the state manager:
- Success: Status set to "connected"
- Failure: Status set to appropriate error state with error message
## Error Handling
The system includes comprehensive error handling:
- SDK exceptions are caught and logged
- State manager is updated with error information
- Proper HTTP status codes are returned
- Detailed error messages are provided
## Testing
Use the provided test files:
- `api-tests.http`: Manual API testing with VS Code REST Client
- `test_camera_recovery_api.py`: Automated testing script
## Safety Features
- Recording is automatically stopped before recovery operations
- Camera resources are properly cleaned up
- Thread-safe operations with proper locking
- Graceful error handling prevents system crashes
## Common Use Cases
1. **Camera Lost Connection**: Use reconnect
2. **Camera Won't Capture**: Use restart-grab
3. **Camera Initialization Failed**: Use reinitialize
4. **Persistent Issues**: Use full-reset
5. **Timing Problems**: Use reset-timestamp
This recovery system provides robust tools to handle most camera-related issues without requiring system restart or manual intervention.

View File

@@ -0,0 +1,187 @@
# MQTT Console Logging & API Guide
## 🎯 Overview
Your USDA Vision Camera System now has **enhanced MQTT console logging** and **comprehensive API endpoints** for monitoring machine status via MQTT.
## ✨ What's New
### 1. **Enhanced Console Logging**
- **Colorful emoji-based console output** for all MQTT events
- **Real-time visibility** of MQTT connections, subscriptions, and messages
- **Clear status indicators** for debugging and monitoring
### 2. **New MQTT Status API Endpoint**
- **GET /mqtt/status** - Detailed MQTT client statistics
- **Message counts, error tracking, uptime monitoring**
- **Real-time connection status and broker information**
### 3. **Existing Machine Status APIs** (already available)
- **GET /machines** - All machine states from MQTT
- **GET /system/status** - Overall system status including MQTT
## 🖥️ Console Logging Examples
When you run the system, you'll see:
```bash
🔗 MQTT CONNECTED: 192.168.1.110:1883
📋 MQTT SUBSCRIBED: vibratory_conveyor → vision/vibratory_conveyor/state
📋 MQTT SUBSCRIBED: blower_separator → vision/blower_separator/state
📡 MQTT MESSAGE: vibratory_conveyor → on
📡 MQTT MESSAGE: blower_separator → off
⚠️ MQTT DISCONNECTED: Unexpected disconnection (code: 1)
🔗 MQTT CONNECTED: 192.168.1.110:1883
```
## 🌐 API Endpoints
### MQTT Status
```http
GET http://localhost:8000/mqtt/status
```
**Response:**
```json
{
"connected": true,
"broker_host": "192.168.1.110",
"broker_port": 1883,
"subscribed_topics": [
"vision/vibratory_conveyor/state",
"vision/blower_separator/state"
],
"last_message_time": "2025-07-28T12:00:00",
"message_count": 42,
"error_count": 0,
"uptime_seconds": 3600.5
}
```
### Machine Status
```http
GET http://localhost:8000/machines
```
**Response:**
```json
{
"vibratory_conveyor": {
"name": "vibratory_conveyor",
"state": "on",
"last_updated": "2025-07-28T12:00:00",
"last_message": "on",
"mqtt_topic": "vision/vibratory_conveyor/state"
},
"blower_separator": {
"name": "blower_separator",
"state": "off",
"last_updated": "2025-07-28T12:00:00",
"last_message": "off",
"mqtt_topic": "vision/blower_separator/state"
}
}
```
### System Status
```http
GET http://localhost:8000/system/status
```
**Response:**
```json
{
"system_started": true,
"mqtt_connected": true,
"last_mqtt_message": "2025-07-28T12:00:00",
"machines": { ... },
"cameras": { ... },
"active_recordings": 0,
"total_recordings": 5,
"uptime_seconds": 3600.5
}
```
## 🚀 How to Use
### 1. **Start the Full System**
```bash
python main.py
```
You'll see enhanced console logging for all MQTT events.
### 2. **Test MQTT Demo (MQTT only)**
```bash
python demo_mqtt_console.py
```
Shows just the MQTT client with enhanced logging.
### 3. **Test API Endpoints**
```bash
python test_mqtt_logging.py
```
Tests all the API endpoints and shows expected responses.
### 4. **Query APIs Directly**
```bash
# Check MQTT status
curl http://localhost:8000/mqtt/status
# Check machine states
curl http://localhost:8000/machines
# Check overall system status
curl http://localhost:8000/system/status
```
## 🔧 Configuration
The MQTT settings are in `config.json`:
```json
{
"mqtt": {
"broker_host": "192.168.1.110",
"broker_port": 1883,
"username": null,
"password": null,
"topics": {
"vibratory_conveyor": "vision/vibratory_conveyor/state",
"blower_separator": "vision/blower_separator/state"
}
}
}
```
## 🎨 Console Output Features
- **🔗 Connection Events**: Green for successful connections
- **📋 Subscriptions**: Blue for topic subscriptions
- **📡 Messages**: Real-time message display with machine name and payload
- **⚠️ Warnings**: Yellow for unexpected disconnections
- **❌ Errors**: Red for connection failures and errors
- **❓ Unknown Topics**: Purple for unrecognized MQTT topics
## 📊 Monitoring & Debugging
### Real-time Monitoring
- **Console**: Watch live MQTT events as they happen
- **API**: Query `/mqtt/status` for statistics and health
- **Logs**: Check `usda_vision_system.log` for detailed logs
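The same endpoint can back a simple automated watchdog. A sketch that flags a disconnected or error-prone client (field names match the `/mqtt/status` response shown above; the thresholds are arbitrary examples):
```typescript
// Poll /mqtt/status and log a warning when the client looks unhealthy.
async function checkMqttHealth(base = "http://localhost:8000"): Promise<void> {
  const status = await (await fetch(`${base}/mqtt/status`)).json();
  if (!status.connected) {
    console.error(`MQTT disconnected from ${status.broker_host}:${status.broker_port}`);
  } else if (status.error_count > 0) {
    console.warn(`MQTT errors seen: ${status.error_count}`);
  } else {
    console.log(`MQTT healthy: ${status.message_count} messages, up ${status.uptime_seconds}s`);
  }
}
```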
### Troubleshooting
1. **No MQTT messages?** Check broker connectivity and topic configuration
2. **Connection issues?** Verify broker host/port in config.json
3. **API not responding?** Ensure the system is running with `python main.py`
## 🎯 Use Cases
1. **Development**: See MQTT messages in real-time while developing
2. **Debugging**: Identify connection issues and message patterns
3. **Monitoring**: Use APIs to build dashboards or monitoring tools
4. **Integration**: Query machine states from external applications
5. **Maintenance**: Track MQTT statistics and error rates
---
**🎉 Your MQTT monitoring is now fully enhanced with both console logging and comprehensive APIs!**

View File

@@ -0,0 +1,240 @@
# 🎥 USDA Vision Camera Live Streaming Guide
This guide explains how to use the new live preview streaming functionality that allows you to view camera feeds in real-time without blocking recording operations.
## 🌟 Key Features
- **Non-blocking streaming**: Live preview doesn't interfere with recording
- **Separate camera connections**: Streaming uses independent camera instances
- **MJPEG streaming**: Standard web-compatible video streaming
- **Multiple concurrent viewers**: Multiple browsers can view the same stream
- **REST API control**: Start/stop streaming via API endpoints
- **Web interface**: Ready-to-use HTML interface for live preview
## 🏗️ Architecture
The streaming system creates separate camera connections for preview that are independent from recording:
```
Camera Hardware
├── Recording Connection (CameraRecorder)
│ ├── Used for video file recording
│ ├── Triggered by MQTT machine states
│ └── High quality, full FPS
└── Streaming Connection (CameraStreamer)
├── Used for live preview
├── Controlled via API endpoints
└── Optimized for web viewing (lower FPS, JPEG compression)
```
## 🚀 Quick Start
### 1. Start the System
```bash
python main.py
```
### 2. Open the Web Interface
Open `camera_preview.html` in your browser and click "Start Stream" for any camera.
### 3. API Usage
```bash
# Start streaming for camera1
curl -X POST http://localhost:8000/cameras/camera1/start-stream
# View live stream (open in browser)
http://localhost:8000/cameras/camera1/stream
# Stop streaming
curl -X POST http://localhost:8000/cameras/camera1/stop-stream
```
## 📡 API Endpoints
### Start Streaming
```http
POST /cameras/{camera_name}/start-stream
```
**Response:**
```json
{
"success": true,
"message": "Started streaming for camera camera1"
}
```
### Stop Streaming
```http
POST /cameras/{camera_name}/stop-stream
```
**Response:**
```json
{
"success": true,
"message": "Stopped streaming for camera camera1"
}
```
### Live Stream (MJPEG)
```http
GET /cameras/{camera_name}/stream
```
**Response:** Multipart MJPEG stream
**Content-Type:** `multipart/x-mixed-replace; boundary=frame`
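Because this is standard MJPEG, browsers can render it with a plain `<img>` element (no JavaScript decoding required). A minimal React sketch:
```tsx
// Render the live MJPEG stream; <img> handles
// multipart/x-mixed-replace natively in modern browsers.
function CameraStream({ cameraName }: { cameraName: string }) {
  return (
    <img
      src={`http://localhost:8000/cameras/${cameraName}/stream`}
      alt={`Live stream for ${cameraName}`}
      style={{ maxWidth: "100%" }}
    />
  );
}
```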
## 🌐 Web Interface Usage
The included `camera_preview.html` provides a complete web interface:
1. **Camera Grid**: Shows all configured cameras
2. **Stream Controls**: Start/Stop/Refresh buttons for each camera
3. **Live Preview**: Real-time video feed display
4. **Status Information**: System and camera status
5. **Responsive Design**: Works on desktop and mobile
### Features:
- ✅ Real-time camera status
- ✅ One-click stream start/stop
- ✅ Automatic stream refresh
- ✅ System health monitoring
- ✅ Error handling and status messages
## 🔧 Technical Details
### Camera Streamer Configuration
- **Preview FPS**: 10 FPS (configurable)
- **JPEG Quality**: 70% (configurable)
- **Frame Buffer**: 5 frames (prevents memory buildup)
- **Timeout**: 200ms per frame capture
### Memory Management
- Automatic frame buffer cleanup
- Queue-based frame management
- Proper camera resource cleanup on stop
### Thread Safety
- Thread-safe streaming operations
- Independent from recording threads
- Proper synchronization with locks
## 🧪 Testing
### Run the Test Script
```bash
python test_streaming.py
```
This will test:
- ✅ API endpoint functionality
- ✅ Stream start/stop operations
- ✅ Concurrent recording and streaming
- ✅ Error handling
### Manual Testing
1. Start the system: `python main.py`
2. Open `camera_preview.html` in browser
3. Start streaming for a camera
4. Trigger recording via MQTT or manual API
5. Verify both work simultaneously
## 🔄 Concurrent Operations
The system supports these concurrent operations:
| Operation | Recording | Streaming | Notes |
|-----------|-----------|-----------|-------|
| Recording Only | ✅ | ❌ | Normal operation |
| Streaming Only | ❌ | ✅ | Preview without recording |
| Both Concurrent | ✅ | ✅ | **Independent connections** |
### Example: Concurrent Usage
```bash
# Start streaming
curl -X POST http://localhost:8000/cameras/camera1/start-stream
# Start recording (while streaming continues)
curl -X POST http://localhost:8000/cameras/camera1/start-recording \
-H "Content-Type: application/json" \
-d '{"filename": "test_recording.avi"}'
# Both operations run independently!
```
## 🛠️ Configuration
### Stream Settings (in CameraStreamer)
```python
self.preview_fps = 10.0 # Lower FPS for preview
self.preview_quality = 70 # JPEG quality (1-100)
self._frame_queue.maxsize = 5 # Frame buffer size
```
### Camera Settings
The streamer uses the same camera configuration as recording:
- Exposure time from `camera_config.exposure_ms`
- Gain from `camera_config.gain`
- Optimized trigger mode for continuous streaming
## 🚨 Important Notes
### Camera Access Patterns
- **Recording**: Blocks camera during active recording
- **Streaming**: Uses separate connection, doesn't block
- **Health Checks**: Brief, non-blocking camera tests
- **Multiple Streams**: Multiple browsers can view same stream
### Performance Considerations
- Streaming uses additional CPU/memory resources
- Lower preview FPS reduces system load
- JPEG compression reduces bandwidth usage
- Frame queue prevents memory buildup
### Error Handling
- Automatic camera resource cleanup
- Graceful handling of camera disconnections
- Stream auto-restart capabilities
- Detailed error logging
## 🔍 Troubleshooting
### Stream Not Starting
1. Check camera availability: `GET /cameras`
2. Verify camera not in error state
3. Check system logs for camera initialization errors
4. Try camera reconnection: `POST /cameras/{name}/reconnect`
### Poor Stream Quality
1. Adjust `preview_quality` setting (higher = better quality)
2. Increase `preview_fps` for smoother video
3. Check network bandwidth
4. Verify camera exposure/gain settings
### Browser Issues
1. Try different browser (Chrome/Firefox recommended)
2. Check browser console for JavaScript errors
3. Verify CORS settings in API server
4. Clear browser cache and refresh
## 📈 Future Enhancements
Potential improvements for the streaming system:
- 🔄 WebRTC support for lower latency
- 📱 Mobile app integration
- 🎛️ Real-time camera setting adjustments
- 📊 Stream analytics and monitoring
- 🔐 Authentication and access control
- 🌐 Multi-camera synchronized viewing
## 📞 Support
For issues with streaming functionality:
1. Check the system logs: `usda_vision_system.log`
2. Run the test script: `python test_streaming.py`
3. Verify API health: `http://localhost:8000/health`
4. Check camera status: `http://localhost:8000/cameras`
---
**✅ Live streaming is now ready for production use!**

146
api/docs/legacy/01README.md Normal file
View File

@@ -0,0 +1,146 @@
# GigE Camera Image Capture
This project provides simple Python scripts to connect to a GigE camera and capture images using the provided SDK.
## Files Overview
### Demo Files (provided with camera)
- `python demo/mvsdk.py` - Main SDK wrapper library
- `python demo/grab.py` - Basic image capture example
- `python demo/cv_grab.py` - OpenCV-based continuous capture
- `python demo/cv_grab_callback.py` - Callback-based capture
- `python demo/readme.txt` - Original demo documentation
### Custom Scripts
- `camera_capture.py` - Standalone script to capture 10 images with 200ms intervals
- `test.ipynb` - Jupyter notebook with the same functionality
- `images/` - Directory where captured images are saved
## Features
- **Automatic camera detection** - Finds and connects to available GigE cameras
- **Configurable capture** - Currently set to capture 10 images with 200ms intervals
- **Both mono and color support** - Automatically detects camera type
- **Timestamped filenames** - Images saved with date/time stamps
- **Error handling** - Robust error handling for camera operations
- **Cross-platform** - Works on Windows and Linux (with appropriate image flipping)
## Requirements
- Python 3.x
- OpenCV (`cv2`)
- NumPy
- Matplotlib (for Jupyter notebook display)
- GigE camera SDK (MVSDK) - included in `python demo/` directory
## Usage
### Option 1: Standalone Script
Run the standalone Python script:
```bash
python camera_capture.py
```
This will:
1. Initialize the camera SDK
2. Detect available cameras
3. Connect to the first camera found
4. Configure camera settings (manual exposure, continuous mode)
5. Capture 10 images with 200ms intervals
6. Save images to the `images/` directory
7. Clean up and close the camera
### Option 2: Jupyter Notebook
Open and run the `test.ipynb` notebook:
```bash
jupyter notebook test.ipynb
```
The notebook provides the same functionality but with:
- Step-by-step execution
- Detailed explanations
- Visual display of the last captured image
- Better error reporting
## Camera Configuration
The scripts are configured with the following default settings:
- **Trigger Mode**: Continuous capture (mode 0)
- **Exposure**: Manual, 30ms
- **Output Format**:
- Monochrome cameras: MONO8
- Color cameras: BGR8
- **Image Processing**: Automatic ISP processing from RAW to RGB/MONO
## Output
Images are saved in the `images/` directory with the following naming convention:
```
image_XX_YYYYMMDD_HHMMSS_mmm.jpg
```
Where:
- `XX` = Image number (01-10)
- `YYYYMMDD_HHMMSS_mmm` = Timestamp with milliseconds
Example: `image_01_20250722_140530_123.jpg`
## Troubleshooting
### Common Issues
1. **"No camera was found!"**
- Check camera connection (Ethernet cable)
- Verify camera power
- Check network settings (camera and PC should be on same subnet)
- Ensure camera drivers are installed
2. **"CameraInit Failed"**
- Camera might be in use by another application
- Check camera permissions
- Try restarting the camera or PC
3. **"Failed to capture image"**
- Check camera settings
- Verify sufficient lighting
- Check exposure settings
4. **Images appear upside down**
- This is handled automatically on Windows
- Linux users may need to adjust the flip settings
### Network Configuration
For GigE cameras, ensure:
- Camera and PC are on the same network segment
- PC network adapter supports Jumbo frames (recommended)
- Firewall allows camera communication
- Sufficient network bandwidth
## Customization
You can modify the scripts to:
- **Change capture count**: Modify the range in the capture loop
- **Adjust timing**: Change the `time.sleep(0.2)` value
- **Modify exposure**: Change the exposure time parameter
- **Change output format**: Modify file format and quality settings
- **Add image processing**: Insert processing steps before saving
## SDK Reference
The camera SDK (`mvsdk.py`) provides extensive functionality:
- Camera enumeration and initialization
- Image capture and processing
- Parameter configuration (exposure, gain, etc.)
- Trigger modes and timing
- Image format conversion
- Error handling
Refer to the original SDK documentation for advanced features.

View File

@@ -0,0 +1,184 @@
# USDA Vision Camera System - Implementation Summary
## 🎉 Project Completed Successfully!
The USDA Vision Camera System has been fully implemented and tested. All components are working correctly and the system is ready for deployment.
## ✅ What Was Built
### Core Architecture
- **Modular Design**: Clean separation of concerns across multiple modules
- **Multi-threading**: Concurrent MQTT listening, camera monitoring, and recording
- **Event-driven**: Thread-safe communication between components
- **Configuration-driven**: JSON-based configuration system
### Key Components
1. **MQTT Integration** (`usda_vision_system/mqtt/`)
- Listens to two machine topics: `vision/vibratory_conveyor/state` and `vision/blower_separator/state`
- Thread-safe message handling with automatic reconnection
- State normalization (on/off/error)
2. **Camera Management** (`usda_vision_system/camera/`)
- Automatic GigE camera discovery using python demo library
- Periodic status monitoring (every 2 seconds)
- Camera initialization and configuration management
- **Discovered Cameras**:
- Blower-Yield-Cam (192.168.1.165)
- Cracker-Cam (192.168.1.167)
3. **Video Recording** (`usda_vision_system/camera/recorder.py`)
- Automatic recording start/stop based on machine states
- Timestamp-based file naming: `camera1_recording_20250726_143022.avi`
- Configurable FPS, exposure, and gain settings
- Thread-safe recording with proper cleanup
4. **Storage Management** (`usda_vision_system/storage/`)
- Organized file storage under `./storage/camera1/` and `./storage/camera2/`
- File indexing and metadata tracking
- Automatic cleanup of old files
- Storage statistics and integrity checking
5. **REST API Server** (`usda_vision_system/api/`)
- FastAPI server on port 8000
- Real-time WebSocket updates
- Manual recording control endpoints
- System status and monitoring endpoints
6. **Comprehensive Logging** (`usda_vision_system/core/logging_config.py`)
- Colored console output
- Rotating log files
- Component-specific log levels
- Performance monitoring and error tracking
## 🚀 How to Use
### Quick Start
```bash
# Run system tests
python test_system.py
# Start the system
python main.py
# Or use the startup script
./start_system.sh
```
### Configuration
Edit `config.json` to customize:
- MQTT broker settings
- Camera configurations
- Storage paths
- System parameters
### API Access
- System status: `http://localhost:8000/system/status`
- Camera status: `http://localhost:8000/cameras`
- Manual recording: `POST http://localhost:8000/cameras/camera1/start-recording`
- Real-time updates: WebSocket at `ws://localhost:8000/ws`
## 📊 Test Results
All system tests passed successfully:
- ✅ Module imports
- ✅ Configuration loading
- ✅ Camera discovery (found 2 cameras)
- ✅ Storage setup
- ✅ MQTT configuration
- ✅ System initialization
- ✅ API endpoints
## 🔧 System Behavior
### Automatic Recording Flow
1. **Machine turns ON** → MQTT message received → Recording starts automatically
2. **Machine turns OFF** → MQTT message received → Recording stops and saves file
3. **Files saved** with timestamp: `camera1_recording_YYYYMMDD_HHMMSS.avi`
### Manual Control
- Start/stop recording via API calls
- Monitor system status in real-time
- Check camera availability on demand
### Dashboard Integration
The system is designed to integrate with your React + Vite + Tailwind + Supabase dashboard:
- REST API for status queries
- WebSocket for real-time updates
- JSON responses for easy frontend consumption
## 📁 Project Structure
```
usda_vision_system/
├── core/ # Configuration, state management, events, logging
├── mqtt/ # MQTT client and message handlers
├── camera/ # Camera management, monitoring, recording
├── storage/ # File organization and management
├── api/ # FastAPI server and WebSocket support
└── main.py # Application coordinator
Supporting Files:
├── main.py # Entry point script
├── config.json # System configuration
├── test_system.py # Test suite
├── start_system.sh # Startup script
└── README_SYSTEM.md # Comprehensive documentation
```
## 🎯 Key Features Delivered
- ✅ **Dual MQTT topic listening** for two machines
- ✅ **Automatic camera recording** triggered by machine states
- ✅ **GigE camera support** using python demo library
- ✅ **Thread-safe multi-tasking** (MQTT + camera monitoring + recording)
- ✅ **Timestamp-based file naming** in organized directories
- ✅ **2-second camera status monitoring** with on-demand checks
- ✅ **REST API and WebSocket** for dashboard integration
- ✅ **Comprehensive logging** with error tracking
- ✅ **Configuration management** via JSON
- ✅ **Storage management** with cleanup capabilities
- ✅ **Graceful startup/shutdown** with signal handling
## 🔮 Ready for Dashboard Integration
The system provides everything needed for your React dashboard:
```javascript
// Example API usage
const systemStatus = await fetch('http://localhost:8000/system/status');
const cameras = await fetch('http://localhost:8000/cameras');
// WebSocket for real-time updates
const ws = new WebSocket('ws://localhost:8000/ws');
ws.onmessage = (event) => {
const update = JSON.parse(event.data);
// Handle real-time system updates
};
// Manual recording control
await fetch('http://localhost:8000/cameras/camera1/start-recording', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ camera_name: 'camera1' })
});
```
## 🎊 Next Steps
The system is production-ready! You can now:
1. **Deploy** the system on your target hardware
2. **Integrate** with your existing React dashboard
3. **Configure** MQTT topics and camera settings as needed
4. **Monitor** system performance through logs and API endpoints
5. **Extend** functionality as requirements evolve
The modular architecture makes it easy to add new features, cameras, or MQTT topics in the future.
---
**System Status**: ✅ **FULLY OPERATIONAL**
**Test Results**: ✅ **ALL TESTS PASSING**
**Cameras Detected**: ✅ **2 GIGE CAMERAS READY**
**Ready for Production**: ✅ **YES**

View File

@@ -0,0 +1 @@
# USDA-Vision-Cameras

View File

@@ -0,0 +1,249 @@
# USDA Vision Camera System
A comprehensive system for monitoring machines via MQTT and automatically recording video from GigE cameras when machines are active.
## Overview
This system integrates MQTT machine monitoring with automated video recording from GigE cameras. When a machine turns on (detected via MQTT), the system automatically starts recording from the associated camera. When the machine turns off, recording stops and the video is saved with a timestamp.
## Features
- **MQTT Integration**: Listens to multiple machine state topics
- **Automatic Recording**: Starts/stops recording based on machine states
- **GigE Camera Support**: Uses the python demo library (mvsdk) for camera control
- **Multi-threading**: Concurrent MQTT listening, camera monitoring, and recording
- **REST API**: FastAPI server for dashboard integration
- **WebSocket Support**: Real-time status updates
- **Storage Management**: Organized file storage with cleanup capabilities
- **Comprehensive Logging**: Detailed logging with rotation and error tracking
- **Configuration Management**: JSON-based configuration system
## Architecture
```
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
│ MQTT Broker │ │ GigE Camera │ │ Dashboard │
│ │ │ │ │ (React) │
└─────────┬───────┘ └─────────┬───────┘ └─────────┬───────┘
│ │ │
│ Machine States │ Video Streams │ API Calls
│ │ │
┌─────────▼──────────────────────▼──────────────────────▼───────┐
│ USDA Vision Camera System │
├───────────────────────────────────────────────────────────────┤
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
│ │ MQTT Client │ │ Camera │ │ API Server │ │
│ │ │ │ Manager │ │ │ │
│ └─────────────┘ └─────────────┘ └─────────────┘ │
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
│ │ State │ │ Storage │ │ Event │ │
│ │ Manager │ │ Manager │ │ System │ │
│ └─────────────┘ └─────────────┘ └─────────────┘ │
└───────────────────────────────────────────────────────────────┘
```
## Installation
1. **Prerequisites**:
- Python 3.11+
- GigE cameras with python demo library
- MQTT broker (e.g., Mosquitto)
- uv package manager (recommended)
2. **Install Dependencies**:
```bash
uv sync
```
3. **Setup Storage Directory**:
```bash
sudo mkdir -p /storage
sudo chown $USER:$USER /storage
```
## Configuration
Edit `config.json` to configure your system:
```json
{
"mqtt": {
"broker_host": "192.168.1.110",
"broker_port": 1883,
"topics": {
"vibratory_conveyor": "vision/vibratory_conveyor/state",
"blower_separator": "vision/blower_separator/state"
}
},
"cameras": [
{
"name": "camera1",
"machine_topic": "vibratory_conveyor",
"storage_path": "/storage/camera1",
"exposure_ms": 1.0,
"gain": 3.5,
"target_fps": 3.0,
"enabled": true
}
]
}
```
## Usage
### Basic Usage
1. **Start the System**:
```bash
python main.py
```
2. **With Custom Config**:
```bash
python main.py --config my_config.json
```
3. **Debug Mode**:
```bash
python main.py --log-level DEBUG
```
### API Endpoints
The system provides a REST API on port 8000:
- `GET /system/status` - Overall system status
- `GET /cameras` - All camera statuses
- `GET /machines` - All machine states
- `POST /cameras/{name}/start-recording` - Manual recording start
- `POST /cameras/{name}/stop-recording` - Manual recording stop
- `GET /storage/stats` - Storage statistics
- `WebSocket /ws` - Real-time updates
### Dashboard Integration
The system is designed to integrate with your existing React + Vite + Tailwind + Supabase dashboard:
1. **API Integration**: Use the REST endpoints to display system status
2. **WebSocket**: Connect to `/ws` for real-time updates
3. **Supabase Storage**: Store recording metadata and system logs
## File Organization
```
/storage/
├── camera1/
│ ├── camera1_recording_20250726_143022.avi
│ └── camera1_recording_20250726_143155.avi
├── camera2/
│ ├── camera2_recording_20250726_143025.avi
│ └── camera2_recording_20250726_143158.avi
└── file_index.json
```
## Monitoring and Logging
### Log Files
- `usda_vision_system.log` - Main system log (rotated)
- Console output with colored formatting
- Component-specific log levels
### Performance Monitoring
The system includes built-in performance monitoring:
- Startup times
- Recording session metrics
- MQTT message processing rates
- Camera status check intervals
### Error Tracking
Comprehensive error tracking with:
- Error counts per component
- Detailed error context
- Automatic recovery attempts
## Troubleshooting
### Common Issues
1. **Camera Not Found**:
- Check camera connections
- Verify python demo library installation
- Run camera discovery: Check logs for enumeration results
2. **MQTT Connection Failed**:
- Verify broker IP and port
- Check network connectivity
- Verify credentials if authentication is enabled
3. **Recording Fails**:
- Check storage permissions
- Verify available disk space
- Check camera initialization logs
4. **API Server Won't Start**:
- Check if port 8000 is available
- Verify FastAPI dependencies
- Check firewall settings
### Debug Commands
```bash
# Check system status
curl http://localhost:8000/system/status
# Check camera status
curl http://localhost:8000/cameras
# Manual recording start
curl -X POST http://localhost:8000/cameras/camera1/start-recording \
-H "Content-Type: application/json" \
-d '{"camera_name": "camera1"}'
```
## Development
### Project Structure
```
usda_vision_system/
├── core/ # Core functionality
├── mqtt/ # MQTT client and handlers
├── camera/ # Camera management and recording
├── storage/ # File management
├── api/ # FastAPI server
└── main.py # Application coordinator
```
### Adding New Features
1. **New Camera Type**: Extend `camera/recorder.py`
2. **New MQTT Topics**: Update `config.json` and `mqtt/handlers.py`
3. **New API Endpoints**: Add to `api/server.py`
4. **New Events**: Define in `core/events.py`
### Testing
```bash
# Run basic system test
python -c "from usda_vision_system import USDAVisionSystem; s = USDAVisionSystem(); print('OK')"
# Test MQTT connection
python -c "from usda_vision_system.mqtt.client import MQTTClient; # ... test code"
# Test camera discovery
python -c "import sys; sys.path.append('python demo'); import mvsdk; print(len(mvsdk.CameraEnumerateDevice()))"
```
## License
This project is developed for USDA research purposes.
## Support
For issues and questions:
1. Check the logs in `usda_vision_system.log`
2. Review the troubleshooting section
3. Check API status at `http://localhost:8000/health`

View File

@@ -0,0 +1,190 @@
# Time Synchronization Setup - Atlanta, Georgia
## ✅ Time Synchronization Complete!
The USDA Vision Camera System has been configured for proper time synchronization with Atlanta, Georgia (Eastern Time Zone).
## 🕐 What Was Implemented
### System-Level Time Configuration
- **Timezone**: Set to `America/New_York` (Eastern Time)
- **Current Status**: Eastern Daylight Time (EDT, UTC-4)
- **NTP Sync**: Configured with multiple reliable time servers
- **Hardware Clock**: Synchronized with system time
### Application-Level Timezone Support
- **Timezone-Aware Timestamps**: All recordings use Atlanta time
- **Automatic DST Handling**: Switches between EST/EDT automatically
- **Time Sync Monitoring**: Built-in time synchronization checking
- **Consistent Formatting**: Standardized timestamp formats throughout
## 🔧 Key Features
### 1. Automatic Time Synchronization
```bash
# NTP servers configured:
- time.nist.gov (NIST atomic clock)
- pool.ntp.org (NTP pool)
- time.google.com (Google time)
- time.cloudflare.com (Cloudflare time)
```
### 2. Timezone-Aware Recording Filenames
```
Example: camera1_recording_20250725_213241.avi
Format: {camera}_{type}_{YYYYMMDD_HHMMSS}.avi
Time: Atlanta local time (EDT/EST)
```
### 3. Time Verification Tools
- **Startup Check**: Automatic time sync verification on system start
- **Manual Check**: `python check_time.py` for on-demand verification
- **API Integration**: Time sync status available via REST API
### 4. Comprehensive Logging
```
=== TIME SYNCHRONIZATION STATUS ===
System time: 2025-07-25 21:32:41 EDT
Timezone: EDT (-0400)
Daylight Saving: Yes
Sync status: synchronized
Time difference: 0.10 seconds
=====================================
```
## 🚀 Usage
### Automatic Operation
The system automatically:
- Uses Atlanta time for all timestamps
- Handles daylight saving time transitions
- Monitors time synchronization status
- Logs time-related events
### Manual Verification
```bash
# Check time synchronization
python check_time.py
# Test timezone functions
python test_timezone.py
# View system time status
timedatectl status
```
### API Endpoints
```bash
# System status includes time info
curl http://localhost:8000/system/status
# Example response includes:
{
"system_started": true,
"uptime_seconds": 3600,
"timestamp": "2025-07-25T21:32:41-04:00"
}
```
## 📊 Current Status
### Time Synchronization
- ✅ **System Timezone**: America/New_York (EDT)
- ✅ **NTP Sync**: Active and synchronized
- ✅ **Time Accuracy**: Within 0.1 seconds of atomic time
- ✅ **DST Support**: Automatic EST/EDT switching
### Application Integration
- ✅ **Recording Timestamps**: Atlanta time zone
- ✅ **Log Timestamps**: Timezone-aware logging
- ✅ **API Responses**: ISO format with timezone
- ✅ **File Naming**: Consistent Atlanta time format
### Monitoring
- ✅ **Startup Verification**: Time sync checked on boot
- ✅ **Continuous Monitoring**: Built-in sync status tracking
- ✅ **Error Detection**: Alerts for time drift issues
- ✅ **Manual Tools**: On-demand verification scripts
## 🔍 Technical Details
### Timezone Configuration
```json
{
"system": {
"timezone": "America/New_York"
}
}
```
### Time Sources
1. **Primary**: NIST atomic clock (time.nist.gov)
2. **Secondary**: NTP pool servers (pool.ntp.org)
3. **Backup**: Google/Cloudflare time servers
4. **Fallback**: Local system clock
### File Naming Convention
```
Pattern: {camera_name}_recording_{YYYYMMDD_HHMMSS}.avi
Example: camera1_recording_20250725_213241.avi
Timezone: Always Atlanta local time (EST/EDT)
```
## 🎯 Benefits
### For Operations
- **Consistent Timestamps**: All recordings use Atlanta time
- **Easy Correlation**: Timestamps match local business hours
- **Automatic DST**: No manual timezone adjustments needed
- **Reliable Sync**: Multiple time sources ensure accuracy
### For Analysis
- **Local Time Context**: Recordings timestamped in business timezone
- **Accurate Sequencing**: Precise timing for event correlation
- **Standard Format**: Consistent naming across all recordings
- **Audit Trail**: Complete time synchronization logging
### For Integration
- **Dashboard Ready**: Timezone-aware API responses
- **Database Compatible**: ISO format timestamps with timezone
- **Log Analysis**: Structured time information in logs
- **Monitoring**: Built-in time sync health checks
## 🔧 Maintenance
### Regular Checks
The system automatically:
- Verifies time sync on startup
- Logs time synchronization status
- Monitors for time drift
- Alerts on sync failures
### Manual Maintenance
```bash
# Force time sync
sudo systemctl restart systemd-timesyncd
# Check NTP status
timedatectl show-timesync --all
# Verify timezone
timedatectl status
```
## 📈 Next Steps
The time synchronization is now fully operational. The system will:
1. **Automatically maintain** accurate Atlanta time
2. **Generate timestamped recordings** with local time
3. **Monitor sync status** and alert on issues
4. **Provide timezone-aware** API responses for dashboard integration
All recording files will now have accurate Atlanta timestamps, making it easy to correlate with local business operations and machine schedules.
---
**Time Sync Status**: ✅ **SYNCHRONIZED**
**Timezone**: ✅ **America/New_York (EDT)**
**Accuracy**: ✅ **±0.1 seconds**
**Ready for Production**: ✅ **YES**

View File

@@ -0,0 +1,191 @@
# Camera Video Recorder
A Python script for recording videos from GigE cameras using the provided SDK with custom exposure and gain settings.
## Features
- **List all available cameras** - Automatically detects and displays all connected cameras
- **Custom camera settings** - Set exposure time to 1ms and gain to 3.5x (or custom values)
- **Video recording** - Record videos in AVI format with timestamp filenames
- **Live preview** - Test camera functionality with live preview mode
- **Interactive menu** - User-friendly menu system for all operations
- **Automatic cleanup** - Proper resource management and cleanup
## Requirements
- Python 3.x
- OpenCV (`cv2`)
- NumPy
- Camera SDK (mvsdk) - included in `python demo` directory
- GigE camera connected to the system
## Installation
1. Ensure your GigE camera is connected and properly configured
2. Make sure the `python demo` directory with `mvsdk.py` is present
3. Install required Python packages:
```bash
pip install opencv-python numpy
```
## Usage
### Basic Usage
Run the script:
```bash
python camera_video_recorder.py
```
The script will:
1. Display a welcome message and feature overview
2. List all available cameras
3. Let you select a camera (if multiple are available)
4. Allow you to set custom exposure and gain values
5. Present an interactive menu with options
### Menu Options
1. **Start Recording** - Begin video recording with timestamp filename
2. **List Camera Info** - Display detailed camera information
3. **Test Camera (Live Preview)** - View live camera feed without recording
4. **Exit** - Clean up and exit the program
### Default Settings
- **Exposure Time**: 1.0ms (1000 microseconds)
- **Gain**: 3.5x
- **Video Format**: AVI with XVID codec
- **Frame Rate**: 30 FPS
- **Output Directory**: `videos/` (created automatically)
### Recording Controls
- **Start Recording**: Select option 1 from the menu
- **Stop Recording**: Press 'q' in the preview window
- **Video Files**: Saved as `videos/camera_recording_YYYYMMDD_HHMMSS.avi`
## File Structure
```
camera_video_recorder.py # Main script
python demo/
mvsdk.py # Camera SDK wrapper
(other demo files)
videos/ # Output directory (created automatically)
camera_recording_*.avi # Recorded video files
```
## Script Features
### CameraVideoRecorder Class
- `list_cameras()` - Enumerate and display available cameras
- `initialize_camera()` - Set up camera with custom exposure and gain
- `start_recording()` - Initialize video writer and begin recording
- `stop_recording()` - Stop recording and save video file
- `record_loop()` - Main recording loop with live preview
- `cleanup()` - Proper resource cleanup
### Key Functions
- **Camera Detection**: Automatically finds all connected GigE cameras
- **Settings Validation**: Checks and clamps exposure/gain values to camera limits
- **Frame Processing**: Handles both monochrome and color cameras
- **Windows Compatibility**: Handles frame flipping for Windows systems
- **Error Handling**: Comprehensive error handling and user feedback
## Example Output
```
Camera Video Recorder
====================
This script allows you to:
- List all available cameras
- Record videos with custom exposure (1ms) and gain (3.5x) settings
- Save videos with timestamps
- Stop recording anytime with 'q' key
Found 1 camera(s):
0: GigE Camera Model (GigE) - SN: 12345678
Using camera: GigE Camera Model
Camera Settings:
Enter exposure time in ms (default 1.0): 1.0
Enter gain value (default 3.5): 3.5
Initializing camera with:
- Exposure: 1.0ms
- Gain: 3.5x
Camera type: Color
Set exposure time: 1000.0μs
Set analog gain: 3.50x (range: 1.00 - 16.00)
Camera started successfully
==================================================
Camera Video Recorder Menu
==================================================
1. Start Recording
2. List Camera Info
3. Test Camera (Live Preview)
4. Exit
Select option (1-4): 1
Started recording to: videos/camera_recording_20241223_143022.avi
Frame size: (1920, 1080), FPS: 30.0
Press 'q' to stop recording...
Recording... Press 'q' in the preview window to stop
Recording stopped!
Saved: videos/camera_recording_20241223_143022.avi
Frames recorded: 450
Duration: 15.2 seconds
Average FPS: 29.6
```
## Troubleshooting
### Common Issues
1. **"No cameras found!"**
- Check camera connection
- Verify camera power
- Ensure network configuration for GigE cameras
2. **"SDK initialization failed"**
- Verify `python demo/mvsdk.py` exists
- Check camera drivers are installed
3. **"Camera initialization failed"**
- Camera may be in use by another application
- Try disconnecting and reconnecting the camera
4. **Recording issues**
- Ensure sufficient disk space
- Check write permissions in the output directory
### Performance Tips
- Close other applications using the camera
- Ensure adequate system resources (CPU, RAM)
- Use SSD storage for better write performance
- Adjust frame rate if experiencing dropped frames
## Customization
You can modify the script to:
- Change video codec (currently XVID)
- Adjust target frame rate
- Modify output filename format
- Add additional camera settings
- Change preview window size
## Notes
- Videos are saved in the `videos/` directory with timestamp filenames
- The script handles both monochrome and color cameras automatically
- Frame flipping is handled automatically for Windows systems
- All resources are properly cleaned up on exit

18
api/main.py Normal file
View File

@@ -0,0 +1,18 @@
#!/usr/bin/env python3
"""
Main entry point for the USDA Vision Camera System.
This script starts the complete system including MQTT monitoring, camera management,
and video recording based on machine state changes.
"""
import sys
import os
# Add the current directory to Python path to import our modules
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))
from usda_vision_system.main import main
if __name__ == "__main__":
main()

23
api/pyproject.toml Normal file
View File

@@ -0,0 +1,23 @@
[project]
name = "usda-vision-cameras"
version = "0.1.0"
description = "Add your description here"
readme = "README.md"
requires-python = ">=3.11"
dependencies = [
"imageio>=2.37.0",
"matplotlib>=3.10.3",
"numpy>=2.3.2",
"opencv-python>=4.11.0.86",
"paho-mqtt>=2.1.0",
"pillow>=11.3.0",
"tqdm>=4.67.1",
"fastapi>=0.104.0",
"uvicorn>=0.24.0",
"websockets>=12.0",
"requests>=2.31.0",
"pytz>=2023.3",
"ipykernel>=6.30.0",
"httpx>=0.28.1",
"aiofiles>=24.1.0",
]

165
api/reindex_videos.py Normal file
View File

@@ -0,0 +1,165 @@
#!/usr/bin/env python3
"""
Video Reindexing Script for USDA Vision Camera System
This script reindexes existing video files that have "unknown" status,
updating them to "completed" status so they can be streamed.
Usage:
python reindex_videos.py [--dry-run] [--camera CAMERA_NAME]
"""
import os
import sys
import argparse
import logging
from pathlib import Path
from datetime import datetime
# Add the project root to Python path
project_root = Path(__file__).parent
sys.path.insert(0, str(project_root))
from usda_vision_system.core.config import Config
from usda_vision_system.core.state_manager import StateManager
from usda_vision_system.storage.manager import StorageManager
def setup_logging():
"""Setup logging configuration"""
logging.basicConfig(
level=logging.INFO,
format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)
return logging.getLogger(__name__)
def reindex_videos(storage_manager: StorageManager, camera_name: str = None, dry_run: bool = False):
"""
Reindex video files with unknown status
Args:
storage_manager: StorageManager instance
camera_name: Optional camera name to filter by
dry_run: If True, only show what would be done without making changes
"""
logger = logging.getLogger(__name__)
logger.info(f"Starting video reindexing (dry_run={dry_run})")
if camera_name:
logger.info(f"Filtering by camera: {camera_name}")
# Get all video files
files = storage_manager.get_recording_files(camera_name=camera_name)
unknown_files = [f for f in files if f.get("status") == "unknown"]
if not unknown_files:
logger.info("No files with 'unknown' status found")
return
logger.info(f"Found {len(unknown_files)} files with 'unknown' status")
updated_count = 0
for file_info in unknown_files:
file_id = file_info["file_id"]
filename = file_info["filename"]
logger.info(f"Processing: {file_id}")
logger.info(f" File: {filename}")
logger.info(f" Current status: {file_info['status']}")
if not dry_run:
# Update the file index directly
if file_id not in storage_manager.file_index["files"]:
# File is not in index, add it
file_path = Path(filename)
if file_path.exists():
stat = file_path.stat()
file_mtime = datetime.fromtimestamp(stat.st_mtime)
new_file_info = {
"camera_name": file_info["camera_name"],
"filename": filename,
"file_id": file_id,
"start_time": file_mtime.isoformat(),
"end_time": file_mtime.isoformat(), # Use file mtime as end time
"file_size_bytes": stat.st_size,
"duration_seconds": None, # Will be extracted later if needed
"machine_trigger": None,
"status": "completed", # Set to completed
"created_at": file_mtime.isoformat()
}
storage_manager.file_index["files"][file_id] = new_file_info
logger.info(f" Added to index with status: completed")
updated_count += 1
else:
logger.warning(f" File does not exist: {filename}")
else:
# File is in index but has unknown status, update it
storage_manager.file_index["files"][file_id]["status"] = "completed"
logger.info(f" Updated status to: completed")
updated_count += 1
else:
logger.info(f" Would update status to: completed")
updated_count += 1
if not dry_run and updated_count > 0:
# Save the updated index
storage_manager._save_file_index()
logger.info(f"Saved updated file index")
logger.info(f"Reindexing complete: {updated_count} files {'would be ' if dry_run else ''}updated")
def main():
"""Main function"""
parser = argparse.ArgumentParser(description="Reindex video files with unknown status")
parser.add_argument("--dry-run", action="store_true",
help="Show what would be done without making changes")
parser.add_argument("--camera", type=str,
help="Only process files for specific camera")
parser.add_argument("--log-level", choices=["DEBUG", "INFO", "WARNING", "ERROR"],
default="INFO", help="Set logging level")
args = parser.parse_args()
# Setup logging
logging.basicConfig(
level=getattr(logging, args.log_level),
format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)
logger = logging.getLogger(__name__)
try:
# Initialize system components
logger.info("Initializing USDA Vision Camera System components...")
config = Config()
state_manager = StateManager()
storage_manager = StorageManager(config, state_manager)
logger.info("Components initialized successfully")
# Run reindexing
reindex_videos(
storage_manager=storage_manager,
camera_name=args.camera,
dry_run=args.dry_run
)
if args.dry_run:
logger.info("Dry run completed. Use --no-dry-run to apply changes.")
else:
logger.info("Reindexing completed successfully!")
logger.info("Videos should now be streamable through the API.")
except Exception as e:
logger.error(f"Error during reindexing: {e}")
sys.exit(1)
if __name__ == "__main__":
main()

583
api/requirements.txt Normal file
View File

@@ -0,0 +1,583 @@
# This file was autogenerated by uv via the following command:
# uv export --format requirements-txt
annotated-types==0.7.0 \
--hash=sha256:1f02e8b43a8fbbc3f3e0d4f0f4bfc8131bcb4eebe8849b8e5c773f3a1c582a53 \
--hash=sha256:aff07c09a53a08bc8cfccb9c85b05f1aa9a2a6f23728d790723543408344ce89
# via pydantic
anyio==4.9.0 \
--hash=sha256:673c0c244e15788651a4ff38710fea9675823028a6f08a5eda409e0c9840a028 \
--hash=sha256:9f76d541cad6e36af7beb62e978876f3b41e3e04f2c1fbf0884604c0a9c4d93c
# via starlette
certifi==2025.7.14 \
--hash=sha256:6b31f564a415d79ee77df69d757bb49a5bb53bd9f756cbbe24394ffd6fc1f4b2 \
--hash=sha256:8ea99dbdfaaf2ba2f9bac77b9249ef62ec5218e7c2b2e903378ed5fccf765995
# via requests
charset-normalizer==3.4.2 \
--hash=sha256:0c29de6a1a95f24b9a1aa7aefd27d2487263f00dfd55a77719b530788f75cff7 \
--hash=sha256:0c8c57f84ccfc871a48a47321cfa49ae1df56cd1d965a09abe84066f6853b9c0 \
--hash=sha256:0f5d9ed7f254402c9e7d35d2f5972c9bbea9040e99cd2861bd77dc68263277c7 \
--hash=sha256:1c95a1e2902a8b722868587c0e1184ad5c55631de5afc0eb96bc4b0d738092c0 \
--hash=sha256:289200a18fa698949d2b39c671c2cc7a24d44096784e76614899a7ccf2574b7b \
--hash=sha256:28a1005facc94196e1fb3e82a3d442a9d9110b8434fc1ded7a24a2983c9888d8 \
--hash=sha256:32fc0341d72e0f73f80acb0a2c94216bd704f4f0bce10aedea38f30502b271ff \
--hash=sha256:3fddb7e2c84ac87ac3a947cb4e66d143ca5863ef48e4a5ecb83bd48619e4634e \
--hash=sha256:4a476b06fbcf359ad25d34a057b7219281286ae2477cc5ff5e3f70a246971148 \
--hash=sha256:4e594135de17ab3866138f496755f302b72157d115086d100c3f19370839dd3a \
--hash=sha256:5a9979887252a82fefd3d3ed2a8e3b937a7a809f65dcb1e068b090e165bbe99e \
--hash=sha256:5baececa9ecba31eff645232d59845c07aa030f0c81ee70184a90d35099a0e63 \
--hash=sha256:6b66f92b17849b85cad91259efc341dce9c1af48e2173bf38a85c6329f1033e5 \
--hash=sha256:6c9379d65defcab82d07b2a9dfbfc2e95bc8fe0ebb1b176a3190230a3ef0e07c \
--hash=sha256:7222ffd5e4de8e57e03ce2cef95a4c43c98fcb72ad86909abdfc2c17d227fc1b \
--hash=sha256:7f56930ab0abd1c45cd15be65cc741c28b1c9a34876ce8c17a2fa107810c0af0 \
--hash=sha256:926ca93accd5d36ccdabd803392ddc3e03e6d4cd1cf17deff3b989ab8e9dbcf0 \
--hash=sha256:98f862da73774290f251b9df8d11161b6cf25b599a66baf087c1ffe340e9bfd1 \
--hash=sha256:a370b3e078e418187da8c3674eddb9d983ec09445c99a3a263c2011993522981 \
--hash=sha256:a955b438e62efdf7e0b7b52a64dc5c3396e2634baa62471768a64bc2adb73d5c \
--hash=sha256:aa6af9e7d59f9c12b33ae4e9450619cf2488e2bbe9b44030905877f0b2324980 \
--hash=sha256:aa88ca0b1932e93f2d961bf3addbb2db902198dca337d88c89e1559e066e7645 \
--hash=sha256:aaeeb6a479c7667fbe1099af9617c83aaca22182d6cf8c53966491a0f1b7ffb7 \
--hash=sha256:be1e352acbe3c78727a16a455126d9ff83ea2dfdcbc83148d2982305a04714c2 \
--hash=sha256:bee093bf902e1d8fc0ac143c88902c3dfc8941f7ea1d6a8dd2bcb786d33db03d \
--hash=sha256:cddf7bd982eaa998934a91f69d182aec997c6c468898efe6679af88283b498d3 \
--hash=sha256:cf713fe9a71ef6fd5adf7a79670135081cd4431c2943864757f0fa3a65b1fafd \
--hash=sha256:d41c4d287cfc69060fa91cae9683eacffad989f1a10811995fa309df656ec214 \
--hash=sha256:d524ba3f1581b35c03cb42beebab4a13e6cdad7b36246bd22541fa585a56cccd \
--hash=sha256:daac4765328a919a805fa5e2720f3e94767abd632ae410a9062dff5412bae65a \
--hash=sha256:db4c7bf0e07fc3b7d89ac2a5880a6a8062056801b83ff56d8464b70f65482b6c \
--hash=sha256:dedb8adb91d11846ee08bec4c8236c8549ac721c245678282dcb06b221aab59f \
--hash=sha256:e53efc7c7cee4c1e70661e2e112ca46a575f90ed9ae3fef200f2a25e954f4b28 \
--hash=sha256:e635b87f01ebc977342e2697d05b56632f5f879a4f15955dfe8cef2448b51691 \
--hash=sha256:e70e990b2137b29dc5564715de1e12701815dacc1d056308e2b17e9095372a82 \
--hash=sha256:eba9904b0f38a143592d9fc0e19e2df0fa2e41c3c3745554761c5f6447eedabf \
--hash=sha256:ef8de666d6179b009dce7bcb2ad4c4a779f113f12caf8dc77f0162c29d20490b \
--hash=sha256:efd387a49825780ff861998cd959767800d54f8308936b21025326de4b5a42b9 \
--hash=sha256:f0aa37f3c979cf2546b73e8222bbfa3dc07a641585340179d768068e3455e544 \
--hash=sha256:fcbe676a55d7445b22c10967bceaaf0ee69407fbe0ece4d032b6eb8d4565982a \
--hash=sha256:fdb20a30fe1175ecabed17cbf7812f7b804b8a315a25f24678bcdf120a90077f
# via requests
click==8.2.1 \
--hash=sha256:27c491cc05d968d271d5a1db13e3b5a184636d9d930f148c50b038f0d0646202 \
--hash=sha256:61a3265b914e850b85317d0b3109c7f8cd35a670f963866005d6ef1d5175a12b
# via uvicorn
colorama==0.4.6 ; sys_platform == 'win32' \
--hash=sha256:08695f5cb7ed6e0531a20572697297273c47b8cae5a63ffc6d6ed5c201be6e44 \
--hash=sha256:4f1d9991f5acc0ca119f9d443620b77f9d6b33703e51011c16baf57afb285fc6
# via
# click
# tqdm
contourpy==1.3.2 \
--hash=sha256:0475b1f6604896bc7c53bb070e355e9321e1bc0d381735421a2d2068ec56531f \
--hash=sha256:106fab697af11456fcba3e352ad50effe493a90f893fca6c2ca5c033820cea92 \
--hash=sha256:15ce6ab60957ca74cff444fe66d9045c1fd3e92c8936894ebd1f3eef2fff075f \
--hash=sha256:1c48188778d4d2f3d48e4643fb15d8608b1d01e4b4d6b0548d9b336c28fc9b6f \
--hash=sha256:3859783aefa2b8355697f16642695a5b9792e7a46ab86da1118a4a23a51a33d7 \
--hash=sha256:3d80b2c0300583228ac98d0a927a1ba6a2ba6b8a742463c564f1d419ee5b211e \
--hash=sha256:3f9e896f447c5c8618f1edb2bafa9a4030f22a575ec418ad70611450720b5b08 \
--hash=sha256:434f0adf84911c924519d2b08fc10491dd282b20bdd3fa8f60fd816ea0b48841 \
--hash=sha256:49b65a95d642d4efa8f64ba12558fcb83407e58a2dfba9d796d77b63ccfcaff5 \
--hash=sha256:4caf2bcd2969402bf77edc4cb6034c7dd7c0803213b3523f111eb7460a51b8d2 \
--hash=sha256:532fd26e715560721bb0d5fc7610fce279b3699b018600ab999d1be895b09415 \
--hash=sha256:5ebac872ba09cb8f2131c46b8739a7ff71de28a24c869bcad554477eb089a878 \
--hash=sha256:5f5964cdad279256c084b69c3f412b7801e15356b16efa9d78aa974041903da0 \
--hash=sha256:65a887a6e8c4cd0897507d814b14c54a8c2e2aa4ac9f7686292f9769fcf9a6ab \
--hash=sha256:6a37a2fb93d4df3fc4c0e363ea4d16f83195fc09c891bc8ce072b9d084853445 \
--hash=sha256:70771a461aaeb335df14deb6c97439973d253ae70660ca085eec25241137ef43 \
--hash=sha256:71e2bd4a1c4188f5c2b8d274da78faab884b59df20df63c34f74aa1813c4427c \
--hash=sha256:745b57db7758f3ffc05a10254edd3182a2a83402a89c00957a8e8a22f5582823 \
--hash=sha256:78e9253c3de756b3f6a5174d024c4835acd59eb3f8e2ca13e775dbffe1558f69 \
--hash=sha256:82199cb78276249796419fe36b7386bd8d2cc3f28b3bc19fe2454fe2e26c4c15 \
--hash=sha256:8b7fc0cd78ba2f4695fd0a6ad81a19e7e3ab825c31b577f384aa9d7817dc3bef \
--hash=sha256:8c5acb8dddb0752bf252e01a3035b21443158910ac16a3b0d20e7fed7d534ce5 \
--hash=sha256:8c942a01d9163e2e5cfb05cb66110121b8d07ad438a17f9e766317bcb62abf73 \
--hash=sha256:90df94c89a91b7362e1142cbee7568f86514412ab8a2c0d0fca72d7e91b62912 \
--hash=sha256:970e9173dbd7eba9b4e01aab19215a48ee5dd3f43cef736eebde064a171f89a5 \
--hash=sha256:977e98a0e0480d3fe292246417239d2d45435904afd6d7332d8455981c408b85 \
--hash=sha256:b6945942715a034c671b7fc54f9588126b0b8bf23db2696e3ca8328f3ff0ab54 \
--hash=sha256:b7cd50c38f500bbcc9b6a46643a40e0913673f869315d8e70de0438817cb7773 \
--hash=sha256:c49f73e61f1f774650a55d221803b101d966ca0c5a2d6d5e4320ec3997489441 \
--hash=sha256:c66c4906cdbc50e9cba65978823e6e00b45682eb09adbb78c9775b74eb222422 \
--hash=sha256:c6c4639a9c22230276b7bffb6a850dfc8258a2521305e1faefe804d006b2e532 \
--hash=sha256:c85bb486e9be652314bb5b9e2e3b0d1b2e643d5eec4992c0fbe8ac71775da739 \
--hash=sha256:cc829960f34ba36aad4302e78eabf3ef16a3a100863f0d4eeddf30e8a485a03b \
--hash=sha256:d0e589ae0d55204991450bb5c23f571c64fe43adaa53f93fc902a84c96f52fe1 \
--hash=sha256:d14f12932a8d620e307f715857107b1d1845cc44fdb5da2bc8e850f5ceba9f87 \
--hash=sha256:d32530b534e986374fc19eaa77fcb87e8a99e5431499949b828312bdcd20ac52 \
--hash=sha256:d6658ccc7251a4433eebd89ed2672c2ed96fba367fd25ca9512aa92a4b46c4f1 \
--hash=sha256:d91a3ccc7fea94ca0acab82ceb77f396d50a1f67412efe4c526f5d20264e6ecd \
--hash=sha256:de39db2604ae755316cb5967728f4bea92685884b1e767b7c24e983ef5f771cb \
--hash=sha256:de425af81b6cea33101ae95ece1f696af39446db9682a0b56daaa48cfc29f38f \
--hash=sha256:e1578f7eafce927b168752ed7e22646dad6cd9bca673c60bff55889fa236ebf9 \
--hash=sha256:e298e7e70cf4eb179cc1077be1c725b5fd131ebc81181bf0c03525c8abc297fd \
--hash=sha256:eab0f6db315fa4d70f1d8ab514e527f0366ec021ff853d7ed6a2d33605cf4b83 \
--hash=sha256:f26b383144cf2d2c29f01a1e8170f50dacf0eac02d64139dcd709a8ac4eb3cfe
# via matplotlib
cycler==0.12.1 \
--hash=sha256:85cef7cff222d8644161529808465972e51340599459b8ac3ccbac5a854e0d30 \
--hash=sha256:88bb128f02ba341da8ef447245a9e138fae777f6a23943da4540077d3601eb1c
# via matplotlib
fastapi==0.116.1 \
--hash=sha256:c46ac7c312df840f0c9e220f7964bada936781bc4e2e6eb71f1c4d7553786565 \
--hash=sha256:ed52cbf946abfd70c5a0dccb24673f0670deeb517a88b3544d03c2a6bf283143
# via usda-vision-cameras
fonttools==4.59.0 \
--hash=sha256:241313683afd3baacb32a6bd124d0bce7404bc5280e12e291bae1b9bba28711d \
--hash=sha256:26731739daa23b872643f0e4072d5939960237d540c35c14e6a06d47d71ca8fe \
--hash=sha256:31003b6a10f70742a63126b80863ab48175fb8272a18ca0846c0482968f0588e \
--hash=sha256:332bfe685d1ac58ca8d62b8d6c71c2e52a6c64bc218dc8f7825c9ea51385aa01 \
--hash=sha256:37c377f7cb2ab2eca8a0b319c68146d34a339792f9420fca6cd49cf28d370705 \
--hash=sha256:37e01c6ec0c98599778c2e688350d624fa4770fbd6144551bd5e032f1199171c \
--hash=sha256:401b1941ce37e78b8fd119b419b617277c65ae9417742a63282257434fd68ea2 \
--hash=sha256:4536f2695fe5c1ffb528d84a35a7d3967e5558d2af58b4775e7ab1449d65767b \
--hash=sha256:51ab1ff33c19e336c02dee1e9fd1abd974a4ca3d8f7eef2a104d0816a241ce97 \
--hash=sha256:57bb7e26928573ee7c6504f54c05860d867fd35e675769f3ce01b52af38d48e2 \
--hash=sha256:6770d7da00f358183d8fd5c4615436189e4f683bdb6affb02cad3d221d7bb757 \
--hash=sha256:6801aeddb6acb2c42eafa45bc1cb98ba236871ae6f33f31e984670b749a8e58e \
--hash=sha256:70d6b3ceaa9cc5a6ac52884f3b3d9544e8e231e95b23f138bdb78e6d4dc0eae3 \
--hash=sha256:78813b49d749e1bb4db1c57f2d4d7e6db22c253cb0a86ad819f5dc197710d4b2 \
--hash=sha256:841b2186adce48903c0fef235421ae21549020eca942c1da773ac380b056ab3c \
--hash=sha256:84fc186980231a287b28560d3123bd255d3c6b6659828c642b4cf961e2b923d0 \
--hash=sha256:885bde7d26e5b40e15c47bd5def48b38cbd50830a65f98122a8fb90962af7cd1 \
--hash=sha256:9bcc1e77fbd1609198966ded6b2a9897bd6c6bcbd2287a2fc7d75f1a254179c5 \
--hash=sha256:a408c3c51358c89b29cfa5317cf11518b7ce5de1717abb55c5ae2d2921027de6 \
--hash=sha256:a9bf8adc9e1f3012edc8f09b08336272aec0c55bc677422273e21280db748f7c \
--hash=sha256:be392ec3529e2f57faa28709d60723a763904f71a2b63aabe14fee6648fe3b14 \
--hash=sha256:d3972b13148c1d1fbc092b27678a33b3080d1ac0ca305742b0119b75f9e87e38 \
--hash=sha256:efd7e6660674e234e29937bc1481dceb7e0336bfae75b856b4fb272b5093c5d4 \
--hash=sha256:f9b3a78f69dcbd803cf2fb3f972779875b244c1115481dfbdd567b2c22b31f6b \
--hash=sha256:fa39475eaccb98f9199eccfda4298abaf35ae0caec676ffc25b3a5e224044464 \
--hash=sha256:fbce6dae41b692a5973d0f2158f782b9ad05babc2c2019a970a1094a23909b1b
# via matplotlib
h11==0.16.0 \
--hash=sha256:4e35b956cf45792e4caa5885e69fba00bdbc6ffafbfa020300e549b208ee5ff1 \
--hash=sha256:63cf8bbe7522de3bf65932fda1d9c2772064ffb3dae62d55932da54b31cb6c86
# via uvicorn
idna==3.10 \
--hash=sha256:12f65c9b470abda6dc35cf8e63cc574b1c52b11df2c86030af0ac09b01b13ea9 \
--hash=sha256:946d195a0d259cbba61165e88e65941f16e9b36ea6ddb97f00452bae8b1287d3
# via
# anyio
# requests
imageio==2.37.0 \
--hash=sha256:11efa15b87bc7871b61590326b2d635439acc321cf7f8ce996f812543ce10eed \
--hash=sha256:71b57b3669666272c818497aebba2b4c5f20d5b37c81720e5e1a56d59c492996
# via usda-vision-cameras
kiwisolver==1.4.8 \
--hash=sha256:01c3d31902c7db5fb6182832713d3b4122ad9317c2c5877d0539227d96bb2e50 \
--hash=sha256:085940635c62697391baafaaeabdf3dd7a6c3643577dde337f4d66eba021b2b8 \
--hash=sha256:08e77738ed7538f036cd1170cbed942ef749137b1311fa2bbe2a7fda2f6bf3cc \
--hash=sha256:111793b232842991be367ed828076b03d96202c19221b5ebab421ce8bcad016f \
--hash=sha256:11e1022b524bd48ae56c9b4f9296bce77e15a2e42a502cceba602f804b32bb79 \
--hash=sha256:151dffc4865e5fe6dafce5480fab84f950d14566c480c08a53c663a0020504b6 \
--hash=sha256:16523b40aab60426ffdebe33ac374457cf62863e330a90a0383639ce14bf44b2 \
--hash=sha256:1c8ceb754339793c24aee1c9fb2485b5b1f5bb1c2c214ff13368431e51fc9a09 \
--hash=sha256:23454ff084b07ac54ca8be535f4174170c1094a4cff78fbae4f73a4bcc0d4dab \
--hash=sha256:23d5f023bdc8c7e54eb65f03ca5d5bb25b601eac4d7f1a042888a1f45237987e \
--hash=sha256:257af1622860e51b1a9d0ce387bf5c2c4f36a90594cb9514f55b074bcc787cfc \
--hash=sha256:291331973c64bb9cce50bbe871fb2e675c4331dab4f31abe89f175ad7679a4d7 \
--hash=sha256:2f0121b07b356a22fb0414cec4666bbe36fd6d0d759db3d37228f496ed67c880 \
--hash=sha256:3452046c37c7692bd52b0e752b87954ef86ee2224e624ef7ce6cb21e8c41cc1b \
--hash=sha256:34d142fba9c464bc3bbfeff15c96eab0e7310343d6aefb62a79d51421fcc5f1b \
--hash=sha256:36dbbfd34838500a31f52c9786990d00150860e46cd5041386f217101350f0d3 \
--hash=sha256:370fd2df41660ed4e26b8c9d6bbcad668fbe2560462cba151a721d49e5b6628c \
--hash=sha256:3a96c0e790ee875d65e340ab383700e2b4891677b7fcd30a699146f9384a2bb0 \
--hash=sha256:3b9b4d2892fefc886f30301cdd80debd8bb01ecdf165a449eb6e78f79f0fabd6 \
--hash=sha256:3cd3bc628b25f74aedc6d374d5babf0166a92ff1317f46267f12d2ed54bc1d30 \
--hash=sha256:3ddc373e0eef45b59197de815b1b28ef89ae3955e7722cc9710fb91cd77b7f47 \
--hash=sha256:54a62808ac74b5e55a04a408cda6156f986cefbcf0ada13572696b507cc92fa1 \
--hash=sha256:577facaa411c10421314598b50413aa1ebcf5126f704f1e5d72d7e4e9f020d90 \
--hash=sha256:68269e60ee4929893aad82666821aaacbd455284124817af45c11e50a4b42e3c \
--hash=sha256:69b5637c3f316cab1ec1c9a12b8c5f4750a4c4b71af9157645bf32830e39c03a \
--hash=sha256:7506488470f41169b86d8c9aeff587293f530a23a23a49d6bc64dab66bedc71e \
--hash=sha256:768cade2c2df13db52475bd28d3a3fac8c9eff04b0e9e2fda0f3760f20b3f7fc \
--hash=sha256:77e6f57a20b9bd4e1e2cedda4d0b986ebd0216236f0106e55c28aea3d3d69b16 \
--hash=sha256:782bb86f245ec18009890e7cb8d13a5ef54dcf2ebe18ed65f795e635a96a1c6a \
--hash=sha256:7a3ad337add5148cf51ce0b55642dc551c0b9d6248458a757f98796ca7348712 \
--hash=sha256:7e9a60b50fe8b2ec6f448fe8d81b07e40141bfced7f896309df271a0b92f80f3 \
--hash=sha256:84a2f830d42707de1d191b9490ac186bf7997a9495d4e9072210a1296345f7dc \
--hash=sha256:856b269c4d28a5c0d5e6c1955ec36ebfd1651ac00e1ce0afa3e28da95293b561 \
--hash=sha256:858416b7fb777a53f0c59ca08190ce24e9abbd3cffa18886a5781b8e3e26f65d \
--hash=sha256:87b287251ad6488e95b4f0b4a79a6d04d3ea35fde6340eb38fbd1ca9cd35bbbc \
--hash=sha256:893f5525bb92d3d735878ec00f781b2de998333659507d29ea4466208df37bed \
--hash=sha256:918139571133f366e8362fa4a297aeba86c7816b7ecf0bc79168080e2bd79957 \
--hash=sha256:99cea8b9dd34ff80c521aef46a1dddb0dcc0283cf18bde6d756f1e6f31772165 \
--hash=sha256:a17b7c4f5b2c51bb68ed379defd608a03954a1845dfed7cc0117f1cc8a9b7fd2 \
--hash=sha256:a3c44cb68861de93f0c4a8175fbaa691f0aa22550c331fefef02b618a9dcb476 \
--hash=sha256:a4d3601908c560bdf880f07d94f31d734afd1bb71e96585cace0e38ef44c6d84 \
--hash=sha256:a5ce1e481a74b44dd5e92ff03ea0cb371ae7a0268318e202be06c8f04f4f1246 \
--hash=sha256:a66f60f8d0c87ab7f59b6fb80e642ebb29fec354a4dfad687ca4092ae69d04f4 \
--hash=sha256:b21dbe165081142b1232a240fc6383fd32cdd877ca6cc89eab93e5f5883e1c25 \
--hash=sha256:b47a465040146981dc9db8647981b8cb96366fbc8d452b031e4f8fdffec3f26d \
--hash=sha256:b83dc6769ddbc57613280118fb4ce3cd08899cc3369f7d0e0fab518a7cf37fdb \
--hash=sha256:bade438f86e21d91e0cf5dd7c0ed00cda0f77c8c1616bd83f9fc157fa6760d31 \
--hash=sha256:be4816dc51c8a471749d664161b434912eee82f2ea66bd7628bd14583a833e85 \
--hash=sha256:c2b9a96e0f326205af81a15718a9073328df1173a2619a68553decb7097fd5d7 \
--hash=sha256:c5020c83e8553f770cb3b5fc13faac40f17e0b205bd237aebd21d53d733adb03 \
--hash=sha256:cc978a80a0db3a66d25767b03688f1147a69e6237175c0f4ffffaaedf744055a \
--hash=sha256:d47cfb2650f0e103d4bf68b0b5804c68da97272c84bb12850d877a95c056bd67 \
--hash=sha256:d6af5e8815fd02997cb6ad9bbed0ee1e60014438ee1a5c2444c96f87b8843502 \
--hash=sha256:d6d6bd87df62c27d4185de7c511c6248040afae67028a8a22012b010bc7ad062 \
--hash=sha256:dace81d28c787956bfbfbbfd72fdcef014f37d9b48830829e488fdb32b49d954 \
--hash=sha256:e063ef9f89885a1d68dd8b2e18f5ead48653176d10a0e324e3b0030e3a69adeb \
--hash=sha256:eaa973f1e05131de5ff3569bbba7f5fd07ea0595d3870ed4a526d486fe57fa1b \
--hash=sha256:ed33ca2002a779a2e20eeb06aea7721b6e47f2d4b8a8ece979d8ba9e2a167e34 \
--hash=sha256:fc2ace710ba7c1dfd1a3b42530b62b9ceed115f19a1656adefce7b1782a37794
# via matplotlib
matplotlib==3.10.3 \
--hash=sha256:0ab1affc11d1f495ab9e6362b8174a25afc19c081ba5b0775ef00533a4236eea \
--hash=sha256:0ef061f74cd488586f552d0c336b2f078d43bc00dc473d2c3e7bfee2272f3fa8 \
--hash=sha256:151d89cb8d33cb23345cd12490c76fd5d18a56581a16d950b48c6ff19bb2ab93 \
--hash=sha256:24853dad5b8c84c8c2390fc31ce4858b6df504156893292ce8092d190ef8151d \
--hash=sha256:2a818d8bdcafa7ed2eed74487fdb071c09c1ae24152d403952adad11fa3c65b4 \
--hash=sha256:2f82d2c5bb7ae93aaaa4cd42aca65d76ce6376f83304fa3a630b569aca274df0 \
--hash=sha256:3ddbba06a6c126e3301c3d272a99dcbe7f6c24c14024e80307ff03791a5f294e \
--hash=sha256:4f23ffe95c5667ef8a2b56eea9b53db7f43910fa4a2d5472ae0f72b64deab4d5 \
--hash=sha256:55e46cbfe1f8586adb34f7587c3e4f7dedc59d5226719faf6cb54fc24f2fd52d \
--hash=sha256:68f7878214d369d7d4215e2a9075fef743be38fa401d32e6020bab2dfabaa566 \
--hash=sha256:6c7818292a5cc372a2dc4c795e5c356942eb8350b98ef913f7fda51fe175ac5d \
--hash=sha256:748302b33ae9326995b238f606e9ed840bf5886ebafcb233775d946aa8107a15 \
--hash=sha256:748ebc3470c253e770b17d8b0557f0aa85cf8c63fd52f1a61af5b27ec0b7ffee \
--hash=sha256:7c5f0283da91e9522bdba4d6583ed9d5521566f63729ffb68334f86d0bb98049 \
--hash=sha256:9f2efccc8dcf2b86fc4ee849eea5dcaecedd0773b30f47980dc0cbeabf26ec84 \
--hash=sha256:a80fcccbef63302c0efd78042ea3c2436104c5b1a4d3ae20f864593696364ac7 \
--hash=sha256:c0b9849a17bce080a16ebcb80a7b714b5677d0ec32161a2cc0a8e5a6030ae220 \
--hash=sha256:c26dd9834e74d164d06433dc7be5d75a1e9890b926b3e57e74fa446e1a62c3e2 \
--hash=sha256:cf37d8c6ef1a48829443e8ba5227b44236d7fcaf7647caa3178a4ff9f7a5be05 \
--hash=sha256:d96985d14dc5f4a736bbea4b9de9afaa735f8a0fc2ca75be2fa9e96b2097369d \
--hash=sha256:dbed9917b44070e55640bd13419de83b4c918e52d97561544814ba463811cbc7 \
--hash=sha256:ed70453fd99733293ace1aec568255bc51c6361cb0da94fa5ebf0649fdb2150a \
--hash=sha256:eef6ed6c03717083bc6d69c2d7ee8624205c29a8e6ea5a31cd3492ecdbaee1e1 \
--hash=sha256:f6929fc618cb6db9cb75086f73b3219bbb25920cb24cee2ea7a12b04971a4158 \
--hash=sha256:fdfa07c0ec58035242bc8b2c8aae37037c9a886370eef6850703d7583e19964b
# via usda-vision-cameras
numpy==2.3.2 \
--hash=sha256:07b62978075b67eee4065b166d000d457c82a1efe726cce608b9db9dd66a73a5 \
--hash=sha256:087ffc25890d89a43536f75c5fe8770922008758e8eeeef61733957041ed2f9b \
--hash=sha256:092aeb3449833ea9c0bf0089d70c29ae480685dd2377ec9cdbbb620257f84631 \
--hash=sha256:095737ed986e00393ec18ec0b21b47c22889ae4b0cd2d5e88342e08b01141f58 \
--hash=sha256:0a4f2021a6da53a0d580d6ef5db29947025ae8b35b3250141805ea9a32bbe86b \
--hash=sha256:103ea7063fa624af04a791c39f97070bf93b96d7af7eb23530cd087dc8dbe9dc \
--hash=sha256:11e58218c0c46c80509186e460d79fbdc9ca1eb8d8aee39d8f2dc768eb781089 \
--hash=sha256:122bf5ed9a0221b3419672493878ba4967121514b1d7d4656a7580cd11dddcbf \
--hash=sha256:14a91ebac98813a49bc6aa1a0dfc09513dcec1d97eaf31ca21a87221a1cdcb15 \
--hash=sha256:1f91e5c028504660d606340a084db4b216567ded1056ea2b4be4f9d10b67197f \
--hash=sha256:20b8200721840f5621b7bd03f8dcd78de33ec522fc40dc2641aa09537df010c3 \
--hash=sha256:240259d6564f1c65424bcd10f435145a7644a65a6811cfc3201c4a429ba79170 \
--hash=sha256:2738534837c6a1d0c39340a190177d7d66fdf432894f469728da901f8f6dc910 \
--hash=sha256:27c9f90e7481275c7800dc9c24b7cc40ace3fdb970ae4d21eaff983a32f70c91 \
--hash=sha256:293b2192c6bcce487dbc6326de5853787f870aeb6c43f8f9c6496db5b1781e45 \
--hash=sha256:2c3271cc4097beb5a60f010bcc1cc204b300bb3eafb4399376418a83a1c6373c \
--hash=sha256:2f4f0215edb189048a3c03bd5b19345bdfa7b45a7a6f72ae5945d2a28272727f \
--hash=sha256:3dcf02866b977a38ba3ec10215220609ab9667378a9e2150615673f3ffd6c73b \
--hash=sha256:4209f874d45f921bde2cff1ffcd8a3695f545ad2ffbef6d3d3c6768162efab89 \
--hash=sha256:448a66d052d0cf14ce9865d159bfc403282c9bc7bb2a31b03cc18b651eca8b1a \
--hash=sha256:4ae6863868aaee2f57503c7a5052b3a2807cf7a3914475e637a0ecd366ced220 \
--hash=sha256:4d002ecf7c9b53240be3bb69d80f86ddbd34078bae04d87be81c1f58466f264e \
--hash=sha256:4e6ecfeddfa83b02318f4d84acf15fbdbf9ded18e46989a15a8b6995dfbf85ab \
--hash=sha256:508b0eada3eded10a3b55725b40806a4b855961040180028f52580c4729916a2 \
--hash=sha256:546aaf78e81b4081b2eba1d105c3b34064783027a06b3ab20b6eba21fb64132b \
--hash=sha256:572d5512df5470f50ada8d1972c5f1082d9a0b7aa5944db8084077570cf98370 \
--hash=sha256:5ad4ebcb683a1f99f4f392cc522ee20a18b2bb12a2c1c42c3d48d5a1adc9d3d2 \
--hash=sha256:66459dccc65d8ec98cc7df61307b64bf9e08101f9598755d42d8ae65d9a7a6ee \
--hash=sha256:6936aff90dda378c09bea075af0d9c675fe3a977a9d2402f95a87f440f59f619 \
--hash=sha256:69779198d9caee6e547adb933941ed7520f896fd9656834c300bdf4dd8642712 \
--hash=sha256:6f1ae3dcb840edccc45af496f312528c15b1f79ac318169d094e85e4bb35fdf1 \
--hash=sha256:71669b5daae692189540cffc4c439468d35a3f84f0c88b078ecd94337f6cb0ec \
--hash=sha256:72c6df2267e926a6d5286b0a6d556ebe49eae261062059317837fda12ddf0c1a \
--hash=sha256:72dbebb2dcc8305c431b2836bcc66af967df91be793d63a24e3d9b741374c450 \
--hash=sha256:754d6755d9a7588bdc6ac47dc4ee97867271b17cee39cb87aef079574366db0a \
--hash=sha256:76c3e9501ceb50b2ff3824c3589d5d1ab4ac857b0ee3f8f49629d0de55ecf7c2 \
--hash=sha256:7a0e27186e781a69959d0230dd9909b5e26024f8da10683bd6344baea1885168 \
--hash=sha256:7d6e390423cc1f76e1b8108c9b6889d20a7a1f59d9a60cac4a050fa734d6c1e2 \
--hash=sha256:8145dd6d10df13c559d1e4314df29695613575183fa2e2d11fac4c208c8a1f73 \
--hash=sha256:8446acd11fe3dc1830568c941d44449fd5cb83068e5c70bd5a470d323d448296 \
--hash=sha256:852ae5bed3478b92f093e30f785c98e0cb62fa0a939ed057c31716e18a7a22b9 \
--hash=sha256:87c930d52f45df092f7578889711a0768094debf73cfcde105e2d66954358125 \
--hash=sha256:8b1224a734cd509f70816455c3cffe13a4f599b1bf7130f913ba0e2c0b2006c0 \
--hash=sha256:8dc082ea901a62edb8f59713c6a7e28a85daddcb67454c839de57656478f5b19 \
--hash=sha256:906a30249315f9c8e17b085cc5f87d3f369b35fedd0051d4a84686967bdbbd0b \
--hash=sha256:938065908d1d869c7d75d8ec45f735a034771c6ea07088867f713d1cd3bbbe4f \
--hash=sha256:9c144440db4bf3bb6372d2c3e49834cc0ff7bb4c24975ab33e01199e645416f2 \
--hash=sha256:9e196ade2400c0c737d93465327d1ae7c06c7cb8a1756121ebf54b06ca183c7f \
--hash=sha256:a3ef07ec8cbc8fc9e369c8dcd52019510c12da4de81367d8b20bc692aa07573a \
--hash=sha256:a7af9ed2aa9ec5950daf05bb11abc4076a108bd3c7db9aa7251d5f107079b6a6 \
--hash=sha256:a9f66e7d2b2d7712410d3bc5684149040ef5f19856f20277cd17ea83e5006286 \
--hash=sha256:aa098a5ab53fa407fded5870865c6275a5cd4101cfdef8d6fafc48286a96e981 \
--hash=sha256:af58de8745f7fa9ca1c0c7c943616c6fe28e75d0c81f5c295810e3c83b5be92f \
--hash=sha256:b05a89f2fb84d21235f93de47129dd4f11c16f64c87c33f5e284e6a3a54e43f2 \
--hash=sha256:b5e40e80299607f597e1a8a247ff8d71d79c5b52baa11cc1cce30aa92d2da6e0 \
--hash=sha256:b9d0878b21e3918d76d2209c924ebb272340da1fb51abc00f986c258cd5e957b \
--hash=sha256:bc3186bea41fae9d8e90c2b4fb5f0a1f5a690682da79b92574d63f56b529080b \
--hash=sha256:c63d95dc9d67b676e9108fe0d2182987ccb0f11933c1e8959f42fa0da8d4fa56 \
--hash=sha256:c771cfac34a4f2c0de8e8c97312d07d64fd8f8ed45bc9f5726a7e947270152b5 \
--hash=sha256:c8d9727f5316a256425892b043736d63e89ed15bbfe6556c5ff4d9d4448ff3b3 \
--hash=sha256:cbc95b3813920145032412f7e33d12080f11dc776262df1712e1638207dde9e8 \
--hash=sha256:cefc2219baa48e468e3db7e706305fcd0c095534a192a08f31e98d83a7d45fb0 \
--hash=sha256:d95f59afe7f808c103be692175008bab926b59309ade3e6d25009e9a171f7036 \
--hash=sha256:dd937f088a2df683cbb79dda9a772b62a3e5a8a7e76690612c2737f38c6ef1b6 \
--hash=sha256:de6ea4e5a65d5a90c7d286ddff2b87f3f4ad61faa3db8dabe936b34c2275b6f8 \
--hash=sha256:e0486a11ec30cdecb53f184d496d1c6a20786c81e55e41640270130056f8ee48 \
--hash=sha256:ee807923782faaf60d0d7331f5e86da7d5e3079e28b291973c545476c2b00d07 \
--hash=sha256:efc81393f25f14d11c9d161e46e6ee348637c0a1e8a54bf9dedc472a3fae993b \
--hash=sha256:f0a1a8476ad77a228e41619af2fa9505cf69df928e9aaa165746584ea17fed2b \
--hash=sha256:f75018be4980a7324edc5930fe39aa391d5734531b1926968605416ff58c332d \
--hash=sha256:f92d6c2a8535dc4fe4419562294ff957f83a16ebdec66df0805e473ffaad8bd0 \
--hash=sha256:fb1752a3bb9a3ad2d6b090b88a9a0ae1cd6f004ef95f75825e2f382c183b2097 \
--hash=sha256:fc927d7f289d14f5e037be917539620603294454130b6de200091e23d27dc9be \
--hash=sha256:fed5527c4cf10f16c6d0b6bee1f89958bccb0ad2522c8cadc2efd318bcd545f5
# via
# contourpy
# imageio
# matplotlib
# opencv-python
# usda-vision-cameras
opencv-python==4.11.0.86 \
--hash=sha256:03d60ccae62304860d232272e4a4fda93c39d595780cb40b161b310244b736a4 \
--hash=sha256:085ad9b77c18853ea66283e98affefe2de8cc4c1f43eda4c100cf9b2721142ec \
--hash=sha256:1b92ae2c8852208817e6776ba1ea0d6b1e0a1b5431e971a2a0ddd2a8cc398202 \
--hash=sha256:432f67c223f1dc2824f5e73cdfcd9db0efc8710647d4e813012195dc9122a52a \
--hash=sha256:6b02611523803495003bd87362db3e1d2a0454a6a63025dc6658a9830570aa0d \
--hash=sha256:810549cb2a4aedaa84ad9a1c92fbfdfc14090e2749cedf2c1589ad8359aa169b \
--hash=sha256:9d05ef13d23fe97f575153558653e2d6e87103995d54e6a35db3f282fe1f9c66
# via usda-vision-cameras
packaging==25.0 \
--hash=sha256:29572ef2b1f17581046b3a2227d5c611fb25ec70ca1ba8554b24b0e69331a484 \
--hash=sha256:d443872c98d677bf60f6a1f2f8c1cb748e8fe762d2bf9d3148b5599295b0fc4f
# via matplotlib
paho-mqtt==2.1.0 \
--hash=sha256:12d6e7511d4137555a3f6ea167ae846af2c7357b10bc6fa4f7c3968fc1723834 \
--hash=sha256:6db9ba9b34ed5bc6b6e3812718c7e06e2fd7444540df2455d2c51bd58808feee
# via usda-vision-cameras
pillow==11.3.0 \
--hash=sha256:023f6d2d11784a465f09fd09a34b150ea4672e85fb3d05931d89f373ab14abb2 \
--hash=sha256:02a723e6bf909e7cea0dac1b0e0310be9d7650cd66222a5f1c571455c0a45214 \
--hash=sha256:05f6ecbeff5005399bb48d198f098a9b4b6bdf27b8487c7f38ca16eeb070cd59 \
--hash=sha256:068d9c39a2d1b358eb9f245ce7ab1b5c3246c7c8c7d9ba58cfa5b43146c06e50 \
--hash=sha256:0743841cabd3dba6a83f38a92672cccbd69af56e3e91777b0ee7f4dba4385632 \
--hash=sha256:0b275ff9b04df7b640c59ec5a3cb113eefd3795a8df80bac69646ef699c6981a \
--hash=sha256:0bce5c4fd0921f99d2e858dc4d4d64193407e1b99478bc5cacecba2311abde51 \
--hash=sha256:1019b04af07fc0163e2810167918cb5add8d74674b6267616021ab558dc98ced \
--hash=sha256:106064daa23a745510dabce1d84f29137a37224831d88eb4ce94bb187b1d7e5f \
--hash=sha256:118ca10c0d60b06d006be10a501fd6bbdfef559251ed31b794668ed569c87e12 \
--hash=sha256:13f87d581e71d9189ab21fe0efb5a23e9f28552d5be6979e84001d3b8505abe8 \
--hash=sha256:155658efb5e044669c08896c0c44231c5e9abcaadbc5cd3648df2f7c0b96b9a6 \
--hash=sha256:1904e1264881f682f02b7f8167935cce37bc97db457f8e7849dc3a6a52b99580 \
--hash=sha256:1a992e86b0dd7aeb1f053cd506508c0999d710a8f07b4c791c63843fc6a807ac \
--hash=sha256:1c627742b539bba4309df89171356fcb3cc5a9178355b2727d1b74a6cf155fbd \
--hash=sha256:1cd110edf822773368b396281a2293aeb91c90a2db00d78ea43e7e861631b722 \
--hash=sha256:1f85acb69adf2aaee8b7da124efebbdb959a104db34d3a2cb0f3793dbae422a8 \
--hash=sha256:2465a69cf967b8b49ee1b96d76718cd98c4e925414ead59fdf75cf0fd07df673 \
--hash=sha256:2a3117c06b8fb646639dce83694f2f9eac405472713fcb1ae887469c0d4f6788 \
--hash=sha256:2aceea54f957dd4448264f9bf40875da0415c83eb85f55069d89c0ed436e3542 \
--hash=sha256:2d6fcc902a24ac74495df63faad1884282239265c6839a0a6416d33faedfae7e \
--hash=sha256:30807c931ff7c095620fe04448e2c2fc673fcbb1ffe2a7da3fb39613489b1ddd \
--hash=sha256:30b7c02f3899d10f13d7a48163c8969e4e653f8b43416d23d13d1bbfdc93b9f8 \
--hash=sha256:3828ee7586cd0b2091b6209e5ad53e20d0649bbe87164a459d0676e035e8f523 \
--hash=sha256:3e184b2f26ff146363dd07bde8b711833d7b0202e27d13540bfe2e35a323a809 \
--hash=sha256:41342b64afeba938edb034d122b2dda5db2139b9a4af999729ba8818e0056477 \
--hash=sha256:41742638139424703b4d01665b807c6468e23e699e8e90cffefe291c5832b027 \
--hash=sha256:45dfc51ac5975b938e9809451c51734124e73b04d0f0ac621649821a63852e7b \
--hash=sha256:465b9e8844e3c3519a983d58b80be3f668e2a7a5db97f2784e7079fbc9f9822c \
--hash=sha256:4c834a3921375c48ee6b9624061076bc0a32a60b5532b322cc0ea64e639dd50e \
--hash=sha256:4c96f993ab8c98460cd0c001447bff6194403e8b1d7e149ade5f00594918128b \
--hash=sha256:504b6f59505f08ae014f724b6207ff6222662aab5cc9542577fb084ed0676ac7 \
--hash=sha256:5418b53c0d59b3824d05e029669efa023bbef0f3e92e75ec8428f3799487f361 \
--hash=sha256:59a03cdf019efbfeeed910bf79c7c93255c3d54bc45898ac2a4140071b02b4ae \
--hash=sha256:5e05688ccef30ea69b9317a9ead994b93975104a677a36a8ed8106be9260aa6d \
--hash=sha256:643f189248837533073c405ec2f0bb250ba54598cf80e8c1e043381a60632f58 \
--hash=sha256:67172f2944ebba3d4a7b54f2e95c786a3a50c21b88456329314caaa28cda70f6 \
--hash=sha256:676b2815362456b5b3216b4fd5bd89d362100dc6f4945154ff172e206a22c024 \
--hash=sha256:6be31e3fc9a621e071bc17bb7de63b85cbe0bfae91bb0363c893cbe67247780d \
--hash=sha256:7859a4cc7c9295f5838015d8cc0a9c215b77e43d07a25e460f35cf516df8626f \
--hash=sha256:7966e38dcd0fa11ca390aed7c6f20454443581d758242023cf36fcb319b1a874 \
--hash=sha256:79ea0d14d3ebad43ec77ad5272e6ff9bba5b679ef73375ea760261207fa8e0aa \
--hash=sha256:7b161756381f0918e05e7cb8a371fff367e807770f8fe92ecb20d905d0e1c149 \
--hash=sha256:7c8ec7a017ad1bd562f93dbd8505763e688d388cde6e4a010ae1486916e713e6 \
--hash=sha256:7d1aa4de119a0ecac0a34a9c8bde33f34022e2e8f99104e47a3ca392fd60e37d \
--hash=sha256:7db51d222548ccfd274e4572fdbf3e810a5e66b00608862f947b163e613b67dd \
--hash=sha256:83e1b0161c9d148125083a35c1c5a89db5b7054834fd4387499e06552035236c \
--hash=sha256:857844335c95bea93fb39e0fa2726b4d9d758850b34075a7e3ff4f4fa3aa3b31 \
--hash=sha256:8797edc41f3e8536ae4b10897ee2f637235c94f27404cac7297f7b607dd0716e \
--hash=sha256:8924748b688aa210d79883357d102cd64690e56b923a186f35a82cbc10f997db \
--hash=sha256:91da1d88226663594e3f6b4b8c3c8d85bd504117d043740a8e0ec449087cc494 \
--hash=sha256:921bd305b10e82b4d1f5e802b6850677f965d8394203d182f078873851dada69 \
--hash=sha256:932c754c2d51ad2b2271fd01c3d121daaa35e27efae2a616f77bf164bc0b3e94 \
--hash=sha256:93efb0b4de7e340d99057415c749175e24c8864302369e05914682ba642e5d77 \
--hash=sha256:97f07ed9f56a3b9b5f49d3661dc9607484e85c67e27f3e8be2c7d28ca032fec7 \
--hash=sha256:98a9afa7b9007c67ed84c57c9e0ad86a6000da96eaa638e4f8abe5b65ff83f0a \
--hash=sha256:9ab6ae226de48019caa8074894544af5b53a117ccb9d3b3dcb2871464c829438 \
--hash=sha256:9c412fddd1b77a75aa904615ebaa6001f169b26fd467b4be93aded278266b288 \
--hash=sha256:a1bc6ba083b145187f648b667e05a2534ecc4b9f2784c2cbe3089e44868f2b9b \
--hash=sha256:a418486160228f64dd9e9efcd132679b7a02a5f22c982c78b6fc7dab3fefb635 \
--hash=sha256:a4d336baed65d50d37b88ca5b60c0fa9d81e3a87d4a7930d3880d1624d5b31f3 \
--hash=sha256:a6444696fce635783440b7f7a9fc24b3ad10a9ea3f0ab66c5905be1c19ccf17d \
--hash=sha256:a7bc6e6fd0395bc052f16b1a8670859964dbd7003bd0af2ff08342eb6e442cfe \
--hash=sha256:b4b8f3efc8d530a1544e5962bd6b403d5f7fe8b9e08227c6b255f98ad82b4ba0 \
--hash=sha256:c37d8ba9411d6003bba9e518db0db0c58a680ab9fe5179f040b0463644bc9805 \
--hash=sha256:c84d689db21a1c397d001aa08241044aa2069e7587b398c8cc63020390b1c1b8 \
--hash=sha256:c96d333dcf42d01f47b37e0979b6bd73ec91eae18614864622d9b87bbd5bbf36 \
--hash=sha256:cd8ff254faf15591e724dc7c4ddb6bf4793efcbe13802a4ae3e863cd300b493e \
--hash=sha256:d9da3df5f9ea2a89b81bb6087177fb1f4d1c7146d583a3fe5c672c0d94e55e12 \
--hash=sha256:eb76541cba2f958032d79d143b98a3a6b3ea87f0959bbe256c0b5e416599fd5d \
--hash=sha256:ec1ee50470b0d050984394423d96325b744d55c701a439d2bd66089bff963d3c \
--hash=sha256:ee92f2fd10f4adc4b43d07ec5e779932b4eb3dbfbc34790ada5a6669bc095aa6 \
--hash=sha256:f0f5d8f4a08090c6d6d578351a2b91acf519a54986c055af27e7a93feae6d3f1 \
--hash=sha256:f8a5827f84d973d8636e9dc5764af4f0cf2318d26744b3d902931701b0d46653 \
--hash=sha256:f944255db153ebb2b19c51fe85dd99ef0ce494123f21b9db4877ffdfc5590c7c \
--hash=sha256:fdae223722da47b024b867c1ea0be64e0df702c5e0a60e27daad39bf960dd1e4 \
--hash=sha256:fe27fb049cdcca11f11a7bfda64043c37b30e6b91f10cb5bab275806c32f6ab3
# via
# imageio
# matplotlib
# usda-vision-cameras
pydantic==2.11.7 \
--hash=sha256:d989c3c6cb79469287b1569f7447a17848c998458d49ebe294e975b9baf0f0db \
--hash=sha256:dde5df002701f6de26248661f6835bbe296a47bf73990135c7d07ce741b9623b
# via fastapi
pydantic-core==2.33.2 \
--hash=sha256:04a1a413977ab517154eebb2d326da71638271477d6ad87a769102f7c2488c56 \
--hash=sha256:0a9f2c9dd19656823cb8250b0724ee9c60a82f3cdf68a080979d13092a3b0fef \
--hash=sha256:0fb2d542b4d66f9470e8065c5469ec676978d625a8b7a363f07d9a501a9cb36a \
--hash=sha256:1082dd3e2d7109ad8b7da48e1d4710c8d06c253cbc4a27c1cff4fbcaa97a9e3f \
--hash=sha256:1e063337ef9e9820c77acc768546325ebe04ee38b08703244c1309cccc4f1bab \
--hash=sha256:1ea40a64d23faa25e62a70ad163571c0b342b8bf66d5fa612ac0dec4f069d916 \
--hash=sha256:235f45e5dbcccf6bd99f9f472858849f73d11120d76ea8707115415f8e5ebebf \
--hash=sha256:2b0a451c263b01acebe51895bfb0e1cc842a5c666efe06cdf13846c7418caa9a \
--hash=sha256:2bfb5112df54209d820d7bf9317c7a6c9025ea52e49f46b6a2060104bba37de7 \
--hash=sha256:2f82865531efd18d6e07a04a17331af02cb7a651583c418df8266f17a63c6612 \
--hash=sha256:329467cecfb529c925cf2bbd4d60d2c509bc2fb52a20c1045bf09bb70971a9c1 \
--hash=sha256:3c6db6e52c6d70aa0d00d45cdb9b40f0433b96380071ea80b09277dba021ddf7 \
--hash=sha256:3dc625f4aa79713512d1976fe9f0bc99f706a9dee21dfd1810b4bbbf228d0e8a \
--hash=sha256:4c5b0a576fb381edd6d27f0a85915c6daf2f8138dc5c267a57c08a62900758c7 \
--hash=sha256:4e61206137cbc65e6d5256e1166f88331d3b6238e082d9f74613b9b765fb9025 \
--hash=sha256:52fb90784e0a242bb96ec53f42196a17278855b0f31ac7c3cc6f5c1ec4811849 \
--hash=sha256:572c7e6c8bb4774d2ac88929e3d1f12bc45714ae5ee6d9a788a9fb35e60bb04b \
--hash=sha256:5c92edd15cd58b3c2d34873597a1e20f13094f59cf88068adb18947df5455b4e \
--hash=sha256:5f483cfb75ff703095c59e365360cb73e00185e01aaea067cd19acffd2ab20ea \
--hash=sha256:61c18fba8e5e9db3ab908620af374db0ac1baa69f0f32df4f61ae23f15e586ac \
--hash=sha256:6368900c2d3ef09b69cb0b913f9f8263b03786e5b2a387706c5afb66800efd51 \
--hash=sha256:64632ff9d614e5eecfb495796ad51b0ed98c453e447a76bcbeeb69615079fc7e \
--hash=sha256:65132b7b4a1c0beded5e057324b7e16e10910c106d43675d9bd87d4f38dde162 \
--hash=sha256:6b99022f1d19bc32a4c2a0d544fc9a76e3be90f0b3f4af413f87d38749300e65 \
--hash=sha256:73cf6373c21bc80b2e0dc88444f41ae60b2f070ed02095754eb5a01df12256de \
--hash=sha256:7cb8bc3605c29176e1b105350d2e6474142d7c1bd1d9327c4a9bdb46bf827acc \
--hash=sha256:82f68293f055f51b51ea42fafc74b6aad03e70e191799430b90c13d643059ebb \
--hash=sha256:881b21b5549499972441da4758d662aeea93f1923f953e9cbaff14b8b9565aef \
--hash=sha256:8f57a69461af2a5fa6e6bbd7a5f60d3b7e6cebb687f55106933188e79ad155c1 \
--hash=sha256:95237e53bb015f67b63c91af7518a62a8660376a6a0db19b89acc77a4d6199f5 \
--hash=sha256:96081f1605125ba0855dfda83f6f3df5ec90c61195421ba72223de35ccfb2f88 \
--hash=sha256:9cb1da0f5a471435a7bc7e439b8a728e8b61e59784b2af70d7c169f8dd8ae290 \
--hash=sha256:9fdac5d6ffa1b5a83bca06ffe7583f5576555e6c8b3a91fbd25ea7780f825f7d \
--hash=sha256:a144d4f717285c6d9234a66778059f33a89096dfb9b39117663fd8413d582dcc \
--hash=sha256:a7ec89dc587667f22b6a0b6579c249fca9026ce7c333fc142ba42411fa243cdc \
--hash=sha256:bc7aee6f634a6f4a95676fcb5d6559a2c2a390330098dba5e5a5f28a2e4ada30 \
--hash=sha256:bdc25f3681f7b78572699569514036afe3c243bc3059d3942624e936ec93450e \
--hash=sha256:c083a3bdd5a93dfe480f1125926afcdbf2917ae714bdb80b36d34318b2bec5d9 \
--hash=sha256:c2fc0a768ef76c15ab9238afa6da7f69895bb5d1ee83aeea2e3509af4472d0b9 \
--hash=sha256:c52b02ad8b4e2cf14ca7b3d918f3eb0ee91e63b3167c32591e57c4317e134f8f \
--hash=sha256:c8e7af2f4e0194c22b5b37205bfb293d166a7344a5b0d0eaccebc376546d77d5 \
--hash=sha256:cca3868ddfaccfbc4bfb1d608e2ccaaebe0ae628e1416aeb9c4d88c001bb45ab \
--hash=sha256:d87c561733f66531dced0da6e864f44ebf89a8fba55f31407b00c2f7f9449593 \
--hash=sha256:db4b41f9bd95fbe5acd76d89920336ba96f03e149097365afe1cb092fceb89a1 \
--hash=sha256:dc46a01bf8d62f227d5ecee74178ffc448ff4e5197c756331f71efcc66dc980f \
--hash=sha256:dd14041875d09cc0f9308e37a6f8b65f5585cf2598a53aa0123df8b129d481f8 \
--hash=sha256:de4b83bb311557e439b9e186f733f6c645b9417c84e2eb8203f3f820a4b988bf \
--hash=sha256:e799c050df38a639db758c617ec771fd8fb7a5f8eaaa4b27b101f266b216a246 \
--hash=sha256:e80b087132752f6b3d714f041ccf74403799d3b23a72722ea2e6ba2e892555b9 \
--hash=sha256:eb8c529b2819c37140eb51b914153063d27ed88e3bdc31b71198a198e921e011 \
--hash=sha256:f517ca031dfc037a9c07e748cefd8d96235088b83b4f4ba8939105d20fa1dcd6 \
--hash=sha256:f889f7a40498cc077332c7ab6b4608d296d852182211787d4f3ee377aaae66e8 \
--hash=sha256:f941635f2a3d96b2973e867144fde513665c87f13fe0e193c158ac51bfaaa7b2 \
--hash=sha256:fa854f5cf7e33842a892e5c73f45327760bc7bc516339fda888c75ae60edaeb6 \
--hash=sha256:fe5b32187cbc0c862ee201ad66c30cf218e5ed468ec8dc1cf49dec66e160cc4d
# via pydantic
pyparsing==3.2.3 \
--hash=sha256:a749938e02d6fd0b59b356ca504a24982314bb090c383e3cf201c95ef7e2bfcf \
--hash=sha256:b9c13f1ab8b3b542f72e28f634bad4de758ab3ce4546e4301970ad6fa77c38be
# via matplotlib
python-dateutil==2.9.0.post0 \
--hash=sha256:37dd54208da7e1cd875388217d5e00ebd4179249f90fb72437e91a35459a0ad3 \
--hash=sha256:a8b2bc7bffae282281c8140a97d3aa9c14da0b136dfe83f850eea9a5f7470427
# via matplotlib
pytz==2025.2 \
--hash=sha256:360b9e3dbb49a209c21ad61809c7fb453643e048b38924c765813546746e81c3 \
--hash=sha256:5ddf76296dd8c44c26eb8f4b6f35488f3ccbf6fbbd7adee0b7262d43f0ec2f00
# via usda-vision-cameras
requests==2.32.4 \
--hash=sha256:27babd3cda2a6d50b30443204ee89830707d396671944c998b5975b031ac2b2c \
--hash=sha256:27d0316682c8a29834d3264820024b62a36942083d52caf2f14c0591336d3422
# via usda-vision-cameras
six==1.17.0 \
--hash=sha256:4721f391ed90541fddacab5acf947aa0d3dc7d27b2e1e8eda2be8970586c3274 \
--hash=sha256:ff70335d468e7eb6ec65b95b99d3a2836546063f63acc5171de367e834932a81
# via python-dateutil
sniffio==1.3.1 \
--hash=sha256:2f6da418d1f1e0fddd844478f41680e794e6051915791a034ff65e5f100525a2 \
--hash=sha256:f4324edc670a0f49750a81b895f35c3adb843cca46f0530f79fc1babb23789dc
# via anyio
starlette==0.47.2 \
--hash=sha256:6ae9aa5db235e4846decc1e7b79c4f346adf41e9777aebeb49dfd09bbd7023d8 \
--hash=sha256:c5847e96134e5c5371ee9fac6fdf1a67336d5815e09eb2a01fdb57a351ef915b
# via fastapi
tqdm==4.67.1 \
--hash=sha256:26445eca388f82e72884e0d580d5464cd801a3ea01e63e5601bdff9ba6a48de2 \
--hash=sha256:f8aef9c52c08c13a65f30ea34f4e5aac3fd1a34959879d7e59e63027286627f2
# via usda-vision-cameras
typing-extensions==4.14.1 \
--hash=sha256:38b39f4aeeab64884ce9f74c94263ef78f3c22467c8724005483154c26648d36 \
--hash=sha256:d1e1e3b58374dc93031d6eda2420a48ea44a36c2b4766a4fdeb3710755731d76
# via
# anyio
# fastapi
# pydantic
# pydantic-core
# starlette
# typing-inspection
typing-inspection==0.4.1 \
--hash=sha256:389055682238f53b04f7badcb49b989835495a96700ced5dab2d8feae4b26f51 \
--hash=sha256:6ae134cc0203c33377d43188d4064e9b357dba58cff3185f22924610e70a9d28
# via pydantic
urllib3==2.5.0 \
--hash=sha256:3fc47733c7e419d4bc3f6b3dc2b4f890bb743906a30d56ba4a5bfa4bbff92760 \
--hash=sha256:e6b01673c0fa6a13e374b50871808eb3bf7046c4b125b216f6bf1cc604cff0dc
# via requests
uvicorn==0.35.0 \
--hash=sha256:197535216b25ff9b785e29a0b79199f55222193d47f820816e7da751e9bc8d4a \
--hash=sha256:bc662f087f7cf2ce11a1d7fd70b90c9f98ef2e2831556dd078d131b96cc94a01
# via usda-vision-cameras
websockets==15.0.1 \
--hash=sha256:0701bc3cfcb9164d04a14b149fd74be7347a530ad3bbf15ab2c678a2cd3dd9a2 \
--hash=sha256:0af68c55afbd5f07986df82831c7bff04846928ea8d1fd7f30052638788bc9b5 \
--hash=sha256:0f3c1e2ab208db911594ae5b4f79addeb3501604a165019dd221c0bdcabe4db8 \
--hash=sha256:16b6c1b3e57799b9d38427dda63edcbe4926352c47cf88588c0be4ace18dac85 \
--hash=sha256:229cf1d3ca6c1804400b0a9790dc66528e08a6a1feec0d5040e8b9eb14422375 \
--hash=sha256:27ccee0071a0e75d22cb35849b1db43f2ecd3e161041ac1ee9d2352ddf72f065 \
--hash=sha256:3be571a8b5afed347da347bfcf27ba12b069d9d7f42cb8c7028b5e98bbb12597 \
--hash=sha256:3c714d2fc58b5ca3e285461a4cc0c9a66bd0e24c5da9911e30158286c9b5be7f \
--hash=sha256:3e90baa811a5d73f3ca0bcbf32064d663ed81318ab225ee4f427ad4e26e5aff3 \
--hash=sha256:54479983bd5fb469c38f2f5c7e3a24f9a4e70594cd68cd1fa6b9340dadaff7cf \
--hash=sha256:558d023b3df0bffe50a04e710bc87742de35060580a293c2a984299ed83bc4e4 \
--hash=sha256:592f1a9fe869c778694f0aa806ba0374e97648ab57936f092fd9d87f8bc03665 \
--hash=sha256:595b6c3969023ecf9041b2936ac3827e4623bfa3ccf007575f04c5a6aa318c22 \
--hash=sha256:5a939de6b7b4e18ca683218320fc67ea886038265fd1ed30173f5ce3f8e85675 \
--hash=sha256:5d54b09eba2bada6011aea5375542a157637b91029687eb4fdb2dab11059c1b4 \
--hash=sha256:64dee438fed052b52e4f98f76c5790513235efaa1ef7f3f2192c392cd7c91b65 \
--hash=sha256:66dd88c918e3287efc22409d426c8f729688d89a0c587c88971a0faa2c2f3792 \
--hash=sha256:678999709e68425ae2593acf2e3ebcbcf2e69885a5ee78f9eb80e6e371f1bf57 \
--hash=sha256:693f0192126df6c2327cce3baa7c06f2a117575e32ab2308f7f8216c29d9e2e3 \
--hash=sha256:746ee8dba912cd6fc889a8147168991d50ed70447bf18bcda7039f7d2e3d9151 \
--hash=sha256:756c56e867a90fb00177d530dca4b097dd753cde348448a1012ed6c5131f8b7d \
--hash=sha256:823c248b690b2fd9303ba00c4f66cd5e2d8c3ba4aa968b2779be9532a4dad431 \
--hash=sha256:82544de02076bafba038ce055ee6412d68da13ab47f0c60cab827346de828dee \
--hash=sha256:8dd8327c795b3e3f219760fa603dcae1dcc148172290a8ab15158cf85a953413 \
--hash=sha256:8fdc51055e6ff4adeb88d58a11042ec9a5eae317a0a53d12c062c8a8865909e8 \
--hash=sha256:ba9e56e8ceeeedb2e080147ba85ffcd5cd0711b89576b83784d8605a7df455fa \
--hash=sha256:c338ffa0520bdb12fbc527265235639fb76e7bc7faafbb93f6ba80d9c06578a9 \
--hash=sha256:d50fd1ee42388dcfb2b3676132c78116490976f1300da28eb629272d5d93e905 \
--hash=sha256:d5f6b181bb38171a8ad1d6aa58a67a6aa9d4b38d0f8c5f496b9e42561dfc62fe \
--hash=sha256:d99e5546bf73dbad5bf3547174cd6cb8ba7273062a23808ffea025ecb1cf8562 \
--hash=sha256:e09473f095a819042ecb2ab9465aee615bd9c2028e4ef7d933600a8401c79561 \
--hash=sha256:e8b56bdcdb4505c8078cb6c7157d9811a85790f2f2b3632c7d1462ab5783d215 \
--hash=sha256:ee443ef070bb3b6ed74514f5efaa37a252af57c90eb33b956d35c8e9c10a1931 \
--hash=sha256:f7a866fbc1e97b5c617ee4116daaa09b722101d4a3c170c787450ba409f9736f \
--hash=sha256:fcd5cf9e305d7b8338754470cf69cf81f420459dbae8a3b40cee57417f4614a7
# via usda-vision-cameras

api/run_auto_recorder.py Normal file

@@ -0,0 +1,36 @@
#!/usr/bin/env python3
"""
Service script to run the standalone auto-recorder

Usage:
    sudo python run_auto_recorder.py
"""

import sys
import os
from pathlib import Path

# Add the project root to the path
project_root = Path(__file__).parent
sys.path.insert(0, str(project_root))

from usda_vision_system.recording.standalone_auto_recorder import StandaloneAutoRecorder


def main():
    """Main entry point"""
    print("🚀 Starting USDA Vision Auto-Recorder Service")

    # Check if running as root
    if os.geteuid() != 0:
        print("❌ This script must be run as root (use sudo)")
        print("   sudo python run_auto_recorder.py")
        sys.exit(1)

    # Create and run auto-recorder
    recorder = StandaloneAutoRecorder()
    recorder.run()


if __name__ == "__main__":
    main()

api/setup_service.sh Executable file

@@ -0,0 +1,61 @@
#!/bin/bash
# USDA Vision Camera System Service Setup Script
echo "USDA Vision Camera System - Service Setup"
echo "========================================"
# Check if running as root
if [ "$EUID" -ne 0 ]; then
echo "❌ This script must be run as root (use sudo)"
exit 1
fi
# Get the current directory (where the script is located)
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
SERVICE_FILE="$SCRIPT_DIR/usda-vision-camera.service"
echo "📁 Working directory: $SCRIPT_DIR"
# Check if service file exists
if [ ! -f "$SERVICE_FILE" ]; then
echo "❌ Service file not found: $SERVICE_FILE"
exit 1
fi
# Make start_system.sh executable
echo "🔧 Making start_system.sh executable..."
chmod +x "$SCRIPT_DIR/start_system.sh"
# Update the service file with the correct path
echo "📝 Updating service file with correct paths..."
sed -i "s|WorkingDirectory=.*|WorkingDirectory=$SCRIPT_DIR|g" "$SERVICE_FILE"
sed -i "s|ExecStart=.*|ExecStart=/bin/bash $SCRIPT_DIR/start_system.sh|g" "$SERVICE_FILE"
# Copy service file to systemd directory
echo "📋 Installing service file..."
cp "$SERVICE_FILE" /etc/systemd/system/
# Reload systemd daemon
echo "🔄 Reloading systemd daemon..."
systemctl daemon-reload
# Enable the service
echo "✅ Enabling USDA Vision Camera service..."
systemctl enable usda-vision-camera.service
# Check service status
echo "📊 Service status:"
systemctl status usda-vision-camera.service --no-pager
echo ""
echo "🎉 Service setup complete!"
echo ""
echo "Available commands:"
echo " sudo systemctl start usda-vision-camera # Start the service"
echo " sudo systemctl stop usda-vision-camera # Stop the service"
echo " sudo systemctl restart usda-vision-camera # Restart the service"
echo " sudo systemctl status usda-vision-camera # Check service status"
echo " sudo journalctl -u usda-vision-camera -f # View live logs"
echo ""
echo "The service will automatically start when the container/system boots."

api/setup_timezone.sh Executable file

@@ -0,0 +1,289 @@
#!/bin/bash
# Time Synchronization Setup for USDA Vision Camera System
# Location: Atlanta, Georgia (Eastern Time Zone)
echo "🕐 Setting up time synchronization for Atlanta, Georgia..."
echo "=================================================="
# Check if running as root
if [ "$EUID" -eq 0 ]; then
echo "Running as root - can make system changes"
CAN_SUDO=true
else
echo "Running as user - will use sudo for system changes"
CAN_SUDO=false
fi
# Function to run commands with appropriate privileges
run_cmd() {
    if [ "$CAN_SUDO" = true ]; then
        "$@"
    else
        sudo "$@"
    fi
}
# 1. Set timezone to Eastern Time (Atlanta, Georgia)
echo "📍 Setting timezone to America/New_York (Eastern Time)..."
if run_cmd timedatectl set-timezone America/New_York; then
    echo "✅ Timezone set successfully"
else
    echo "❌ Failed to set timezone - trying alternative method..."
    if run_cmd ln -sf /usr/share/zoneinfo/America/New_York /etc/localtime; then
        echo "✅ Timezone set using alternative method"
    else
        echo "❌ Failed to set timezone"
    fi
fi
# 2. Install and configure NTP for time synchronization
echo ""
echo "🔄 Setting up NTP time synchronization..."
# Check if systemd-timesyncd is available (modern systems)
# ("systemctl is-available" is not a real verb; "systemctl cat" exits non-zero when the unit does not exist)
if systemctl cat systemd-timesyncd >/dev/null 2>&1; then
    echo "Using systemd-timesyncd for time synchronization..."

    # Enable and start systemd-timesyncd
    run_cmd systemctl enable systemd-timesyncd
    run_cmd systemctl start systemd-timesyncd

    # Configure NTP servers (US-based servers for better accuracy)
    echo "Configuring NTP servers..."
    cat << EOF | run_cmd tee /etc/systemd/timesyncd.conf
[Time]
NTP=time.nist.gov pool.ntp.org time.google.com
FallbackNTP=time.cloudflare.com time.windows.com
RootDistanceMaxSec=5
PollIntervalMinSec=32
PollIntervalMaxSec=2048
EOF

    # Restart timesyncd to apply new configuration
    run_cmd systemctl restart systemd-timesyncd
    echo "✅ systemd-timesyncd configured and started"
elif command -v ntpd >/dev/null 2>&1; then
    echo "Using ntpd for time synchronization..."

    # Install ntp if not present
    if ! command -v ntpd >/dev/null 2>&1; then
        echo "Installing ntp package..."
        if command -v apt-get >/dev/null 2>&1; then
            run_cmd apt-get update && run_cmd apt-get install -y ntp
        elif command -v yum >/dev/null 2>&1; then
            run_cmd yum install -y ntp
        elif command -v dnf >/dev/null 2>&1; then
            run_cmd dnf install -y ntp
        fi
    fi

    # Configure NTP servers
    cat << EOF | run_cmd tee /etc/ntp.conf
# NTP configuration for Atlanta, Georgia
driftfile /var/lib/ntp/ntp.drift

# US-based NTP servers for better accuracy
server time.nist.gov iburst
server pool.ntp.org iburst
server time.google.com iburst
server time.cloudflare.com iburst

# Fallback servers
server 0.us.pool.ntp.org iburst
server 1.us.pool.ntp.org iburst
server 2.us.pool.ntp.org iburst
server 3.us.pool.ntp.org iburst

# Security settings
restrict default kod notrap nomodify nopeer noquery
restrict -6 default kod notrap nomodify nopeer noquery
restrict 127.0.0.1
restrict -6 ::1

# Local clock as fallback
server 127.127.1.0
fudge 127.127.1.0 stratum 10
EOF

    # Enable and start NTP service
    run_cmd systemctl enable ntp
    run_cmd systemctl start ntp
    echo "✅ NTP configured and started"
else
    echo "⚠️ No NTP service found - installing chrony as alternative..."

    # Install chrony
    if command -v apt-get >/dev/null 2>&1; then
        run_cmd apt-get update && run_cmd apt-get install -y chrony
    elif command -v yum >/dev/null 2>&1; then
        run_cmd yum install -y chrony
    elif command -v dnf >/dev/null 2>&1; then
        run_cmd dnf install -y chrony
    fi

    # Configure chrony
    cat << EOF | run_cmd tee /etc/chrony/chrony.conf
# Chrony configuration for Atlanta, Georgia
server time.nist.gov iburst
server pool.ntp.org iburst
server time.google.com iburst
server time.cloudflare.com iburst

# US pool servers
pool us.pool.ntp.org iburst

driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
EOF

    # Enable and start chrony
    run_cmd systemctl enable chrony
    run_cmd systemctl start chrony
    echo "✅ Chrony configured and started"
fi
# 3. Force immediate time synchronization
echo ""
echo "⏰ Forcing immediate time synchronization..."
if systemctl is-active systemd-timesyncd >/dev/null 2>&1; then
    run_cmd systemctl restart systemd-timesyncd
    sleep 2
    run_cmd timedatectl set-ntp true
elif systemctl is-active ntp >/dev/null 2>&1; then
    run_cmd ntpdate -s time.nist.gov
    run_cmd systemctl restart ntp
elif systemctl is-active chrony >/dev/null 2>&1; then
    # chronyc is the command-line client for the chrony daemon
    run_cmd chronyc sources -v
    run_cmd chronyc makestep
fi
# 4. Configure hardware clock
echo ""
echo "🔧 Configuring hardware clock..."
if run_cmd hwclock --systohc; then
    echo "✅ Hardware clock synchronized with system clock"
else
    echo "⚠️ Could not sync hardware clock (this may be normal in containers)"
fi
# 5. Display current time information
echo ""
echo "📊 Current Time Information:"
echo "================================"
echo "System time: $(date)"
echo "UTC time: $(date -u)"
echo "Timezone: $(timedatectl show --property=Timezone --value 2>/dev/null || cat /etc/timezone 2>/dev/null || echo 'Unknown')"
# Check if timedatectl is available
if command -v timedatectl >/dev/null 2>&1; then
    echo ""
    echo "Time synchronization status:"
    timedatectl status
fi
# 6. Create a time check script for the vision system
echo ""
echo "📝 Creating time verification script..."
cat << 'EOF' > check_time.py
#!/usr/bin/env python3
"""
Time verification script for USDA Vision Camera System
Checks if system time is properly synchronized
"""

import datetime
import pytz
import requests
import json


def check_system_time():
    """Check system time against multiple sources"""
    print("🕐 USDA Vision Camera System - Time Verification")
    print("=" * 50)

    # Get local time
    local_time = datetime.datetime.now()
    utc_time = datetime.datetime.utcnow()

    # Get Atlanta timezone
    atlanta_tz = pytz.timezone('America/New_York')
    atlanta_time = datetime.datetime.now(atlanta_tz)

    print(f"Local system time: {local_time}")
    print(f"UTC time: {utc_time}")
    print(f"Atlanta time: {atlanta_time}")
    print(f"Timezone: {atlanta_time.tzname()}")

    # Check against world time API
    try:
        print("\n🌐 Checking against world time API...")
        response = requests.get("http://worldtimeapi.org/api/timezone/America/New_York", timeout=5)
        if response.status_code == 200:
            data = response.json()
            api_time = datetime.datetime.fromisoformat(data['datetime'].replace('Z', '+00:00'))

            # Compare times (allow 5 second difference)
            time_diff = abs((atlanta_time.replace(tzinfo=None) - api_time.replace(tzinfo=None)).total_seconds())

            print(f"API time: {api_time}")
            print(f"Time difference: {time_diff:.2f} seconds")

            if time_diff < 5:
                print("✅ Time is synchronized (within 5 seconds)")
                return True
            else:
                print("❌ Time is NOT synchronized (difference > 5 seconds)")
                return False
        else:
            print("⚠️ Could not reach time API")
            return None
    except Exception as e:
        print(f"⚠️ Error checking time API: {e}")
        return None


if __name__ == "__main__":
    check_system_time()
EOF
chmod +x check_time.py
echo "✅ Time verification script created: check_time.py"
# 7. Add time sync check to the vision system startup
echo ""
echo "🔗 Integrating time sync with vision system..."
# Update the startup script to include time check
if [ -f "start_system.sh" ]; then
# Create backup
cp start_system.sh start_system.sh.backup
# Add time sync check to startup script
sed -i '/# Run system tests first/i\
# Check time synchronization\
echo "🕐 Checking time synchronization..."\
python check_time.py\
echo ""' start_system.sh
echo "✅ Updated start_system.sh to include time verification"
fi
echo ""
echo "🎉 Time synchronization setup complete!"
echo ""
echo "Summary:"
echo "- Timezone set to America/New_York (Eastern Time)"
echo "- NTP synchronization configured and enabled"
echo "- Time verification script created (check_time.py)"
echo "- Startup script updated to check time sync"
echo ""
echo "To verify time sync manually, run: python check_time.py"
echo "Current time: $(date)"

api/start_system.sh Executable file

@@ -0,0 +1,67 @@
#!/bin/bash
# USDA Vision Camera System Startup Script
echo "USDA Vision Camera System - Startup Script"
echo "=========================================="
# Check if virtual environment exists
if [ ! -d ".venv" ]; then
echo "❌ Virtual environment not found. Please run 'uv sync' first."
exit 1
fi
# Activate virtual environment
echo "🔧 Activating virtual environment..."
source .venv/bin/activate
# Check if config file exists
if [ ! -f "config.json" ]; then
echo "⚠️ Config file not found. Using default configuration."
fi
# Check storage directory
if [ ! -d "/storage" ]; then
echo "📁 Creating storage directory..."
sudo mkdir -p /storage
sudo chown $USER:$USER /storage
echo "✅ Storage directory created at /storage"
fi
# Check time synchronization
echo "🕐 Checking time synchronization..."
python check_time.py
echo ""
# Run system tests first
echo "🧪 Running system tests..."
python test_system.py
if [ $? -ne 0 ]; then
    echo "❌ System tests failed. Please check the configuration."
    # When running as a service, don't prompt for user input
    if [ -t 0 ]; then
        # Interactive mode - prompt user
        read -p "Do you want to continue anyway? (y/N): " -n 1 -r
        echo
        if [[ ! $REPLY =~ ^[Yy]$ ]]; then
            exit 1
        fi
    else
        # Non-interactive mode (service) - continue with warning
        echo "⚠️ Running in non-interactive mode. Continuing despite test failures..."
        sleep 2
    fi
fi
echo ""
echo "🚀 Starting USDA Vision Camera System..."
echo " - MQTT monitoring will begin automatically"
echo " - Camera recording will start when machines turn on"
echo " - API server will be available at http://localhost:8000"
echo " - Press Ctrl+C to stop the system"
echo ""
# Start the system
python main.py "$@"
echo "👋 USDA Vision Camera System stopped."

api/start_system.sh.backup Executable file

@@ -0,0 +1,55 @@
#!/bin/bash
# USDA Vision Camera System Startup Script
echo "USDA Vision Camera System - Startup Script"
echo "=========================================="
# Check if virtual environment exists
if [ ! -d ".venv" ]; then
echo "❌ Virtual environment not found. Please run 'uv sync' first."
exit 1
fi
# Activate virtual environment
echo "🔧 Activating virtual environment..."
source .venv/bin/activate
# Check if config file exists
if [ ! -f "config.json" ]; then
echo "⚠️ Config file not found. Using default configuration."
fi
# Check storage directory
if [ ! -d "/storage" ]; then
echo "📁 Creating storage directory..."
sudo mkdir -p /storage
sudo chown $USER:$USER /storage
echo "✅ Storage directory created at /storage"
fi
# Run system tests first
echo "🧪 Running system tests..."
python test_system.py
if [ $? -ne 0 ]; then
    echo "❌ System tests failed. Please check the configuration."
    read -p "Do you want to continue anyway? (y/N): " -n 1 -r
    echo
    if [[ ! $REPLY =~ ^[Yy]$ ]]; then
        exit 1
    fi
fi
echo ""
echo "🚀 Starting USDA Vision Camera System..."
echo " - MQTT monitoring will begin automatically"
echo " - Camera recording will start when machines turn on"
echo " - API server will be available at http://localhost:8000"
echo " - Press Ctrl+C to stop the system"
echo ""
# Start the system
python main.py "$@"
echo "👋 USDA Vision Camera System stopped."


@@ -0,0 +1,87 @@
#!/usr/bin/env python3
"""
Test script for the standalone auto-recorder

This script tests the standalone auto-recording functionality by:
1. Starting the auto-recorder
2. Simulating MQTT messages
3. Checking if recordings start/stop correctly
"""

import time
import threading

import paho.mqtt.client as mqtt

from usda_vision_system.recording.standalone_auto_recorder import StandaloneAutoRecorder


def test_mqtt_publisher():
    """Test function that publishes MQTT messages to simulate machine state changes"""
    # Wait for auto-recorder to start
    time.sleep(3)

    # Create MQTT client for testing
    # (paho-mqtt 2.x, as pinned in the requirements, requires an explicit callback API version)
    test_client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
    test_client.connect("192.168.1.110", 1883, 60)

    print("\n🔄 Testing auto-recording with MQTT messages...")

    # Test 1: Turn on vibratory_conveyor (should start camera2 recording)
    print("\n📡 Test 1: Turning ON vibratory_conveyor (should start camera2)")
    test_client.publish("vision/vibratory_conveyor/state", "on")
    time.sleep(3)

    # Test 2: Turn on blower_separator (should start camera1 recording)
    print("\n📡 Test 2: Turning ON blower_separator (should start camera1)")
    test_client.publish("vision/blower_separator/state", "on")
    time.sleep(3)

    # Test 3: Turn off vibratory_conveyor (should stop camera2 recording)
    print("\n📡 Test 3: Turning OFF vibratory_conveyor (should stop camera2)")
    test_client.publish("vision/vibratory_conveyor/state", "off")
    time.sleep(3)

    # Test 4: Turn off blower_separator (should stop camera1 recording)
    print("\n📡 Test 4: Turning OFF blower_separator (should stop camera1)")
    test_client.publish("vision/blower_separator/state", "off")
    time.sleep(3)

    print("\n✅ Test completed!")
    test_client.disconnect()


def main():
    """Main test function"""
    print("🚀 Starting Standalone Auto-Recorder Test")

    # Create auto-recorder
    recorder = StandaloneAutoRecorder()

    # Start test publisher in background
    test_thread = threading.Thread(target=test_mqtt_publisher, daemon=True)
    test_thread.start()

    # Run auto-recorder for 30 seconds
    try:
        if recorder.start():
            print("✅ Auto-recorder started successfully")

            # Run for 30 seconds
            for i in range(30):
                time.sleep(1)
                if i % 5 == 0:
                    print(f"⏱️ Running... {30-i} seconds remaining")
        else:
            print("❌ Failed to start auto-recorder")
    except KeyboardInterrupt:
        print("\n⏹️ Test interrupted by user")
    finally:
        recorder.stop()
        print("🏁 Test completed")


if __name__ == "__main__":
    main()
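
The test above drives the recorder through the `vision/<machine>/state` topics with plain `on`/`off` payloads. For reference, a minimal sketch of a subscriber for the same topic family is shown below; it is an illustration only, reusing the broker address from the test script, and is not the production listener (that lives in `StandaloneAutoRecorder`):

```python
# Minimal sketch of a listener for the machine-state topics used above.
# Assumptions: broker at 192.168.1.110 (taken from the test script) and
# paho-mqtt 2.x, which requires an explicit callback API version.
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    machine = msg.topic.split("/")[1]   # topic shape: "vision/<machine>/state"
    state = msg.payload.decode()        # payload is "on" or "off"
    print(f"{machine} -> {state}")

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
client.on_message = on_message
client.connect("192.168.1.110", 1883, 60)
client.subscribe("vision/+/state")      # "+" matches any machine name
client.loop_forever()
```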


@@ -0,0 +1,274 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>USDA Vision Camera - Video Streaming Test</title>
<style>
body {
font-family: Arial, sans-serif;
max-width: 1200px;
margin: 0 auto;
padding: 20px;
background-color: #f5f5f5;
}
.container {
background: white;
padding: 20px;
border-radius: 8px;
box-shadow: 0 2px 4px rgba(0,0,0,0.1);
}
h1 {
color: #2c3e50;
text-align: center;
}
.video-container {
margin: 20px 0;
text-align: center;
}
video {
width: 100%;
max-width: 800px;
height: auto;
border: 2px solid #3498db;
border-radius: 4px;
}
.video-info {
background: #ecf0f1;
padding: 10px;
margin: 10px 0;
border-radius: 4px;
font-family: monospace;
font-size: 14px;
}
.test-results {
margin: 20px 0;
padding: 15px;
background: #e8f5e8;
border-left: 4px solid #27ae60;
border-radius: 4px;
}
.error {
background: #fdf2f2;
border-left-color: #e74c3c;
}
button {
background: #3498db;
color: white;
border: none;
padding: 10px 20px;
border-radius: 4px;
cursor: pointer;
margin: 5px;
}
button:hover {
background: #2980b9;
}
.video-list {
margin: 20px 0;
}
.video-item {
background: #f8f9fa;
padding: 10px;
margin: 5px 0;
border-radius: 4px;
cursor: pointer;
border: 1px solid #dee2e6;
}
.video-item:hover {
background: #e9ecef;
}
.video-item.selected {
background: #d1ecf1;
border-color: #3498db;
}
</style>
</head>
<body>
<div class="container">
<h1>🎥 USDA Vision Camera System - Video Streaming Test</h1>
<div class="test-results" id="testResults">
<strong>✅ Streaming Implementation Updated!</strong><br>
• Fixed progressive streaming with chunked responses<br>
• Added HTTP range request support (206 Partial Content)<br>
• Videos should now play in web browsers instead of downloading<br>
</div>
<div class="video-container">
<video id="videoPlayer" controls preload="metadata">
<source id="videoSource" src="" type="video/mp4">
Your browser does not support the video tag.
</video>
<div class="video-info" id="videoInfo">
<strong>Current Video:</strong> None selected<br>
<strong>Streaming URL:</strong> -<br>
<strong>Status:</strong> Ready to test
</div>
</div>
<div>
<button onclick="loadVideoList()">🔄 Load Available Videos</button>
<button onclick="testCurrentVideo()">🧪 Test Current Video</button>
<button onclick="checkStreamingHeaders()">📊 Check Streaming Headers</button>
</div>
<div class="video-list" id="videoList">
<p>Click "Load Available Videos" to see available MP4 files...</p>
</div>
<div id="debugInfo" class="video-info" style="display: none;">
<strong>Debug Information:</strong><br>
<div id="debugContent"></div>
</div>
</div>
<script>
const API_BASE = 'http://localhost:8000';
let currentVideoId = null;
let availableVideos = [];
async function loadVideoList() {
try {
const response = await fetch(`${API_BASE}/videos/?limit=20`);
const data = await response.json();
availableVideos = data.videos.filter(v => v.format === 'mp4' && v.is_streamable);
const videoListDiv = document.getElementById('videoList');
if (availableVideos.length === 0) {
videoListDiv.innerHTML = '<p>❌ No streamable MP4 videos found</p>';
return;
}
videoListDiv.innerHTML = `
<h3>📹 Available MP4 Videos (${availableVideos.length})</h3>
${availableVideos.map(video => `
<div class="video-item" onclick="selectVideo('${video.file_id}')">
<strong>${video.file_id}</strong><br>
<small>Camera: ${video.camera_name} | Size: ${(video.file_size_bytes / 1024 / 1024).toFixed(1)} MB | Status: ${video.status}</small>
</div>
`).join('')}
`;
// Auto-select first video
if (availableVideos.length > 0) {
selectVideo(availableVideos[0].file_id);
}
} catch (error) {
document.getElementById('videoList').innerHTML = `<p class="error">❌ Error loading videos: ${error.message}</p>`;
}
}
function selectVideo(fileId) {
currentVideoId = fileId;
const video = availableVideos.find(v => v.file_id === fileId);
// Update UI
document.querySelectorAll('.video-item').forEach(item => item.classList.remove('selected'));
event.target.closest('.video-item').classList.add('selected');
// Update video player
const streamingUrl = `${API_BASE}/videos/${fileId}/stream`;
document.getElementById('videoSource').src = streamingUrl;
document.getElementById('videoPlayer').load();
// Update info
document.getElementById('videoInfo').innerHTML = `
<strong>Current Video:</strong> ${fileId}<br>
<strong>Camera:</strong> ${video.camera_name}<br>
<strong>Size:</strong> ${(video.file_size_bytes / 1024 / 1024).toFixed(1)} MB<br>
<strong>Streaming URL:</strong> ${streamingUrl}<br>
<strong>Status:</strong> Ready to play
`;
}
async function testCurrentVideo() {
if (!currentVideoId) {
alert('Please select a video first');
return;
}
const streamingUrl = `${API_BASE}/videos/${currentVideoId}/stream`;
const debugDiv = document.getElementById('debugInfo');
const debugContent = document.getElementById('debugContent');
try {
// Test range request
const rangeResponse = await fetch(streamingUrl, {
headers: { 'Range': 'bytes=0-1023' }
});
// Test full request (with timeout)
const controller = new AbortController();
setTimeout(() => controller.abort(), 2000);
const fullResponse = await fetch(streamingUrl, {
signal: controller.signal
}).catch(e => ({ status: 'timeout', headers: new Map() }));
debugContent.innerHTML = `
<strong>Range Request Test (bytes=0-1023):</strong><br>
Status: ${rangeResponse.status} ${rangeResponse.statusText}<br>
Content-Type: ${rangeResponse.headers.get('content-type')}<br>
Content-Length: ${rangeResponse.headers.get('content-length')}<br>
Content-Range: ${rangeResponse.headers.get('content-range')}<br>
Accept-Ranges: ${rangeResponse.headers.get('accept-ranges')}<br><br>
<strong>Full Request Test:</strong><br>
Status: ${fullResponse.status || 'timeout (expected)'}<br>
Content-Type: ${fullResponse.headers?.get('content-type') || 'N/A'}<br>
<br><strong>Expected Results:</strong><br>
✅ Range request: 206 Partial Content<br>
✅ Content-Length: 1024<br>
✅ Content-Range: bytes 0-1023/[file_size]<br>
✅ Accept-Ranges: bytes<br>
✅ Full request: 200 OK (or timeout)
`;
debugDiv.style.display = 'block';
} catch (error) {
debugContent.innerHTML = `❌ Error testing video: ${error.message}`;
debugDiv.style.display = 'block';
}
}
async function checkStreamingHeaders() {
if (!currentVideoId) {
alert('Please select a video first');
return;
}
const streamingUrl = `${API_BASE}/videos/${currentVideoId}/stream`;
try {
let response = await fetch(streamingUrl, { method: 'HEAD' }).catch(() => null);
if (!response || !response.ok) {
// HEAD may fail or be rejected by some servers; fall back to a 1-byte range GET
response = await fetch(streamingUrl, { headers: { 'Range': 'bytes=0-0' } });
}
const headers = {};
response.headers.forEach((value, key) => {
headers[key] = value;
});
alert(`Streaming Headers:\n${JSON.stringify(headers, null, 2)}`);
} catch (error) {
alert(`Error checking headers: ${error.message}`);
}
}
// Auto-load videos when page loads
window.addEventListener('load', loadVideoList);
</script>
</body>
</html>
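
For reference, the range handling this page exercises can be sketched as a minimal FastAPI-style endpoint. This is a sketch, not the actual server code: `resolve_video_path()` is a hypothetical helper, and suffix ranges (`bytes=-N`) are omitted for brevity.

```python
import os
import re
from fastapi import FastAPI, Request
from fastapi.responses import Response, StreamingResponse

app = FastAPI()
RANGE_RE = re.compile(r"bytes=(\d+)-(\d*)")  # suffix ranges (bytes=-N) omitted for brevity

def resolve_video_path(file_id: str) -> str:
    # Hypothetical helper: map a file_id to a file under the storage root.
    return os.path.join("storage", f"{file_id}.mp4")

def iter_file(path: str, start: int, end: int, chunk_size: int = 1024 * 1024):
    # Yield the requested byte window in chunks so large files stream progressively.
    with open(path, "rb") as f:
        f.seek(start)
        remaining = end - start + 1
        while remaining > 0:
            chunk = f.read(min(chunk_size, remaining))
            if not chunk:
                break
            remaining -= len(chunk)
            yield chunk

@app.get("/videos/{file_id}/stream")
def stream_video(file_id: str, request: Request):
    path = resolve_video_path(file_id)
    file_size = os.path.getsize(path)
    range_header = request.headers.get("range")
    match = RANGE_RE.match(range_header) if range_header else None
    if match:
        start = int(match.group(1))
        end = int(match.group(2)) if match.group(2) else file_size - 1
        if start >= file_size:
            return Response(status_code=416)  # Range Not Satisfiable
        end = min(end, file_size - 1)
        headers = {
            "Content-Range": f"bytes {start}-{end}/{file_size}",
            "Accept-Ranges": "bytes",
            "Content-Length": str(end - start + 1),
        }
        return StreamingResponse(iter_file(path, start, end), status_code=206,
                                 media_type="video/mp4", headers=headers)
    # No Range header: stream the whole file with 200 OK.
    headers = {"Accept-Ranges": "bytes", "Content-Length": str(file_size)}
    return StreamingResponse(iter_file(path, 0, file_size - 1),
                             media_type="video/mp4", headers=headers)
```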

View File

@@ -0,0 +1,173 @@
#!/usr/bin/env python3
"""
Test script to verify the API changes for camera settings and filename handling.
"""
import requests
import json
import time
from datetime import datetime
# API base URL
BASE_URL = "http://localhost:8000"
def test_api_endpoint(endpoint, method="GET", data=None):
"""Test an API endpoint and return the response"""
url = f"{BASE_URL}{endpoint}"
try:
if method == "GET":
response = requests.get(url)
elif method == "POST":
response = requests.post(url, json=data, headers={"Content-Type": "application/json"})
print(f"\n{method} {endpoint}")
print(f"Status: {response.status_code}")
if response.status_code == 200:
result = response.json()
print(f"Response: {json.dumps(result, indent=2)}")
return result
else:
print(f"Error: {response.text}")
return None
except requests.exceptions.ConnectionError:
print(f"Error: Could not connect to {url}")
print("Make sure the API server is running with: python main.py")
return None
except Exception as e:
print(f"Error: {e}")
return None
def test_camera_recording_with_settings():
"""Test camera recording with new settings parameters"""
print("=" * 60)
print("Testing Camera Recording API with New Settings")
print("=" * 60)
# Test 1: Basic recording without settings
print("\n1. Testing basic recording (no settings)")
basic_request = {
"camera_name": "camera1",
"filename": "test_basic.avi"
}
result = test_api_endpoint("/cameras/camera1/start-recording", "POST", basic_request)
if result and result.get("success"):
print("✅ Basic recording started successfully")
print(f" Filename: {result.get('filename')}")
# Stop recording
time.sleep(2)
test_api_endpoint("/cameras/camera1/stop-recording", "POST")
else:
print("❌ Basic recording failed")
# Test 2: Recording with camera settings
print("\n2. Testing recording with camera settings")
settings_request = {
"camera_name": "camera1",
"filename": "test_with_settings.avi",
"exposure_ms": 2.0,
"gain": 4.0,
"fps": 5.0
}
result = test_api_endpoint("/cameras/camera1/start-recording", "POST", settings_request)
if result and result.get("success"):
print("✅ Recording with settings started successfully")
print(f" Filename: {result.get('filename')}")
# Stop recording
time.sleep(2)
test_api_endpoint("/cameras/camera1/stop-recording", "POST")
else:
print("❌ Recording with settings failed")
# Test 3: Recording with only settings (no filename)
print("\n3. Testing recording with settings only (no filename)")
settings_only_request = {
"camera_name": "camera1",
"exposure_ms": 1.5,
"gain": 3.0,
"fps": 7.0
}
result = test_api_endpoint("/cameras/camera1/start-recording", "POST", settings_only_request)
if result and result.get("success"):
print("✅ Recording with settings only started successfully")
print(f" Filename: {result.get('filename')}")
# Stop recording
time.sleep(2)
test_api_endpoint("/cameras/camera1/stop-recording", "POST")
else:
print("❌ Recording with settings only failed")
# Test 4: Test filename datetime prefix
print("\n4. Testing filename datetime prefix")
timestamp_before = datetime.now().strftime("%Y%m%d_%H%M")
filename_test_request = {
"camera_name": "camera1",
"filename": "my_custom_name.avi"
}
result = test_api_endpoint("/cameras/camera1/start-recording", "POST", filename_test_request)
if result and result.get("success"):
returned_filename = result.get('filename', '')
print(f" Original filename: my_custom_name.avi")
print(f" Returned filename: {returned_filename}")
# Check if datetime prefix was added
if timestamp_before in returned_filename and "my_custom_name.avi" in returned_filename:
print("✅ Datetime prefix correctly added to filename")
else:
print("❌ Datetime prefix not properly added")
# Stop recording
time.sleep(2)
test_api_endpoint("/cameras/camera1/stop-recording", "POST")
else:
print("❌ Filename test failed")
def test_system_status():
"""Test basic system status to ensure API is working"""
print("\n" + "=" * 60)
print("Testing System Status")
print("=" * 60)
# Test system status
result = test_api_endpoint("/system/status")
if result:
print("✅ System status API working")
print(f" System started: {result.get('system_started')}")
print(f" MQTT connected: {result.get('mqtt_connected')}")
else:
print("❌ System status API failed")
# Test camera status
result = test_api_endpoint("/cameras")
if result:
print("✅ Camera status API working")
for camera_name, camera_info in result.items():
print(f" {camera_name}: {camera_info.get('status')}")
else:
print("❌ Camera status API failed")
if __name__ == "__main__":
print("USDA Vision Camera System - API Changes Test")
print("This script tests the new camera settings parameters and filename handling")
print("\nMake sure the system is running with: python main.py")
# Test system status first
test_system_status()
# Test camera recording with new features
test_camera_recording_with_settings()
print("\n" + "=" * 60)
print("Test completed!")
print("=" * 60)

View File

@@ -0,0 +1,92 @@
#!/usr/bin/env python3
"""
Test script for camera recovery API endpoints.
This script tests the new camera recovery functionality without requiring actual cameras.
"""
import requests
import json
import time
from typing import Dict, Any
# API base URL
BASE_URL = "http://localhost:8000"
def test_endpoint(method: str, endpoint: str, data: Dict[Any, Any] = None) -> Dict[Any, Any]:
"""Test an API endpoint and return the response"""
url = f"{BASE_URL}{endpoint}"
try:
if method.upper() == "GET":
response = requests.get(url, timeout=10)
elif method.upper() == "POST":
response = requests.post(url, json=data or {}, timeout=10)
else:
raise ValueError(f"Unsupported method: {method}")
print(f"\n{method} {endpoint}")
print(f"Status: {response.status_code}")
if response.headers.get('content-type', '').startswith('application/json'):
result = response.json()
print(f"Response: {json.dumps(result, indent=2)}")
return result
else:
print(f"Response: {response.text}")
return {"text": response.text}
except requests.exceptions.ConnectionError:
print(f"❌ Connection failed - API server not running at {BASE_URL}")
return {"error": "connection_failed"}
except requests.exceptions.Timeout:
print(f"❌ Request timeout")
return {"error": "timeout"}
except Exception as e:
print(f"❌ Error: {e}")
return {"error": str(e)}
def main():
"""Test camera recovery API endpoints"""
print("🔧 Testing Camera Recovery API Endpoints")
print("=" * 50)
# Test basic endpoints first
print("\n📋 BASIC API TESTS")
test_endpoint("GET", "/health")
test_endpoint("GET", "/cameras")
# Test camera recovery endpoints
print("\n🔧 CAMERA RECOVERY TESTS")
camera_names = ["camera1", "camera2"]
for camera_name in camera_names:
print(f"\n--- Testing {camera_name} ---")
# Test connection
test_endpoint("POST", f"/cameras/{camera_name}/test-connection")
# Test reconnect
test_endpoint("POST", f"/cameras/{camera_name}/reconnect")
# Test restart grab
test_endpoint("POST", f"/cameras/{camera_name}/restart-grab")
# Test reset timestamp
test_endpoint("POST", f"/cameras/{camera_name}/reset-timestamp")
# Test full reset
test_endpoint("POST", f"/cameras/{camera_name}/full-reset")
# Test reinitialize
test_endpoint("POST", f"/cameras/{camera_name}/reinitialize")
time.sleep(0.5) # Small delay between tests
print("\n✅ Camera recovery API tests completed!")
print("\nNote: Some operations may fail if cameras are not connected,")
print("but the API endpoints should respond with proper error messages.")
if __name__ == "__main__":
main()

View File

@@ -0,0 +1,168 @@
#!/usr/bin/env python3
"""
Test script for MQTT events API endpoint
This script tests the new MQTT events history functionality by:
1. Starting the system components
2. Simulating MQTT messages
3. Testing the API endpoint to retrieve events
"""
import asyncio
import time
import requests
import json
from datetime import datetime
# Test configuration
API_BASE_URL = "http://localhost:8000"
MQTT_EVENTS_ENDPOINT = f"{API_BASE_URL}/mqtt/events"
def test_api_endpoint():
"""Test the MQTT events API endpoint"""
print("🧪 Testing MQTT Events API Endpoint")
print("=" * 50)
try:
# Test basic endpoint
print("📡 Testing GET /mqtt/events (default limit=5)")
response = requests.get(MQTT_EVENTS_ENDPOINT)
if response.status_code == 200:
data = response.json()
print(f"✅ API Response successful")
print(f"📊 Total events: {data.get('total_events', 0)}")
print(f"📋 Events returned: {len(data.get('events', []))}")
if data.get('events'):
print(f"🕐 Last updated: {data.get('last_updated')}")
print("\n📝 Recent events:")
for i, event in enumerate(data['events'], 1):
timestamp = datetime.fromisoformat(event['timestamp']).strftime('%H:%M:%S')
print(f" {i}. [{timestamp}] {event['machine_name']}: {event['payload']} -> {event['normalized_state']}")
else:
print("📭 No events found")
else:
print(f"❌ API Error: {response.status_code}")
print(f" Response: {response.text}")
except requests.exceptions.ConnectionError:
print("❌ Connection Error: API server not running")
print(" Start the system first: python -m usda_vision_system.main")
except Exception as e:
print(f"❌ Error: {e}")
print()
# Test with custom limit
try:
print("📡 Testing GET /mqtt/events?limit=10")
response = requests.get(f"{MQTT_EVENTS_ENDPOINT}?limit=10")
if response.status_code == 200:
data = response.json()
print(f"✅ API Response successful")
print(f"📋 Events returned: {len(data.get('events', []))}")
else:
print(f"❌ API Error: {response.status_code}")
except Exception as e:
print(f"❌ Error: {e}")
def test_system_status():
"""Test system status to verify API is running"""
print("🔍 Checking System Status")
print("=" * 50)
try:
response = requests.get(f"{API_BASE_URL}/system/status")
if response.status_code == 200:
data = response.json()
print(f"✅ System Status: {'Running' if data.get('system_started') else 'Not Started'}")
print(f"🔗 MQTT Connected: {'Yes' if data.get('mqtt_connected') else 'No'}")
print(f"📡 Last MQTT Message: {data.get('last_mqtt_message', 'None')}")
print(f"⏱️ Uptime: {data.get('uptime_seconds', 0):.1f} seconds")
return True
else:
print(f"❌ System Status Error: {response.status_code}")
return False
except requests.exceptions.ConnectionError:
print("❌ Connection Error: API server not running")
print(" Start the system first: python -m usda_vision_system.main")
return False
except Exception as e:
print(f"❌ Error: {e}")
return False
def test_mqtt_status():
"""Test MQTT status"""
print("📡 Checking MQTT Status")
print("=" * 50)
try:
response = requests.get(f"{API_BASE_URL}/mqtt/status")
if response.status_code == 200:
data = response.json()
print(f"🔗 MQTT Connected: {'Yes' if data.get('connected') else 'No'}")
print(f"🏠 Broker: {data.get('broker_host')}:{data.get('broker_port')}")
print(f"📋 Subscribed Topics: {len(data.get('subscribed_topics', []))}")
print(f"📊 Message Count: {data.get('message_count', 0)}")
print(f"❌ Error Count: {data.get('error_count', 0)}")
if data.get('subscribed_topics'):
print("📍 Topics:")
for topic in data['subscribed_topics']:
print(f" - {topic}")
return True
else:
print(f"❌ MQTT Status Error: {response.status_code}")
return False
except Exception as e:
print(f"❌ Error: {e}")
return False
def main():
"""Main test function"""
print("🧪 MQTT Events API Test")
print("=" * 60)
print(f"🎯 API Base URL: {API_BASE_URL}")
print(f"📡 Events Endpoint: {MQTT_EVENTS_ENDPOINT}")
print()
# Test system status first
if not test_system_status():
print("\n❌ System not running. Please start the system first:")
print(" python -m usda_vision_system.main")
return
print()
# Test MQTT status
if not test_mqtt_status():
print("\n❌ MQTT not available")
return
print()
# Test the events API
test_api_endpoint()
print("\n" + "=" * 60)
print("🎯 Test Instructions:")
print("1. Make sure the system is running")
print("2. Turn machines on/off to generate MQTT events")
print("3. Run this test again to see the events")
print("4. Check the admin dashboard to see events displayed")
print()
print("📋 API Usage:")
print(f" GET {MQTT_EVENTS_ENDPOINT}")
print(f" GET {MQTT_EVENTS_ENDPOINT}?limit=10")
if __name__ == "__main__":
main()
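
For reference, the `/mqtt/events` response this script parses looks roughly like the following. Field names are taken from what the script reads; the values and machine names are hypothetical.

```python
example_response = {
    "total_events": 2,
    "last_updated": "2025-07-28T15:35:00-04:00",
    "events": [
        {
            "timestamp": "2025-07-28T15:34:53-04:00",
            "machine_name": "blower",
            "payload": "ON",
            "normalized_state": "on",
        },
        {
            "timestamp": "2025-07-28T15:32:15-04:00",
            "machine_name": "cracker",
            "payload": "OFF",
            "normalized_state": "off",
        },
    ],
}
```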

View File

@@ -0,0 +1,80 @@
#!/usr/bin/env python3
"""
Test script to verify the frame conversion fix works correctly.
"""
import sys
import os
import numpy as np
# Add the current directory to Python path
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))
# Add camera SDK to path
sys.path.append(os.path.join(os.path.dirname(__file__), "camera_sdk"))
try:
import mvsdk
print("✅ mvsdk imported successfully")
except ImportError as e:
print(f"❌ Failed to import mvsdk: {e}")
sys.exit(1)
def test_frame_conversion():
"""Test the frame conversion logic"""
print("🧪 Testing frame conversion logic...")
# Simulate frame data
width, height = 640, 480
frame_size = width * height * 3 # RGB
# Create mock frame data
mock_frame_data = np.random.randint(0, 255, frame_size, dtype=np.uint8)
# Create a mock frame buffer (simulate memory address)
frame_buffer = mock_frame_data.ctypes.data
# Create mock FrameHead
class MockFrameHead:
def __init__(self):
self.iWidth = width
self.iHeight = height
self.uBytes = frame_size
frame_head = MockFrameHead()
try:
# Test the conversion logic (similar to what's in streamer.py)
frame_data_buffer = (mvsdk.c_ubyte * frame_head.uBytes).from_address(frame_buffer)
frame_data = np.frombuffer(frame_data_buffer, dtype=np.uint8)
frame = frame_data.reshape((frame_head.iHeight, frame_head.iWidth, 3))
print(f"✅ Frame conversion successful!")
print(f" Frame shape: {frame.shape}")
print(f" Frame dtype: {frame.dtype}")
print(f" Frame size: {frame.size} bytes")
return True
except Exception as e:
print(f"❌ Frame conversion failed: {e}")
return False
def main():
print("🔧 Frame Conversion Test")
print("=" * 40)
success = test_frame_conversion()
if success:
print("\n✅ Frame conversion fix is working correctly!")
print("📋 The streaming issue should be resolved after system restart.")
else:
print("\n❌ Frame conversion fix needs more work.")
print("\n💡 To apply the fix:")
print("1. Restart the USDA vision system")
print("2. Test streaming again")
if __name__ == "__main__":
main()

View File

@@ -0,0 +1,131 @@
#!/usr/bin/env python3
"""
Test script to demonstrate maximum FPS capture functionality.
"""
import requests
import json
import time
from datetime import datetime
BASE_URL = "http://localhost:8000"
def test_fps_modes():
"""Test different FPS modes to demonstrate the functionality"""
print("=" * 60)
print("Testing Maximum FPS Capture Functionality")
print("=" * 60)
# Test configurations
test_configs = [
{
"name": "Normal FPS (3.0)",
"data": {
"filename": "normal_fps_test.avi",
"exposure_ms": 1.0,
"gain": 3.0,
"fps": 3.0
}
},
{
"name": "High FPS (10.0)",
"data": {
"filename": "high_fps_test.avi",
"exposure_ms": 0.5,
"gain": 2.0,
"fps": 10.0
}
},
{
"name": "Maximum FPS (fps=0)",
"data": {
"filename": "max_fps_test.avi",
"exposure_ms": 0.1, # Very short exposure for max speed
"gain": 1.0, # Low gain to avoid overexposure
"fps": 0 # Maximum speed - no delay
}
},
{
"name": "Default FPS (omitted)",
"data": {
"filename": "default_fps_test.avi",
"exposure_ms": 1.0,
"gain": 3.0
# fps omitted - uses camera config default
}
}
]
for i, config in enumerate(test_configs, 1):
print(f"\n{i}. Testing {config['name']}")
print("-" * 40)
# Start recording
try:
response = requests.post(
f"{BASE_URL}/cameras/camera1/start-recording",
json=config['data'],
headers={"Content-Type": "application/json"}
)
if response.status_code == 200:
result = response.json()
if result.get('success'):
print(f"✅ Recording started successfully")
print(f" Filename: {result.get('filename')}")
print(f" Settings: {json.dumps(config['data'], indent=6)}")
# Record for a short time
print(f" Recording for 3 seconds...")
time.sleep(3)
# Stop recording
stop_response = requests.post(f"{BASE_URL}/cameras/camera1/stop-recording")
if stop_response.status_code == 200:
stop_result = stop_response.json()
if stop_result.get('success'):
print(f"✅ Recording stopped successfully")
if 'duration_seconds' in stop_result:
print(f" Duration: {stop_result['duration_seconds']:.1f}s")
else:
print(f"❌ Failed to stop recording: {stop_result.get('message')}")
else:
print(f"❌ Stop request failed: {stop_response.status_code}")
else:
print(f"❌ Recording failed: {result.get('message')}")
else:
print(f"❌ Request failed: {response.status_code} - {response.text}")
except requests.exceptions.ConnectionError:
print(f"❌ Could not connect to {BASE_URL}")
print("Make sure the API server is running with: python main.py")
break
except Exception as e:
print(f"❌ Error: {e}")
# Wait between tests
if i < len(test_configs):
print(" Waiting 2 seconds before next test...")
time.sleep(2)
print("\n" + "=" * 60)
print("FPS Test Summary:")
print("=" * 60)
print("• fps > 0: Controlled frame rate with sleep delay")
print("• fps = 0: MAXIMUM speed capture (no delay between frames)")
print("• fps omitted: Uses camera config default")
print("• Video files with fps=0 are saved with 30 FPS metadata")
print("• Actual capture rate with fps=0 depends on:")
print(" - Camera hardware capabilities")
print(" - Exposure time (shorter = faster)")
print(" - Processing overhead")
print("=" * 60)
if __name__ == "__main__":
print("USDA Vision Camera System - Maximum FPS Test")
print("This script demonstrates fps=0 for maximum capture speed")
print("\nMake sure the system is running with: python main.py")
test_fps_modes()
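
The fps rule summarized above boils down to a pacing step inside the capture loop. A minimal sketch, not the actual implementation:

```python
import time

def pace_capture(fps: float) -> None:
    """fps > 0: throttle the loop to the requested rate.
    fps == 0: no delay -- capture as fast as camera and pipeline allow."""
    if fps > 0:
        time.sleep(1.0 / fps)
```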

View File

@@ -0,0 +1,199 @@
#!/usr/bin/env python3
"""
Test script for camera streaming functionality.
This script tests the new streaming capabilities without interfering with recording.
"""
import sys
import os
import time
import requests
import threading
from datetime import datetime
# Add the current directory to Python path
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))
def test_api_endpoints():
"""Test the streaming API endpoints"""
base_url = "http://localhost:8000"
print("🧪 Testing Camera Streaming API Endpoints")
print("=" * 50)
# Test system status
try:
response = requests.get(f"{base_url}/system/status", timeout=5)
if response.status_code == 200:
print("✅ System status endpoint working")
data = response.json()
print(f" System: {data.get('status', 'Unknown')}")
print(f" Camera Manager: {'Running' if data.get('camera_manager_running') else 'Stopped'}")
else:
print(f"❌ System status endpoint failed: {response.status_code}")
except Exception as e:
print(f"❌ System status endpoint error: {e}")
# Test camera list
try:
response = requests.get(f"{base_url}/cameras", timeout=5)
if response.status_code == 200:
print("✅ Camera list endpoint working")
cameras = response.json()
print(f" Found {len(cameras)} cameras: {list(cameras.keys())}")
# Test streaming for each camera
for camera_name in cameras.keys():
test_camera_streaming(base_url, camera_name)
else:
print(f"❌ Camera list endpoint failed: {response.status_code}")
except Exception as e:
print(f"❌ Camera list endpoint error: {e}")
def test_camera_streaming(base_url, camera_name):
"""Test streaming for a specific camera"""
print(f"\n🎥 Testing streaming for {camera_name}")
print("-" * 30)
# Test start streaming
try:
response = requests.post(f"{base_url}/cameras/{camera_name}/start-stream", timeout=10)
if response.status_code == 200:
print(f"✅ Start stream endpoint working for {camera_name}")
data = response.json()
print(f" Response: {data.get('message', 'No message')}")
else:
print(f"❌ Start stream failed for {camera_name}: {response.status_code}")
print(f" Error: {response.text}")
return
except Exception as e:
print(f"❌ Start stream error for {camera_name}: {e}")
return
# Wait a moment for stream to initialize
time.sleep(2)
# Test stream endpoint (just check if it responds)
try:
response = requests.get(f"{base_url}/cameras/{camera_name}/stream", timeout=5, stream=True)
if response.status_code == 200:
print(f"✅ Stream endpoint responding for {camera_name}")
print(f" Content-Type: {response.headers.get('content-type', 'Unknown')}")
# Read a small amount of data to verify it's working
chunk_count = 0
for chunk in response.iter_content(chunk_size=1024):
chunk_count += 1
if chunk_count >= 3: # Read a few chunks then stop
break
print(f" Received {chunk_count} data chunks")
else:
print(f"❌ Stream endpoint failed for {camera_name}: {response.status_code}")
except Exception as e:
print(f"❌ Stream endpoint error for {camera_name}: {e}")
# Test stop streaming
try:
response = requests.post(f"{base_url}/cameras/{camera_name}/stop-stream", timeout=5)
if response.status_code == 200:
print(f"✅ Stop stream endpoint working for {camera_name}")
data = response.json()
print(f" Response: {data.get('message', 'No message')}")
else:
print(f"❌ Stop stream failed for {camera_name}: {response.status_code}")
except Exception as e:
print(f"❌ Stop stream error for {camera_name}: {e}")
def test_concurrent_recording_and_streaming():
"""Test that streaming doesn't interfere with recording"""
base_url = "http://localhost:8000"
print("\n🔄 Testing Concurrent Recording and Streaming")
print("=" * 50)
try:
# Get available cameras
response = requests.get(f"{base_url}/cameras", timeout=5)
if response.status_code != 200:
print("❌ Cannot get camera list for concurrent test")
return
cameras = response.json()
if not cameras:
print("❌ No cameras available for concurrent test")
return
camera_name = list(cameras.keys())[0] # Use first camera
print(f"Using camera: {camera_name}")
# Start streaming
print("1. Starting streaming...")
response = requests.post(f"{base_url}/cameras/{camera_name}/start-stream", timeout=10)
if response.status_code != 200:
print(f"❌ Failed to start streaming: {response.text}")
return
time.sleep(2)
# Start recording
print("2. Starting recording...")
response = requests.post(f"{base_url}/cameras/{camera_name}/start-recording",
json={"filename": "test_concurrent_recording.avi"}, timeout=10)
if response.status_code == 200:
print("✅ Recording started successfully while streaming")
else:
print(f"❌ Failed to start recording while streaming: {response.text}")
# Let both run for a few seconds
print("3. Running both streaming and recording for 5 seconds...")
time.sleep(5)
# Stop recording
print("4. Stopping recording...")
response = requests.post(f"{base_url}/cameras/{camera_name}/stop-recording", timeout=5)
if response.status_code == 200:
print("✅ Recording stopped successfully")
else:
print(f"❌ Failed to stop recording: {response.text}")
# Stop streaming
print("5. Stopping streaming...")
response = requests.post(f"{base_url}/cameras/{camera_name}/stop-stream", timeout=5)
if response.status_code == 200:
print("✅ Streaming stopped successfully")
else:
print(f"❌ Failed to stop streaming: {response.text}")
print("✅ Concurrent test completed successfully!")
except Exception as e:
print(f"❌ Concurrent test error: {e}")
def main():
"""Main test function"""
print("🚀 USDA Vision Camera Streaming Test")
print("=" * 50)
print(f"Test started at: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}")
print()
# Wait for system to be ready
print("⏳ Waiting for system to be ready...")
time.sleep(3)
# Run tests
test_api_endpoints()
test_concurrent_recording_and_streaming()
print("\n" + "=" * 50)
print("🏁 Test completed!")
print("\n📋 Next Steps:")
print("1. Open camera_preview.html in your browser")
print("2. Click 'Start Stream' for any camera")
print("3. Verify live preview works without blocking recording")
print("4. Test concurrent recording and streaming")
if __name__ == "__main__":
main()
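
The live-preview stream this script reads chunk-by-chunk is typically served as multipart MJPEG. A minimal sketch, assuming a hypothetical `get_latest_frame()` frame source; the actual streamer implementation may differ:

```python
import cv2
import numpy as np
from fastapi import FastAPI
from fastapi.responses import StreamingResponse

app = FastAPI()

def get_latest_frame(camera_name: str) -> np.ndarray:
    # Hypothetical stand-in: a flat gray test frame. The real system would pull
    # the most recent frame from the capture thread without blocking recording.
    return np.full((480, 640, 3), 128, dtype=np.uint8)

def mjpeg_generator(camera_name: str):
    # Each iteration emits one JPEG part; browsers replace the image in place.
    while True:
        frame = get_latest_frame(camera_name)
        ok, jpeg = cv2.imencode(".jpg", frame)
        if not ok:
            continue
        yield (b"--frame\r\n"
               b"Content-Type: image/jpeg\r\n\r\n" + jpeg.tobytes() + b"\r\n")

@app.get("/cameras/{camera_name}/stream")
def stream(camera_name: str):
    return StreamingResponse(mjpeg_generator(camera_name),
                             media_type="multipart/x-mixed-replace; boundary=frame")
```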

77
api/tests/core/check_time.py Executable file
View File

@@ -0,0 +1,77 @@
#!/usr/bin/env python3
"""
Time verification script for USDA Vision Camera System
Checks if system time is properly synchronized
"""
import datetime
import pytz
import requests
import json
def check_system_time():
"""Check system time against multiple sources"""
print("🕐 USDA Vision Camera System - Time Verification")
print("=" * 50)
# Get local time
local_time = datetime.datetime.now()
utc_time = datetime.datetime.utcnow()
# Get Atlanta timezone
atlanta_tz = pytz.timezone('America/New_York')
atlanta_time = datetime.datetime.now(atlanta_tz)
print(f"Local system time: {local_time}")
print(f"UTC time: {utc_time}")
print(f"Atlanta time: {atlanta_time}")
print(f"Timezone: {atlanta_time.tzname()}")
# Check against multiple time APIs for reliability
time_apis = [
{
"name": "WorldTimeAPI",
"url": "http://worldtimeapi.org/api/timezone/America/New_York",
"parser": lambda data: datetime.datetime.fromisoformat(data['datetime'].replace('Z', '+00:00'))
},
{
"name": "WorldClockAPI",
"url": "http://worldclockapi.com/api/json/est/now",
"parser": lambda data: datetime.datetime.fromisoformat(data['currentDateTime'])
}
]
for api in time_apis:
try:
print(f"\n🌐 Checking against {api['name']}...")
response = requests.get(api['url'], timeout=5)
if response.status_code == 200:
data = response.json()
api_time = api['parser'](data)
# Compare times (allow 5 second difference)
time_diff = abs((atlanta_time.replace(tzinfo=None) - api_time.replace(tzinfo=None)).total_seconds())
print(f"API time: {api_time}")
print(f"Time difference: {time_diff:.2f} seconds")
if time_diff < 5:
print("✅ Time is synchronized (within 5 seconds)")
return True
else:
print("❌ Time is NOT synchronized (difference > 5 seconds)")
return False
else:
print(f"⚠️ {api['name']} returned status {response.status_code}")
continue
except Exception as e:
print(f"⚠️ Error checking {api['name']}: {e}")
continue
print("⚠️ Could not reach any time API services")
print("⚠️ This may be due to network connectivity issues")
print("⚠️ System will continue but time synchronization cannot be verified")
return None
if __name__ == "__main__":
check_system_time()

View File

@@ -0,0 +1,56 @@
#!/usr/bin/env python3
"""
Test timezone functionality for the USDA Vision Camera System.
"""
import sys
import os
# Add the current directory to Python path
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))
from usda_vision_system.core.timezone_utils import (
now_atlanta, format_atlanta_timestamp, format_filename_timestamp,
check_time_sync, log_time_info
)
import logging
def test_timezone_functions():
"""Test timezone utility functions"""
print("🕐 Testing USDA Vision Camera System Timezone Functions")
print("=" * 60)
# Test current time functions
atlanta_time = now_atlanta()
print(f"Current Atlanta time: {atlanta_time}")
print(f"Timezone: {atlanta_time.tzname()}")
print(f"UTC offset: {atlanta_time.strftime('%z')}")
# Test timestamp formatting
timestamp_str = format_atlanta_timestamp()
filename_str = format_filename_timestamp()
print(f"\nTimestamp formats:")
print(f" Display format: {timestamp_str}")
print(f" Filename format: {filename_str}")
# Test time sync
print(f"\n🔄 Testing time synchronization...")
sync_info = check_time_sync()
print(f"Sync status: {sync_info['sync_status']}")
if sync_info.get('time_diff_seconds') is not None:
print(f"Time difference: {sync_info['time_diff_seconds']:.2f} seconds")
# Test logging
print(f"\n📝 Testing time logging...")
logging.basicConfig(level=logging.INFO)
log_time_info()
print(f"\n✅ All timezone tests completed successfully!")
# Show example filename that would be generated
example_filename = f"camera1_recording_{filename_str}.avi"
print(f"\nExample recording filename: {example_filename}")
if __name__ == "__main__":
test_timezone_functions()
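
A rough sketch of what the timezone helpers exercised above might look like with pytz (the real `usda_vision_system.core.timezone_utils` module may differ):

```python
from datetime import datetime
import pytz

ATLANTA_TZ = pytz.timezone("America/New_York")

def now_atlanta() -> datetime:
    # datetime.now(tz) handles pytz zones correctly (unlike .replace(tzinfo=...))
    return datetime.now(ATLANTA_TZ)

def format_atlanta_timestamp() -> str:
    return now_atlanta().strftime("%Y-%m-%d %H:%M:%S %Z")

def format_filename_timestamp() -> str:
    # Filesystem-safe stamp, e.g. "20250807_205734"
    return now_atlanta().strftime("%Y%m%d_%H%M%S")
```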

View File

@@ -0,0 +1,225 @@
#!/usr/bin/env python3
"""
Test script for the USDA Vision Camera System.
This script performs basic tests to verify system components are working correctly.
"""
import sys
import os
import time
import json
import requests
from datetime import datetime
# Add the current directory to Python path
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))
def test_imports():
"""Test that all modules can be imported"""
print("Testing imports...")
try:
from usda_vision_system.core.config import Config
from usda_vision_system.core.state_manager import StateManager
from usda_vision_system.core.events import EventSystem
from usda_vision_system.mqtt.client import MQTTClient
from usda_vision_system.camera.manager import CameraManager
from usda_vision_system.storage.manager import StorageManager
from usda_vision_system.api.server import APIServer
from usda_vision_system.main import USDAVisionSystem
print("✅ All imports successful")
return True
except Exception as e:
print(f"❌ Import failed: {e}")
return False
def test_configuration():
"""Test configuration loading"""
print("\nTesting configuration...")
try:
from usda_vision_system.core.config import Config
# Test default config
config = Config()
print(f"✅ Default config loaded")
print(f" MQTT broker: {config.mqtt.broker_host}:{config.mqtt.broker_port}")
print(f" Storage path: {config.storage.base_path}")
print(f" Cameras configured: {len(config.cameras)}")
# Test config file if it exists
if os.path.exists("config.json"):
config_file = Config("config.json")
print(f"✅ Config file loaded")
return True
except Exception as e:
print(f"❌ Configuration test failed: {e}")
return False
def test_camera_discovery():
"""Test camera discovery"""
print("\nTesting camera discovery...")
try:
sys.path.append("./camera_sdk")
import mvsdk
devices = mvsdk.CameraEnumerateDevice()
print(f"✅ Camera discovery successful")
print(f" Found {len(devices)} camera(s)")
for i, device in enumerate(devices):
try:
name = device.GetFriendlyName()
port_type = device.GetPortType()
print(f" Camera {i}: {name} ({port_type})")
except Exception as e:
print(f" Camera {i}: Error getting info - {e}")
return True
except Exception as e:
print(f"❌ Camera discovery failed: {e}")
print(" Make sure GigE cameras are connected and camera SDK library is available")
return False
def test_storage_setup():
"""Test storage directory setup"""
print("\nTesting storage setup...")
try:
from usda_vision_system.core.config import Config
from usda_vision_system.storage.manager import StorageManager
from usda_vision_system.core.state_manager import StateManager
config = Config()
state_manager = StateManager()
storage_manager = StorageManager(config, state_manager)
# Test storage statistics
stats = storage_manager.get_storage_statistics()
print(f"✅ Storage manager initialized")
print(f" Base path: {stats.get('base_path', 'Unknown')}")
print(f" Total files: {stats.get('total_files', 0)}")
return True
except Exception as e:
print(f"❌ Storage setup failed: {e}")
return False
def test_mqtt_config():
"""Test MQTT configuration (without connecting)"""
print("\nTesting MQTT configuration...")
try:
from usda_vision_system.core.config import Config
from usda_vision_system.mqtt.client import MQTTClient
from usda_vision_system.core.state_manager import StateManager
from usda_vision_system.core.events import EventSystem
config = Config()
state_manager = StateManager()
event_system = EventSystem()
mqtt_client = MQTTClient(config, state_manager, event_system)
status = mqtt_client.get_status()
print(f"✅ MQTT client initialized")
print(f" Broker: {status['broker_host']}:{status['broker_port']}")
print(f" Topics: {len(status['subscribed_topics'])}")
for topic in status["subscribed_topics"]:
print(f" - {topic}")
return True
except Exception as e:
print(f"❌ MQTT configuration test failed: {e}")
return False
def test_system_initialization():
"""Test full system initialization (without starting)"""
print("\nTesting system initialization...")
try:
from usda_vision_system.main import USDAVisionSystem
# Create system instance
system = USDAVisionSystem()
# Check system status
status = system.get_system_status()
print(f"✅ System initialized successfully")
print(f" Running: {status['running']}")
print(f" Components initialized: {len(status['components'])}")
return True
except Exception as e:
print(f"❌ System initialization failed: {e}")
return False
def test_api_endpoints():
"""Test API endpoints if server is running"""
print("\nTesting API endpoints...")
try:
# Test health endpoint
response = requests.get("http://localhost:8000/health", timeout=5)
if response.status_code == 200:
print("✅ API server is running")
# Test system status endpoint
try:
response = requests.get("http://localhost:8000/system/status", timeout=5)
if response.status_code == 200:
data = response.json()
print(f" System started: {data.get('system_started', False)}")
print(f" MQTT connected: {data.get('mqtt_connected', False)}")
print(f" Active recordings: {data.get('active_recordings', 0)}")
else:
print(f"⚠️ System status endpoint returned {response.status_code}")
except Exception as e:
print(f"⚠️ System status test failed: {e}")
return True
else:
print(f"⚠️ API server returned status {response.status_code}")
return False
except requests.exceptions.ConnectionError:
print("⚠️ API server not running (this is OK if system is not started)")
return True
except Exception as e:
print(f"❌ API test failed: {e}")
return False
def main():
"""Run all tests"""
print("USDA Vision Camera System - Test Suite")
print("=" * 50)
tests = [test_imports, test_configuration, test_camera_discovery, test_storage_setup, test_mqtt_config, test_system_initialization, test_api_endpoints]
passed = 0
total = len(tests)
for test in tests:
try:
if test():
passed += 1
except Exception as e:
print(f"❌ Test {test.__name__} crashed: {e}")
print("\n" + "=" * 50)
print(f"Test Results: {passed}/{total} tests passed")
if passed == total:
print("🎉 All tests passed! System appears to be working correctly.")
return 0
else:
print("⚠️ Some tests failed. Check the output above for details.")
return 1
if __name__ == "__main__":
sys.exit(main())

Binary file not shown.

Binary file not shown.

View File

@@ -0,0 +1,4 @@
Log file created at: 2025/07/28 15:32:15
Running on machine: vision
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
E0728 15:32:15.057651 191852 MVCAMAPI.cpp:369] CameraInit Failed, err:32774,Version:2.1.0.49,FriendlyName:Blower-Yield-Cam,SN:054012620023

View File

@@ -0,0 +1,4 @@
Log file created at: 2025/07/28 15:32:33
Running on machine: vision
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
E0728 15:32:33.490923 191852 MVCAMAPI.cpp:369] CameraInit Failed, err:32774,Version:2.1.0.49,FriendlyName:Blower-Yield-Cam,SN:054012620023

View File

@@ -0,0 +1,4 @@
Log file created at: 2025/07/28 15:32:34
Running on machine: vision
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
E0728 15:32:34.649940 191852 MVCAMAPI.cpp:369] CameraInit Failed, err:32774,Version:2.1.0.49,FriendlyName:Blower-Yield-Cam,SN:054012620023

View File

@@ -0,0 +1,4 @@
Log file created at: 2025/07/28 15:32:39
Running on machine: vision
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
E0728 15:32:39.753448 191852 MVCAMAPI.cpp:369] CameraInit Failed, err:32774,Version:2.1.0.49,FriendlyName:Blower-Yield-Cam,SN:054012620023

View File

@@ -0,0 +1,4 @@
Log file created at: 2025/07/28 15:32:45
Running on machine: vision
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
E0728 15:32:45.492905 191852 MVCAMAPI.cpp:369] CameraInit Failed, err:32774,Version:2.1.0.49,FriendlyName:Blower-Yield-Cam,SN:054012620023

View File

@@ -0,0 +1,4 @@
Log file created at: 2025/07/28 15:33:40
Running on machine: vision
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
E0728 15:33:40.702630 191852 MVCAMAPI.cpp:369] CameraInit Failed, err:32774,Version:2.1.0.49,FriendlyName:Blower-Yield-Cam,SN:054012620023

View File

@@ -0,0 +1,4 @@
Log file created at: 2025/07/28 15:34:18
Running on machine: vision
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
E0728 15:34:18.442386 191852 MVCAMAPI.cpp:369] CameraInit Failed, err:32774,Version:2.1.0.49,FriendlyName:Blower-Yield-Cam,SN:054012620023

View File

@@ -0,0 +1,4 @@
Log file created at: 2025/07/28 15:34:28
Running on machine: vision
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
E0728 15:34:28.207051 191852 MVCAMAPI.cpp:369] CameraInit Failed, err:32774,Version:2.1.0.49,FriendlyName:Blower-Yield-Cam,SN:054012620023

View File

@@ -0,0 +1,4 @@
Log file created at: 2025/07/28 15:34:53
Running on machine: vision
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
E0728 15:34:53.315912 191852 MVCAMAPI.cpp:369] CameraInit Failed, err:32774,Version:2.1.0.49,FriendlyName:Blower-Yield-Cam,SN:054012620023

View File

@@ -0,0 +1,4 @@
Log file created at: 2025/07/28 15:35:00
Running on machine: vision
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
E0728 15:35:00.929268 191852 MVCAMAPI.cpp:369] CameraInit Failed, err:32774,Version:2.1.0.49,FriendlyName:Blower-Yield-Cam,SN:054012620023

View File

@@ -0,0 +1,4 @@
Log file created at: 2025/07/28 15:35:32
Running on machine: vision
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
E0728 15:35:32.169682 191852 MVCAMAPI.cpp:369] CameraInit Failed, err:32774,Version:2.1.0.49,FriendlyName:Blower-Yield-Cam,SN:054012620023

View File

@@ -0,0 +1,4 @@
Log file created at: 2025/07/28 15:35:34
Running on machine: vision
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
E0728 15:35:34.519351 191852 MVCAMAPI.cpp:369] CameraInit Failed, err:32774,Version:2.1.0.49,FriendlyName:Blower-Yield-Cam,SN:054012620023

View File

@@ -0,0 +1,291 @@
# coding=utf-8
"""
Simple GigE Camera Capture Script
Captures 10 images every 200 milliseconds and saves them to the images directory.
"""
import os
import time
import numpy as np
import cv2
import platform
from datetime import datetime
import sys
sys.path.append("./python demo")
import mvsdk
def is_camera_ready_for_capture():
"""
Check if camera is ready for capture.
Returns: (ready: bool, message: str, camera_info: object or None)
"""
try:
# Initialize SDK
mvsdk.CameraSdkInit(1)
# Enumerate cameras
DevList = mvsdk.CameraEnumerateDevice()
if len(DevList) < 1:
return False, "No cameras found", None
DevInfo = DevList[0]
# Check if already opened
try:
if mvsdk.CameraIsOpened(DevInfo):
return False, f"Camera '{DevInfo.GetFriendlyName()}' is already opened by another process", DevInfo
except:
pass # Some cameras might not support this check
# Try to initialize
try:
hCamera = mvsdk.CameraInit(DevInfo, -1, -1)
# Quick capture test
try:
# Basic setup
mvsdk.CameraSetTriggerMode(hCamera, 0)
mvsdk.CameraPlay(hCamera)
# Try to get one frame with short timeout
pRawData, FrameHead = mvsdk.CameraGetImageBuffer(hCamera, 500) # 0.5 second timeout
mvsdk.CameraReleaseImageBuffer(hCamera, pRawData)
# Success - close and return
mvsdk.CameraUnInit(hCamera)
return True, f"Camera '{DevInfo.GetFriendlyName()}' is ready for capture", DevInfo
except mvsdk.CameraException as e:
mvsdk.CameraUnInit(hCamera)
if e.error_code == mvsdk.CAMERA_STATUS_TIME_OUT:
return False, "Camera timeout - may be busy or not streaming properly", DevInfo
else:
return False, f"Camera capture test failed: {e.message}", DevInfo
except mvsdk.CameraException as e:
if e.error_code == mvsdk.CAMERA_STATUS_DEVICE_IS_OPENED:
return False, f"Camera '{DevInfo.GetFriendlyName()}' is already in use", DevInfo
elif e.error_code == mvsdk.CAMERA_STATUS_ACCESS_DENY:
return False, f"Access denied to camera '{DevInfo.GetFriendlyName()}'", DevInfo
else:
return False, f"Camera initialization failed: {e.message}", DevInfo
except Exception as e:
return False, f"Camera check failed: {str(e)}", None
def get_camera_ranges(hCamera):
"""
Get the available ranges for camera settings
"""
try:
# Get exposure time range
exp_min, exp_max, exp_step = mvsdk.CameraGetExposureTimeRange(hCamera)
print(f"Exposure time range: {exp_min:.1f} - {exp_max:.1f} μs (step: {exp_step:.1f})")
# Get analog gain range
gain_min, gain_max, gain_step = mvsdk.CameraGetAnalogGainXRange(hCamera)
print(f"Analog gain range: {gain_min:.2f} - {gain_max:.2f}x (step: {gain_step:.3f})")
return (exp_min, exp_max, exp_step), (gain_min, gain_max, gain_step)
except Exception as e:
print(f"Could not get camera ranges: {e}")
return None, None
def capture_images(exposure_time_us=2000, analog_gain=1.0):
"""
Main function to capture images from GigE camera
Parameters:
- exposure_time_us: Exposure time in microseconds (default: 2000 = 2ms)
- analog_gain: Analog gain multiplier (default: 1.0)
"""
# Check if camera is ready for capture
print("Checking camera availability...")
ready, message, camera_info = is_camera_ready_for_capture()
if not ready:
print(f"❌ Camera not ready: {message}")
print("\nPossible solutions:")
print("- Close any other camera applications (preview software, etc.)")
print("- Check camera connection and power")
print("- Wait a moment and try again")
return False
print(f"{message}")
# Initialize SDK (already done in status check, but ensure it's ready)
try:
mvsdk.CameraSdkInit(1) # Initialize SDK with English language
except Exception as e:
print(f"SDK initialization failed: {e}")
return False
# Enumerate cameras
DevList = mvsdk.CameraEnumerateDevice()
nDev = len(DevList)
if nDev < 1:
print("No camera was found!")
return False
print(f"Found {nDev} camera(s):")
for i, DevInfo in enumerate(DevList):
print(f"{i}: {DevInfo.GetFriendlyName()} {DevInfo.GetPortType()}")
# Select camera (use first one if only one available)
camera_index = 0 if nDev == 1 else int(input("Select camera index: "))
DevInfo = DevList[camera_index]
print(f"Selected camera: {DevInfo.GetFriendlyName()}")
# Initialize camera
hCamera = 0
try:
hCamera = mvsdk.CameraInit(DevInfo, -1, -1)
print("Camera initialized successfully")
except mvsdk.CameraException as e:
print(f"CameraInit Failed({e.error_code}): {e.message}")
return False
try:
# Get camera capabilities
cap = mvsdk.CameraGetCapability(hCamera)
# Check if it's a mono or color camera
monoCamera = cap.sIspCapacity.bMonoSensor != 0
print(f"Camera type: {'Monochrome' if monoCamera else 'Color'}")
# Get camera ranges
exp_range, gain_range = get_camera_ranges(hCamera)
# Set output format
if monoCamera:
mvsdk.CameraSetIspOutFormat(hCamera, mvsdk.CAMERA_MEDIA_TYPE_MONO8)
else:
mvsdk.CameraSetIspOutFormat(hCamera, mvsdk.CAMERA_MEDIA_TYPE_BGR8)
# Set camera to continuous capture mode
mvsdk.CameraSetTriggerMode(hCamera, 0)
# Set manual exposure with improved control
mvsdk.CameraSetAeState(hCamera, 0) # Disable auto exposure
# Clamp exposure time to valid range
if exp_range:
exp_min, exp_max, exp_step = exp_range
exposure_time_us = max(exp_min, min(exp_max, exposure_time_us))
mvsdk.CameraSetExposureTime(hCamera, exposure_time_us)
print(f"Set exposure time: {exposure_time_us/1000:.1f}ms")
# Set analog gain
if gain_range:
gain_min, gain_max, gain_step = gain_range
analog_gain = max(gain_min, min(gain_max, analog_gain))
try:
mvsdk.CameraSetAnalogGainX(hCamera, analog_gain)
print(f"Set analog gain: {analog_gain:.2f}x")
except Exception as e:
print(f"Could not set analog gain: {e}")
# Start camera
mvsdk.CameraPlay(hCamera)
print("Camera started")
# Calculate frame buffer size
FrameBufferSize = cap.sResolutionRange.iWidthMax * cap.sResolutionRange.iHeightMax * (1 if monoCamera else 3)
# Allocate frame buffer
pFrameBuffer = mvsdk.CameraAlignMalloc(FrameBufferSize, 16)
# Create images directory if it doesn't exist
if not os.path.exists("images"):
os.makedirs("images")
print("Starting image capture...")
print("Capturing 10 images with 200ms intervals...")
# Capture 10 images
for i in range(10):
try:
# Get image from camera (timeout: 2000ms)
pRawData, FrameHead = mvsdk.CameraGetImageBuffer(hCamera, 2000)
# Process the raw image data
mvsdk.CameraImageProcess(hCamera, pRawData, pFrameBuffer, FrameHead)
# Release the raw data buffer
mvsdk.CameraReleaseImageBuffer(hCamera, pRawData)
# Handle Windows image flip (images are upside down on Windows)
if platform.system() == "Windows":
mvsdk.CameraFlipFrameBuffer(pFrameBuffer, FrameHead, 1)
# Convert to numpy array for OpenCV
frame_data = (mvsdk.c_ubyte * FrameHead.uBytes).from_address(pFrameBuffer)
frame = np.frombuffer(frame_data, dtype=np.uint8)
# Reshape based on camera type
if FrameHead.uiMediaType == mvsdk.CAMERA_MEDIA_TYPE_MONO8:
frame = frame.reshape((FrameHead.iHeight, FrameHead.iWidth))
else:
frame = frame.reshape((FrameHead.iHeight, FrameHead.iWidth, 3))
# Generate filename with timestamp
timestamp = datetime.now().strftime("%Y%m%d_%H%M%S_%f")[:-3] # milliseconds
filename = f"images/image_{i+1:02d}_{timestamp}.jpg"
# Save image using OpenCV
success = cv2.imwrite(filename, frame)
if success:
print(f"Image {i+1}/10 saved: {filename} ({FrameHead.iWidth}x{FrameHead.iHeight})")
else:
print(f"Failed to save image {i+1}/10")
# Wait 200ms before next capture (except for the last image)
if i < 9:
time.sleep(0.2)
except mvsdk.CameraException as e:
print(f"Failed to capture image {i+1}/10 ({e.error_code}): {e.message}")
continue
print("Image capture completed!")
# Cleanup
mvsdk.CameraAlignFree(pFrameBuffer)
finally:
# Close camera
mvsdk.CameraUnInit(hCamera)
print("Camera closed")
return True
if __name__ == "__main__":
print("GigE Camera Image Capture Script")
print("=" * 40)
print("Note: If images are overexposed, you can adjust the exposure settings:")
print("- Lower exposure_time_us for darker images (e.g., 1000-5000)")
print("- Lower analog_gain for less amplification (e.g., 0.5-2.0)")
print()
# You can adjust these values to fix overexposure:
# for cracker: 6ms exposure, 16x gain
success = capture_images(exposure_time_us=6000, analog_gain=16.0)
# for blower: 1ms exposure, 3.5x gain (runs after the cracker capture; comment one out if only one is needed)
success = capture_images(exposure_time_us=1000, analog_gain=3.5)
if success:
print("\nCapture completed successfully!")
print("Images saved in the 'images' directory")
else:
print("\nCapture failed!")
input("Press Enter to exit...")

View File

@@ -0,0 +1,439 @@
# coding=utf-8
import cv2
import numpy as np
import platform
import time
import threading
from datetime import datetime
import os
import sys
# Add the python demo directory to path to import mvsdk
sys.path.append("python demo")
import mvsdk
class CameraVideoRecorder:
def __init__(self):
self.hCamera = 0
self.pFrameBuffer = 0
self.cap = None
self.monoCamera = False
self.recording = False
self.video_writer = None
self.frame_count = 0
self.start_time = None
def list_cameras(self):
"""List all available cameras"""
try:
# Initialize SDK
mvsdk.CameraSdkInit(1)
except Exception as e:
print(f"SDK initialization failed: {e}")
return []
# Enumerate cameras
DevList = mvsdk.CameraEnumerateDevice()
nDev = len(DevList)
if nDev < 1:
print("No cameras found!")
return []
print(f"\nFound {nDev} camera(s):")
cameras = []
for i, DevInfo in enumerate(DevList):
camera_info = {"index": i, "name": DevInfo.GetFriendlyName(), "port_type": DevInfo.GetPortType(), "serial": DevInfo.GetSn(), "dev_info": DevInfo}
cameras.append(camera_info)
print(f"{i}: {camera_info['name']} ({camera_info['port_type']}) - SN: {camera_info['serial']}")
return cameras
def initialize_camera(self, dev_info, exposure_ms=1.0, gain=3.5, target_fps=3.0):
"""Initialize camera with specified settings"""
self.target_fps = target_fps
try:
# Initialize camera
self.hCamera = mvsdk.CameraInit(dev_info, -1, -1)
print(f"Camera initialized successfully")
# Get camera capabilities
self.cap = mvsdk.CameraGetCapability(self.hCamera)
self.monoCamera = self.cap.sIspCapacity.bMonoSensor != 0
print(f"Camera type: {'Monochrome' if self.monoCamera else 'Color'}")
# Set output format
if self.monoCamera:
mvsdk.CameraSetIspOutFormat(self.hCamera, mvsdk.CAMERA_MEDIA_TYPE_MONO8)
else:
mvsdk.CameraSetIspOutFormat(self.hCamera, mvsdk.CAMERA_MEDIA_TYPE_BGR8)
# Calculate RGB buffer size
FrameBufferSize = self.cap.sResolutionRange.iWidthMax * self.cap.sResolutionRange.iHeightMax * (1 if self.monoCamera else 3)
# Allocate RGB buffer
self.pFrameBuffer = mvsdk.CameraAlignMalloc(FrameBufferSize, 16)
# Set camera to continuous capture mode
mvsdk.CameraSetTriggerMode(self.hCamera, 0)
# Set manual exposure
mvsdk.CameraSetAeState(self.hCamera, 0) # Disable auto exposure
exposure_time_us = exposure_ms * 1000 # Convert ms to microseconds
# Get exposure range and clamp value
try:
exp_min, exp_max, exp_step = mvsdk.CameraGetExposureTimeRange(self.hCamera)
exposure_time_us = max(exp_min, min(exp_max, exposure_time_us))
print(f"Exposure range: {exp_min:.1f} - {exp_max:.1f} μs")
except Exception as e:
print(f"Could not get exposure range: {e}")
mvsdk.CameraSetExposureTime(self.hCamera, exposure_time_us)
print(f"Set exposure time: {exposure_time_us/1000:.1f}ms")
# Set analog gain
try:
gain_min, gain_max, gain_step = mvsdk.CameraGetAnalogGainXRange(self.hCamera)
gain = max(gain_min, min(gain_max, gain))
mvsdk.CameraSetAnalogGainX(self.hCamera, gain)
print(f"Set analog gain: {gain:.2f}x (range: {gain_min:.2f} - {gain_max:.2f})")
except Exception as e:
print(f"Could not set analog gain: {e}")
# Start camera
mvsdk.CameraPlay(self.hCamera)
print("Camera started successfully")
return True
except mvsdk.CameraException as e:
print(f"Camera initialization failed({e.error_code}): {e.message}")
return False
def start_recording(self, output_filename=None):
"""Start video recording"""
if self.recording:
print("Already recording!")
return False
if not output_filename:
timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
output_filename = f"video_{timestamp}.avi"
# Create output directory if it doesn't exist
os.makedirs(os.path.dirname(output_filename) if os.path.dirname(output_filename) else ".", exist_ok=True)
# Get first frame to determine video properties
try:
pRawData, FrameHead = mvsdk.CameraGetImageBuffer(self.hCamera, 2000)
mvsdk.CameraImageProcess(self.hCamera, pRawData, self.pFrameBuffer, FrameHead)
mvsdk.CameraReleaseImageBuffer(self.hCamera, pRawData)
# Handle Windows frame flipping
if platform.system() == "Windows":
mvsdk.CameraFlipFrameBuffer(self.pFrameBuffer, FrameHead, 1)
# Convert to numpy array
frame_data = (mvsdk.c_ubyte * FrameHead.uBytes).from_address(self.pFrameBuffer)
frame = np.frombuffer(frame_data, dtype=np.uint8)
if self.monoCamera:
frame = frame.reshape((FrameHead.iHeight, FrameHead.iWidth))
# Convert mono to BGR for video writer
frame = cv2.cvtColor(frame, cv2.COLOR_GRAY2BGR)
else:
frame = frame.reshape((FrameHead.iHeight, FrameHead.iWidth, 3))
except mvsdk.CameraException as e:
print(f"Failed to get initial frame: {e.message}")
return False
# Initialize video writer
fourcc = cv2.VideoWriter_fourcc(*"XVID")
fps = getattr(self, "target_fps", 3.0) # Use configured FPS or default to 3.0
frame_size = (FrameHead.iWidth, FrameHead.iHeight)
self.video_writer = cv2.VideoWriter(output_filename, fourcc, fps, frame_size)
if not self.video_writer.isOpened():
print(f"Failed to open video writer for {output_filename}")
return False
self.recording = True
self.frame_count = 0
self.start_time = time.time()
self.output_filename = output_filename
print(f"Started recording to: {output_filename}")
print(f"Frame size: {frame_size}, FPS: {fps}")
print("Press 'q' to stop recording...")
return True
def stop_recording(self):
"""Stop video recording"""
if not self.recording:
print("Not currently recording!")
return False
self.recording = False
if self.video_writer:
self.video_writer.release()
self.video_writer = None
duration = time.time() - self.start_time if self.start_time else 0
avg_fps = self.frame_count / duration if duration > 0 else 0
print(f"\nRecording stopped!")
print(f"Saved: {self.output_filename}")
print(f"Frames recorded: {self.frame_count}")
print(f"Duration: {duration:.1f} seconds")
print(f"Average FPS: {avg_fps:.1f}")
return True
def record_loop(self):
"""Main recording loop"""
if not self.recording:
return
print("Recording... Press 'q' in the preview window to stop")
while self.recording:
try:
# Get frame from camera
pRawData, FrameHead = mvsdk.CameraGetImageBuffer(self.hCamera, 200)
mvsdk.CameraImageProcess(self.hCamera, pRawData, self.pFrameBuffer, FrameHead)
mvsdk.CameraReleaseImageBuffer(self.hCamera, pRawData)
# Handle Windows frame flipping
if platform.system() == "Windows":
mvsdk.CameraFlipFrameBuffer(self.pFrameBuffer, FrameHead, 1)
# Convert to numpy array
frame_data = (mvsdk.c_ubyte * FrameHead.uBytes).from_address(self.pFrameBuffer)
frame = np.frombuffer(frame_data, dtype=np.uint8)
if self.monoCamera:
frame = frame.reshape((FrameHead.iHeight, FrameHead.iWidth))
frame_bgr = cv2.cvtColor(frame, cv2.COLOR_GRAY2BGR)
else:
frame = frame.reshape((FrameHead.iHeight, FrameHead.iWidth, 3))
frame_bgr = frame
# Write every captured frame to video (the capture rate itself is paced below to match target FPS)
if self.video_writer and self.recording:
self.video_writer.write(frame_bgr)
self.frame_count += 1
# Show preview (resized for display)
display_frame = cv2.resize(frame_bgr, (640, 480), interpolation=cv2.INTER_LINEAR)
# Add small delay to control capture rate based on target FPS
target_fps = getattr(self, "target_fps", 3.0)
time.sleep(1.0 / target_fps)
# Add recording indicator
cv2.putText(display_frame, f"REC - Frame: {self.frame_count}", (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)
cv2.imshow("Camera Recording - Press 'q' to stop", display_frame)
# Check for quit key
if cv2.waitKey(1) & 0xFF == ord("q"):
self.stop_recording()
break
except mvsdk.CameraException as e:
if e.error_code != mvsdk.CAMERA_STATUS_TIME_OUT:
print(f"Camera error: {e.message}")
break
def cleanup(self):
"""Clean up resources"""
if self.recording:
self.stop_recording()
if self.video_writer:
self.video_writer.release()
if self.hCamera > 0:
mvsdk.CameraUnInit(self.hCamera)
self.hCamera = 0
if self.pFrameBuffer:
mvsdk.CameraAlignFree(self.pFrameBuffer)
self.pFrameBuffer = 0
cv2.destroyAllWindows()
def interactive_menu():
"""Interactive menu for camera operations"""
recorder = CameraVideoRecorder()
try:
# List available cameras
cameras = recorder.list_cameras()
if not cameras:
return
# Select camera
if len(cameras) == 1:
selected_camera = cameras[0]
print(f"\nUsing camera: {selected_camera['name']}")
else:
while True:
try:
choice = int(input(f"\nSelect camera (0-{len(cameras)-1}): "))
if 0 <= choice < len(cameras):
selected_camera = cameras[choice]
break
else:
print("Invalid selection!")
except ValueError:
print("Please enter a valid number!")
# Get camera settings from user
print(f"\nCamera Settings:")
try:
exposure = float(input("Enter exposure time in ms (default 1.0): ") or "1.0")
gain = float(input("Enter gain value (default 3.5): ") or "3.5")
fps = float(input("Enter target FPS (default 3.0): ") or "3.0")
except ValueError:
print("Using default values: exposure=1.0ms, gain=3.5x, fps=3.0")
exposure, gain, fps = 1.0, 3.5, 3.0
# Initialize camera with specified settings
print(f"\nInitializing camera with:")
print(f"- Exposure: {exposure}ms")
print(f"- Gain: {gain}x")
print(f"- Target FPS: {fps}")
if not recorder.initialize_camera(selected_camera["dev_info"], exposure_ms=exposure, gain=gain, target_fps=fps):
return
# Menu loop
while True:
print(f"\n{'='*50}")
print("Camera Video Recorder Menu")
print(f"{'='*50}")
print("1. Start Recording")
print("2. List Camera Info")
print("3. Test Camera (Live Preview)")
print("4. Exit")
try:
choice = input("\nSelect option (1-4): ").strip()
if choice == "1":
# Start recording
timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
output_file = f"videos/camera_recording_{timestamp}.avi"
# Create videos directory
os.makedirs("videos", exist_ok=True)
if recorder.start_recording(output_file):
recorder.record_loop()
elif choice == "2":
# Show camera info
print(f"\nCamera Information:")
print(f"Name: {selected_camera['name']}")
print(f"Port Type: {selected_camera['port_type']}")
print(f"Serial Number: {selected_camera['serial']}")
print(f"Type: {'Monochrome' if recorder.monoCamera else 'Color'}")
elif choice == "3":
# Live preview
print("\nLive Preview - Press 'q' to stop")
preview_loop(recorder)
elif choice == "4":
print("Exiting...")
break
else:
print("Invalid option! Please select 1-4.")
except KeyboardInterrupt:
print("\nReturning to menu...")
continue
except KeyboardInterrupt:
print("\nInterrupted by user")
except Exception as e:
print(f"Error: {e}")
import traceback
traceback.print_exc()
finally:
recorder.cleanup()
print("Cleanup completed")
def preview_loop(recorder):
"""Live preview without recording"""
print("Live preview mode - Press 'q' to return to menu")
while True:
try:
# Get frame from camera
pRawData, FrameHead = mvsdk.CameraGetImageBuffer(recorder.hCamera, 200)
mvsdk.CameraImageProcess(recorder.hCamera, pRawData, recorder.pFrameBuffer, FrameHead)
mvsdk.CameraReleaseImageBuffer(recorder.hCamera, pRawData)
# Handle Windows frame flipping
if platform.system() == "Windows":
mvsdk.CameraFlipFrameBuffer(recorder.pFrameBuffer, FrameHead, 1)
# Convert to numpy array
frame_data = (mvsdk.c_ubyte * FrameHead.uBytes).from_address(recorder.pFrameBuffer)
frame = np.frombuffer(frame_data, dtype=np.uint8)
if recorder.monoCamera:
frame = frame.reshape((FrameHead.iHeight, FrameHead.iWidth))
frame_bgr = cv2.cvtColor(frame, cv2.COLOR_GRAY2BGR)
else:
frame = frame.reshape((FrameHead.iHeight, FrameHead.iWidth, 3))
frame_bgr = frame
# Show preview (resized for display)
display_frame = cv2.resize(frame_bgr, (640, 480), interpolation=cv2.INTER_LINEAR)
# Add info overlay
cv2.putText(display_frame, f"PREVIEW - {FrameHead.iWidth}x{FrameHead.iHeight}", (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 0), 2)
cv2.putText(display_frame, "Press 'q' to return to menu", (10, display_frame.shape[0] - 20), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255), 1)
cv2.imshow("Camera Preview", display_frame)
# Check for quit key
if cv2.waitKey(1) & 0xFF == ord("q"):
cv2.destroyWindow("Camera Preview")
break
except mvsdk.CameraException as e:
if e.error_code != mvsdk.CAMERA_STATUS_TIME_OUT:
print(f"Camera error: {e.message}")
break
def main():
print("Camera Video Recorder")
print("====================")
print("This script allows you to:")
print("- List all available cameras")
print("- Record videos with custom exposure (1ms), gain (3.5x), and FPS (3.0) settings")
print("- Save videos with timestamps")
print("- Stop recording anytime with 'q' key")
print()
interactive_menu()
if __name__ == "__main__":
main()

View File

@@ -0,0 +1,6 @@
def main():
print("Hello from usda-vision-cameras!")
if __name__ == "__main__":
main()

View File

@@ -0,0 +1,197 @@
#coding=utf-8
"""
Test script to help find optimal exposure settings for your GigE camera.
This script captures a single test image with different exposure settings.
"""
import os
import sys
import mvsdk
import numpy as np
import cv2
import platform
import time
from datetime import datetime
# Add the python demo directory to path
sys.path.append('./python demo')
def test_exposure_settings():
"""
Test different exposure settings to find optimal values
"""
# Initialize SDK
try:
mvsdk.CameraSdkInit(1)
print("SDK initialized successfully")
except Exception as e:
print(f"SDK initialization failed: {e}")
return False
# Enumerate cameras
DevList = mvsdk.CameraEnumerateDevice()
nDev = len(DevList)
if nDev < 1:
print("No camera was found!")
return False
print(f"Found {nDev} camera(s):")
for i, DevInfo in enumerate(DevList):
print(f" {i}: {DevInfo.GetFriendlyName()} ({DevInfo.GetPortType()})")
# Use first camera
DevInfo = DevList[0]
print(f"\nSelected camera: {DevInfo.GetFriendlyName()}")
# Initialize camera
try:
hCamera = mvsdk.CameraInit(DevInfo, -1, -1)
print("Camera initialized successfully")
except mvsdk.CameraException as e:
print(f"CameraInit Failed({e.error_code}): {e.message}")
return False
try:
# Get camera capabilities
cap = mvsdk.CameraGetCapability(hCamera)
monoCamera = (cap.sIspCapacity.bMonoSensor != 0)
print(f"Camera type: {'Monochrome' if monoCamera else 'Color'}")
# Get camera ranges
try:
exp_min, exp_max, exp_step = mvsdk.CameraGetExposureTimeRange(hCamera)
print(f"Exposure time range: {exp_min:.1f} - {exp_max:.1f} μs")
gain_min, gain_max, gain_step = mvsdk.CameraGetAnalogGainXRange(hCamera)
print(f"Analog gain range: {gain_min:.2f} - {gain_max:.2f}x")
except Exception as e:
print(f"Could not get camera ranges: {e}")
exp_min, exp_max = 100, 100000
gain_min, gain_max = 1.0, 4.0
# Set output format
if monoCamera:
mvsdk.CameraSetIspOutFormat(hCamera, mvsdk.CAMERA_MEDIA_TYPE_MONO8)
else:
mvsdk.CameraSetIspOutFormat(hCamera, mvsdk.CAMERA_MEDIA_TYPE_BGR8)
# Set camera to continuous capture mode
mvsdk.CameraSetTriggerMode(hCamera, 0)
mvsdk.CameraSetAeState(hCamera, 0) # Disable auto exposure
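        # Trigger mode 0 = free-running continuous capture; with auto-exposure off,
        # the exposure/gain values set in the test loop below take effect as-is.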
# Start camera
mvsdk.CameraPlay(hCamera)
# Allocate frame buffer
FrameBufferSize = cap.sResolutionRange.iWidthMax * cap.sResolutionRange.iHeightMax * (1 if monoCamera else 3)
pFrameBuffer = mvsdk.CameraAlignMalloc(FrameBufferSize, 16)
# Create test directory
if not os.path.exists("exposure_tests"):
os.makedirs("exposure_tests")
print("\nTesting different exposure settings...")
print("=" * 50)
# Test different exposure times (in microseconds)
exposure_times = [500, 1000, 2000, 5000, 10000, 20000] # 0.5ms to 20ms
analog_gains = [1.0] # Start with 1x gain
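        # These two lists form a small parameter grid; extend analog_gains
        # (e.g., [1.0, 2.0, 4.0]) to sweep gain as well as exposure.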
test_count = 0
for exp_time in exposure_times:
for gain in analog_gains:
# Clamp values to valid ranges
exp_time = max(exp_min, min(exp_max, exp_time))
gain = max(gain_min, min(gain_max, gain))
print(f"\nTest {test_count + 1}: Exposure={exp_time/1000:.1f}ms, Gain={gain:.1f}x")
# Set camera parameters
mvsdk.CameraSetExposureTime(hCamera, exp_time)
try:
mvsdk.CameraSetAnalogGainX(hCamera, gain)
                except mvsdk.CameraException:
                    pass  # Some cameras might not support analog gain control
                # Wait a moment for the new settings to take effect
                time.sleep(0.1)
# Capture image
try:
pRawData, FrameHead = mvsdk.CameraGetImageBuffer(hCamera, 2000)
mvsdk.CameraImageProcess(hCamera, pRawData, pFrameBuffer, FrameHead)
mvsdk.CameraReleaseImageBuffer(hCamera, pRawData)
# Handle Windows image flip
if platform.system() == "Windows":
mvsdk.CameraFlipFrameBuffer(pFrameBuffer, FrameHead, 1)
# Convert to numpy array
frame_data = (mvsdk.c_ubyte * FrameHead.uBytes).from_address(pFrameBuffer)
frame = np.frombuffer(frame_data, dtype=np.uint8)
if FrameHead.uiMediaType == mvsdk.CAMERA_MEDIA_TYPE_MONO8:
frame = frame.reshape((FrameHead.iHeight, FrameHead.iWidth))
else:
frame = frame.reshape((FrameHead.iHeight, FrameHead.iWidth, 3))
# Calculate image statistics
mean_brightness = np.mean(frame)
max_brightness = np.max(frame)
# Save image
filename = f"exposure_tests/test_{test_count+1:02d}_exp{exp_time/1000:.1f}ms_gain{gain:.1f}x.jpg"
cv2.imwrite(filename, frame)
# Provide feedback
status = ""
if mean_brightness < 50:
status = "TOO DARK"
elif mean_brightness > 200:
status = "TOO BRIGHT"
elif max_brightness >= 255:
status = "OVEREXPOSED"
else:
status = "GOOD"
print(f" → Saved: {filename}")
print(f" → Brightness: mean={mean_brightness:.1f}, max={max_brightness:.1f} [{status}]")
test_count += 1
except mvsdk.CameraException as e:
print(f" → Failed to capture: {e.message}")
print(f"\nCompleted {test_count} test captures!")
print("Check the 'exposure_tests' directory to see the results.")
print("\nRecommendations:")
print("- Look for images marked as 'GOOD' - these have optimal exposure")
print("- If all images are 'TOO BRIGHT', try lower exposure times or gains")
print("- If all images are 'TOO DARK', try higher exposure times or gains")
print("- Avoid 'OVEREXPOSED' images as they have clipped highlights")
# Cleanup
mvsdk.CameraAlignFree(pFrameBuffer)
finally:
# Close camera
mvsdk.CameraUnInit(hCamera)
print("\nCamera closed")
return True
if __name__ == "__main__":
print("GigE Camera Exposure Test Script")
print("=" * 40)
print("This script will test different exposure settings and save sample images.")
print("Use this to find the optimal settings for your lighting conditions.")
print()
success = test_exposure_settings()
if success:
print("\nTesting completed successfully!")
else:
print("\nTesting failed!")
input("Press Enter to exit...")

View File

@@ -0,0 +1,117 @@
#!/usr/bin/env python3
"""
Test script to demonstrate enhanced MQTT logging and API endpoints.
This script shows:
1. Enhanced console logging for MQTT events
2. New MQTT status API endpoint
3. Machine status API endpoint
"""
import sys
import os
import time
import requests
import json
from datetime import datetime
# Add the current directory to Python path
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))
def test_api_endpoints():
"""Test the API endpoints for MQTT and machine status"""
base_url = "http://localhost:8000"
print("🧪 Testing API Endpoints...")
print("=" * 50)
# Test system status
try:
print("\n📊 System Status:")
response = requests.get(f"{base_url}/system/status", timeout=5)
if response.status_code == 200:
data = response.json()
print(f" System Started: {data.get('system_started')}")
print(f" MQTT Connected: {data.get('mqtt_connected')}")
print(f" Last MQTT Message: {data.get('last_mqtt_message')}")
print(f" Active Recordings: {data.get('active_recordings')}")
print(f" Total Recordings: {data.get('total_recordings')}")
else:
print(f" ❌ Error: {response.status_code}")
except Exception as e:
print(f" ❌ Connection Error: {e}")
# Test MQTT status
try:
print("\n📡 MQTT Status:")
response = requests.get(f"{base_url}/mqtt/status", timeout=5)
if response.status_code == 200:
data = response.json()
print(f" Connected: {data.get('connected')}")
print(f" Broker: {data.get('broker_host')}:{data.get('broker_port')}")
print(f" Message Count: {data.get('message_count')}")
print(f" Error Count: {data.get('error_count')}")
print(f" Last Message: {data.get('last_message_time')}")
print(f" Uptime: {data.get('uptime_seconds'):.1f}s" if data.get('uptime_seconds') else " Uptime: N/A")
print(f" Subscribed Topics:")
for topic in data.get('subscribed_topics', []):
print(f" - {topic}")
else:
print(f" ❌ Error: {response.status_code}")
except Exception as e:
print(f" ❌ Connection Error: {e}")
# Test machine status
try:
print("\n🏭 Machine Status:")
response = requests.get(f"{base_url}/machines", timeout=5)
if response.status_code == 200:
data = response.json()
if data:
for machine_name, machine_info in data.items():
print(f" {machine_name}:")
print(f" State: {machine_info.get('state')}")
print(f" Last Updated: {machine_info.get('last_updated')}")
print(f" Last Message: {machine_info.get('last_message')}")
print(f" MQTT Topic: {machine_info.get('mqtt_topic')}")
else:
print(" No machines found")
else:
print(f" ❌ Error: {response.status_code}")
except Exception as e:
print(f" ❌ Connection Error: {e}")
def main():
"""Main test function"""
print("🔍 MQTT Logging and API Test")
print("=" * 50)
print()
print("This script tests the enhanced MQTT logging and new API endpoints.")
print("Make sure the USDA Vision System is running before testing.")
print()
# Wait a moment
time.sleep(1)
# Test API endpoints
test_api_endpoints()
print("\n" + "=" * 50)
print("✅ Test completed!")
print()
print("📝 What to expect when running the system:")
print(" 🔗 MQTT CONNECTED: [broker_host:port]")
print(" 📋 MQTT SUBSCRIBED: [machine] → [topic]")
print(" 📡 MQTT MESSAGE: [machine] → [payload]")
print(" ⚠️ MQTT DISCONNECTED: [reason]")
print()
print("🌐 API Endpoints available:")
print(" GET /system/status - Overall system status")
print(" GET /mqtt/status - MQTT client status and statistics")
print(" GET /machines - All machine states from MQTT")
print(" GET /cameras - Camera statuses")
print()
print("💡 To see live MQTT logs, run: python main.py")
if __name__ == "__main__":
main()

View File

@@ -0,0 +1,267 @@
#!/usr/bin/env python3
"""
Test script for auto-recording functionality.
This script tests the auto-recording feature by simulating MQTT state changes
and verifying that cameras start and stop recording automatically.
"""
import sys
import os
import time
import json
import requests
from datetime import datetime
# Add the parent directory to Python path
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from usda_vision_system.core.config import Config
from usda_vision_system.core.state_manager import StateManager
from usda_vision_system.core.events import EventSystem, publish_machine_state_changed
class AutoRecordingTester:
"""Test class for auto-recording functionality"""
def __init__(self):
self.api_base_url = "http://localhost:8000"
self.config = Config("config.json")
self.state_manager = StateManager()
self.event_system = EventSystem()
# Test results
self.test_results = []
def log_test(self, test_name: str, success: bool, message: str = ""):
"""Log a test result"""
status = "✅ PASS" if success else "❌ FAIL"
timestamp = datetime.now().strftime("%H:%M:%S")
result = f"[{timestamp}] {status} {test_name}"
if message:
result += f" - {message}"
print(result)
self.test_results.append({
"test_name": test_name,
"success": success,
"message": message,
"timestamp": timestamp
})
def check_api_available(self) -> bool:
"""Check if the API server is available"""
try:
response = requests.get(f"{self.api_base_url}/cameras", timeout=5)
return response.status_code == 200
except Exception:
return False
def get_camera_status(self, camera_name: str) -> dict:
"""Get camera status from API"""
try:
response = requests.get(f"{self.api_base_url}/cameras", timeout=5)
if response.status_code == 200:
cameras = response.json()
return cameras.get(camera_name, {})
except Exception as e:
print(f"Error getting camera status: {e}")
return {}
def get_auto_recording_status(self) -> dict:
"""Get auto-recording manager status"""
try:
response = requests.get(f"{self.api_base_url}/auto-recording/status", timeout=5)
if response.status_code == 200:
return response.json()
except Exception as e:
print(f"Error getting auto-recording status: {e}")
return {}
def enable_auto_recording(self, camera_name: str) -> bool:
"""Enable auto-recording for a camera"""
try:
response = requests.post(f"{self.api_base_url}/cameras/{camera_name}/auto-recording/enable", timeout=5)
return response.status_code == 200
except Exception as e:
print(f"Error enabling auto-recording: {e}")
return False
def disable_auto_recording(self, camera_name: str) -> bool:
"""Disable auto-recording for a camera"""
try:
response = requests.post(f"{self.api_base_url}/cameras/{camera_name}/auto-recording/disable", timeout=5)
return response.status_code == 200
except Exception as e:
print(f"Error disabling auto-recording: {e}")
return False
def simulate_machine_state_change(self, machine_name: str, state: str):
"""Simulate a machine state change via event system"""
print(f"🔄 Simulating machine state change: {machine_name} -> {state}")
publish_machine_state_changed(machine_name, state, "test_script")
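        # Note: this publishes on an event bus inside this test process; a server
        # running as a separate process will not see it, so its machine-state
        # changes must arrive via real MQTT messages instead.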
def test_api_connectivity(self) -> bool:
"""Test API connectivity"""
available = self.check_api_available()
self.log_test("API Connectivity", available,
"API server is reachable" if available else "API server is not reachable")
return available
def test_auto_recording_status(self) -> bool:
"""Test auto-recording status endpoint"""
status = self.get_auto_recording_status()
success = bool(status and "running" in status)
self.log_test("Auto-Recording Status API", success,
f"Status: {status}" if success else "Failed to get status")
return success
def test_camera_auto_recording_config(self) -> bool:
"""Test camera auto-recording configuration"""
success = True
# Test enabling auto-recording for camera1
enabled = self.enable_auto_recording("camera1")
if enabled:
self.log_test("Enable Auto-Recording (camera1)", True, "Successfully enabled")
else:
self.log_test("Enable Auto-Recording (camera1)", False, "Failed to enable")
success = False
# Check camera status
time.sleep(1)
camera_status = self.get_camera_status("camera1")
auto_enabled = camera_status.get("auto_recording_enabled", False)
self.log_test("Auto-Recording Status Check", auto_enabled,
f"Camera1 auto-recording enabled: {auto_enabled}")
if not auto_enabled:
success = False
return success
def test_machine_state_simulation(self) -> bool:
"""Test machine state change simulation"""
try:
# Test vibratory conveyor (camera1)
self.simulate_machine_state_change("vibratory_conveyor", "on")
time.sleep(2)
camera_status = self.get_camera_status("camera1")
is_recording = camera_status.get("is_recording", False)
auto_active = camera_status.get("auto_recording_active", False)
self.log_test("Machine ON -> Recording Start", is_recording,
f"Camera1 recording: {is_recording}, auto-active: {auto_active}")
# Test turning machine off
time.sleep(3)
self.simulate_machine_state_change("vibratory_conveyor", "off")
time.sleep(2)
camera_status = self.get_camera_status("camera1")
is_recording_after = camera_status.get("is_recording", False)
auto_active_after = camera_status.get("auto_recording_active", False)
self.log_test("Machine OFF -> Recording Stop", not is_recording_after,
f"Camera1 recording: {is_recording_after}, auto-active: {auto_active_after}")
return is_recording and not is_recording_after
except Exception as e:
self.log_test("Machine State Simulation", False, f"Error: {e}")
return False
def test_retry_mechanism(self) -> bool:
"""Test retry mechanism for failed recording attempts"""
# This test would require simulating camera failures
# For now, we'll just check if the retry queue is accessible
try:
status = self.get_auto_recording_status()
retry_queue = status.get("retry_queue", {})
self.log_test("Retry Queue Access", True,
f"Retry queue accessible, current items: {len(retry_queue)}")
return True
except Exception as e:
self.log_test("Retry Queue Access", False, f"Error: {e}")
return False
def run_all_tests(self):
"""Run all auto-recording tests"""
print("🧪 Starting Auto-Recording Tests")
print("=" * 50)
# Check if system is running
if not self.test_api_connectivity():
print("\n❌ Cannot run tests - API server is not available")
print("Please start the USDA Vision System first:")
print(" python main.py")
return False
# Run tests
tests = [
self.test_auto_recording_status,
self.test_camera_auto_recording_config,
self.test_machine_state_simulation,
self.test_retry_mechanism,
]
passed = 0
total = len(tests)
for test in tests:
try:
if test():
passed += 1
time.sleep(1) # Brief pause between tests
except Exception as e:
self.log_test(test.__name__, False, f"Exception: {e}")
# Print summary
print("\n" + "=" * 50)
print(f"📊 Test Summary: {passed}/{total} tests passed")
if passed == total:
print("🎉 All auto-recording tests passed!")
return True
else:
print(f"⚠️ {total - passed} test(s) failed")
return False
def cleanup(self):
"""Cleanup after tests"""
print("\n🧹 Cleaning up...")
# Disable auto-recording for test cameras
self.disable_auto_recording("camera1")
self.disable_auto_recording("camera2")
# Turn off machines
self.simulate_machine_state_change("vibratory_conveyor", "off")
self.simulate_machine_state_change("blower_separator", "off")
print("✅ Cleanup completed")
def main():
"""Main test function"""
tester = AutoRecordingTester()
try:
success = tester.run_all_tests()
return 0 if success else 1
except KeyboardInterrupt:
print("\n⚠️ Tests interrupted by user")
return 1
except Exception as e:
print(f"\n❌ Test execution failed: {e}")
return 1
finally:
tester.cleanup()
if __name__ == "__main__":
exit_code = main()
sys.exit(exit_code)

View File

@@ -0,0 +1,193 @@
#!/usr/bin/env python3
"""
Test script to verify auto-recording functionality with simulated MQTT messages.
This script tests that:
1. Auto recording manager properly handles machine state changes
2. Recording starts when machine turns "on"
3. Recording stops when machine turns "off"
4. Camera configuration from config.json is used
"""
import sys
import os
import time
import logging
from datetime import datetime
# Add the project root (two levels above this file) to the Python path
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))
def setup_logging():
"""Setup logging for the test"""
logging.basicConfig(level=logging.INFO, format="%(asctime)s - %(name)s - %(levelname)s - %(message)s")
def test_auto_recording_with_mqtt():
"""Test auto recording functionality with simulated MQTT messages"""
print("🧪 Testing Auto Recording with MQTT Messages")
print("=" * 50)
try:
# Import required modules
from usda_vision_system.core.config import Config
from usda_vision_system.core.state_manager import StateManager
from usda_vision_system.core.events import EventSystem, EventType
from usda_vision_system.recording.auto_manager import AutoRecordingManager
print("✅ Modules imported successfully")
# Create system components
config = Config("config.json")
state_manager = StateManager()
event_system = EventSystem()
# Create a mock camera manager for testing
class MockCameraManager:
def __init__(self):
self.recording_calls = []
self.stop_calls = []
def manual_start_recording(self, camera_name, filename, exposure_ms=None, gain=None, fps=None):
call_info = {"camera_name": camera_name, "filename": filename, "exposure_ms": exposure_ms, "gain": gain, "fps": fps, "timestamp": datetime.now()}
self.recording_calls.append(call_info)
print(f"📹 MOCK: Starting recording for {camera_name}")
print(f" - Filename: {filename}")
print(f" - Settings: exposure={exposure_ms}ms, gain={gain}, fps={fps}")
return True
def manual_stop_recording(self, camera_name):
call_info = {"camera_name": camera_name, "timestamp": datetime.now()}
self.stop_calls.append(call_info)
print(f"⏹️ MOCK: Stopping recording for {camera_name}")
return True
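        # The mock mirrors CameraManager's manual_start_recording /
        # manual_stop_recording signatures, so AutoRecordingManager can be
        # exercised without any camera hardware attached.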
mock_camera_manager = MockCameraManager()
# Create auto recording manager
auto_manager = AutoRecordingManager(config, state_manager, event_system, mock_camera_manager)
print("✅ Auto recording manager created")
# Start the auto recording manager
if not auto_manager.start():
print("❌ Failed to start auto recording manager")
return False
print("✅ Auto recording manager started")
# Test 1: Simulate blower_separator turning ON (should trigger camera1)
print("\n🔄 Test 1: Blower separator turns ON")
print("📡 Publishing machine state change event...")
# Use the same event system instance that the auto manager is subscribed to
event_system.publish(EventType.MACHINE_STATE_CHANGED, "test_script", {"machine_name": "blower_separator", "state": "on", "previous_state": None})
time.sleep(1.0) # Give more time for event processing
print(f"📊 Total recording calls so far: {len(mock_camera_manager.recording_calls)}")
for call in mock_camera_manager.recording_calls:
print(f" - {call['camera_name']}: {call['filename']}")
# Check if recording was started for camera1
camera1_calls = [call for call in mock_camera_manager.recording_calls if call["camera_name"] == "camera1"]
if camera1_calls:
call = camera1_calls[-1]
print(f"✅ Camera1 recording started with config:")
print(f" - Exposure: {call['exposure_ms']}ms (expected: 0.3ms)")
print(f" - Gain: {call['gain']} (expected: 4.0)")
print(f" - FPS: {call['fps']} (expected: 0)")
# Verify settings match config.json
if call["exposure_ms"] == 0.3 and call["gain"] == 4.0 and call["fps"] == 0:
print("✅ Camera settings match config.json")
else:
print("❌ Camera settings don't match config.json")
return False
else:
print("❌ Camera1 recording was not started")
return False
# Test 2: Simulate vibratory_conveyor turning ON (should trigger camera2)
print("\n🔄 Test 2: Vibratory conveyor turns ON")
event_system.publish(EventType.MACHINE_STATE_CHANGED, "test_script", {"machine_name": "vibratory_conveyor", "state": "on", "previous_state": None})
time.sleep(0.5)
# Check if recording was started for camera2
camera2_calls = [call for call in mock_camera_manager.recording_calls if call["camera_name"] == "camera2"]
if camera2_calls:
call = camera2_calls[-1]
print(f"✅ Camera2 recording started with config:")
print(f" - Exposure: {call['exposure_ms']}ms (expected: 0.2ms)")
print(f" - Gain: {call['gain']} (expected: 2.0)")
print(f" - FPS: {call['fps']} (expected: 0)")
# Verify settings match config.json
if call["exposure_ms"] == 0.2 and call["gain"] == 2.0 and call["fps"] == 0:
print("✅ Camera settings match config.json")
else:
print("❌ Camera settings don't match config.json")
return False
else:
print("❌ Camera2 recording was not started")
return False
# Test 3: Simulate machines turning OFF
print("\n🔄 Test 3: Machines turn OFF")
event_system.publish(EventType.MACHINE_STATE_CHANGED, "test_script", {"machine_name": "blower_separator", "state": "off", "previous_state": None})
event_system.publish(EventType.MACHINE_STATE_CHANGED, "test_script", {"machine_name": "vibratory_conveyor", "state": "off", "previous_state": None})
time.sleep(0.5)
# Check if recordings were stopped
camera1_stops = [call for call in mock_camera_manager.stop_calls if call["camera_name"] == "camera1"]
camera2_stops = [call for call in mock_camera_manager.stop_calls if call["camera_name"] == "camera2"]
if camera1_stops and camera2_stops:
print("✅ Both cameras stopped recording when machines turned OFF")
else:
print(f"❌ Recording stop failed - Camera1 stops: {len(camera1_stops)}, Camera2 stops: {len(camera2_stops)}")
return False
# Stop the auto recording manager
auto_manager.stop()
print("✅ Auto recording manager stopped")
print("\n🎉 All auto recording tests passed!")
print("\n📊 Summary:")
print(f" - Total recording starts: {len(mock_camera_manager.recording_calls)}")
print(f" - Total recording stops: {len(mock_camera_manager.stop_calls)}")
print(f" - Camera1 starts: {len([c for c in mock_camera_manager.recording_calls if c['camera_name'] == 'camera1'])}")
print(f" - Camera2 starts: {len([c for c in mock_camera_manager.recording_calls if c['camera_name'] == 'camera2'])}")
return True
except Exception as e:
print(f"❌ Test failed with error: {e}")
import traceback
traceback.print_exc()
return False
def main():
"""Run the auto recording test"""
setup_logging()
success = test_auto_recording_with_mqtt()
if success:
print("\n✅ Auto recording functionality is working correctly!")
print("\n📝 The system should now properly:")
print(" 1. Start recording when machines turn ON")
print(" 2. Stop recording when machines turn OFF")
print(" 3. Use camera settings from config.json")
print(" 4. Generate appropriate filenames with timestamps")
else:
print("\n❌ Auto recording test failed!")
print("Please check the implementation and try again.")
return success
if __name__ == "__main__":
success = main()
sys.exit(0 if success else 1)

View File

@@ -0,0 +1,214 @@
#!/usr/bin/env python3
"""
Simple test script for auto-recording functionality.
This script performs basic checks to verify that the auto-recording feature
is properly integrated and configured.
"""
import sys
import os
import json
import time
# Add the current directory to Python path
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))
def test_config_structure():
"""Test that config.json has the required auto-recording fields"""
print("🔍 Testing configuration structure...")
try:
with open("config.json", "r") as f:
config = json.load(f)
# Check system-level auto-recording setting
system_config = config.get("system", {})
if "auto_recording_enabled" not in system_config:
print("❌ Missing 'auto_recording_enabled' in system config")
return False
print(f"✅ System auto-recording enabled: {system_config['auto_recording_enabled']}")
# Check camera-level auto-recording settings
cameras = config.get("cameras", [])
if not cameras:
print("❌ No cameras found in config")
return False
for camera in cameras:
camera_name = camera.get("name", "unknown")
required_fields = ["auto_start_recording_enabled", "auto_recording_max_retries", "auto_recording_retry_delay_seconds"]
missing_fields = [field for field in required_fields if field not in camera]
if missing_fields:
print(f"❌ Camera {camera_name} missing fields: {missing_fields}")
return False
print(f"✅ Camera {camera_name} auto-recording config:")
print(f" - Enabled: {camera['auto_start_recording_enabled']}")
print(f" - Max retries: {camera['auto_recording_max_retries']}")
print(f" - Retry delay: {camera['auto_recording_retry_delay_seconds']}s")
print(f" - Machine topic: {camera.get('machine_topic', 'unknown')}")
return True
except Exception as e:
print(f"❌ Error reading config: {e}")
return False
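
# For reference, the per-camera fields checked above look roughly like this in
# config.json (illustrative values, not the real configuration):
#   {
#     "name": "camera1",
#     "machine_topic": "blower_separator",
#     "auto_start_recording_enabled": true,
#     "auto_recording_max_retries": 3,
#     "auto_recording_retry_delay_seconds": 5
#   }
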
def test_module_imports():
"""Test that all required modules can be imported"""
print("\n🔍 Testing module imports...")
try:
from usda_vision_system.recording.auto_manager import AutoRecordingManager
print("✅ AutoRecordingManager imported successfully")
from usda_vision_system.core.config import Config
config = Config("config.json")
print("✅ Config loaded successfully")
from usda_vision_system.core.state_manager import StateManager
state_manager = StateManager()
print("✅ StateManager created successfully")
from usda_vision_system.core.events import EventSystem
event_system = EventSystem()
print("✅ EventSystem created successfully")
# Test creating AutoRecordingManager (without camera_manager for now)
auto_manager = AutoRecordingManager(config, state_manager, event_system, None)
print("✅ AutoRecordingManager created successfully")
return True
except Exception as e:
print(f"❌ Import error: {e}")
return False
def test_camera_mapping():
"""Test camera to machine topic mapping"""
print("\n🔍 Testing camera to machine mapping...")
try:
with open("config.json", "r") as f:
config = json.load(f)
cameras = config.get("cameras", [])
expected_mappings = {"camera1": "blower_separator", "camera2": "vibratory_conveyor"} # Blower separator # Conveyor/cracker cam
for camera in cameras:
camera_name = camera.get("name")
machine_topic = camera.get("machine_topic")
if camera_name in expected_mappings:
expected_topic = expected_mappings[camera_name]
if machine_topic == expected_topic:
print(f"{camera_name} correctly mapped to {machine_topic}")
else:
print(f"{camera_name} mapped to {machine_topic}, expected {expected_topic}")
return False
else:
print(f"⚠️ Unknown camera: {camera_name}")
return True
except Exception as e:
print(f"❌ Error checking mappings: {e}")
return False
def test_api_models():
"""Test that API models include auto-recording fields"""
print("\n🔍 Testing API models...")
try:
from usda_vision_system.api.models import CameraStatusResponse, CameraConfigResponse, AutoRecordingConfigRequest, AutoRecordingConfigResponse, AutoRecordingStatusResponse
# Check CameraStatusResponse has auto-recording fields
camera_response = CameraStatusResponse(name="test", status="available", is_recording=False, last_checked="2024-01-01T00:00:00", auto_recording_enabled=True, auto_recording_active=False, auto_recording_failure_count=0)
print("✅ CameraStatusResponse includes auto-recording fields")
# Check CameraConfigResponse has auto-recording fields
config_response = CameraConfigResponse(
name="test",
machine_topic="test_topic",
storage_path="/test",
enabled=True,
auto_start_recording_enabled=True,
auto_recording_max_retries=3,
auto_recording_retry_delay_seconds=5,
exposure_ms=1.0,
gain=1.0,
target_fps=30.0,
sharpness=100,
contrast=100,
saturation=100,
gamma=100,
noise_filter_enabled=False,
denoise_3d_enabled=False,
auto_white_balance=True,
color_temperature_preset=0,
wb_red_gain=1.0,
wb_green_gain=1.0,
wb_blue_gain=1.0,
anti_flicker_enabled=False,
light_frequency=1,
bit_depth=8,
hdr_enabled=False,
hdr_gain_mode=0,
)
print("✅ CameraConfigResponse includes auto-recording fields")
print("✅ All auto-recording API models available")
return True
except Exception as e:
print(f"❌ API model error: {e}")
return False
def main():
"""Run all basic tests"""
print("🧪 Auto-Recording Integration Test")
print("=" * 40)
tests = [test_config_structure, test_module_imports, test_camera_mapping, test_api_models]
passed = 0
total = len(tests)
for test in tests:
try:
if test():
passed += 1
except Exception as e:
print(f"❌ Test {test.__name__} failed with exception: {e}")
print("\n" + "=" * 40)
print(f"📊 Results: {passed}/{total} tests passed")
if passed == total:
print("🎉 All integration tests passed!")
print("\n📝 Next steps:")
print("1. Start the system: python main.py")
print("2. Run full tests: python tests/test_auto_recording.py")
print("3. Test with MQTT messages to trigger auto-recording")
return True
else:
print(f"⚠️ {total - passed} test(s) failed")
print("Please fix the issues before running the full system")
return False
if __name__ == "__main__":
success = main()
sys.exit(0 if success else 1)

Some files were not shown because too many files have changed in this diff.