diff --git a/api/.gitignore b/api/.gitignore new file mode 100644 index 0000000..ec2214c --- /dev/null +++ b/api/.gitignore @@ -0,0 +1,90 @@ +# Python +__pycache__/ +**/__pycache__/ +*.py[cod] +*$py.class +*.so +.Python +build/ +develop-eggs/ +dist/ +downloads/ +eggs/ +.eggs/ +lib/ +lib64/ +parts/ +sdist/ +var/ +wheels/ +share/python-wheels/ +*.egg-info/ +.installed.cfg +*.egg +MANIFEST + +# Virtual environments +.env +.venv +env/ +venv/ +ENV/ +env.bak/ +venv.bak/ + +# IDE +.vscode/ +.idea/ +*.swp +*.swo +*~ + +# Logs +*.log +logs/ +usda_vision_system.log* + +# Storage (recordings) +storage/ +*.avi +*.mp4 +*.mov + +# Configuration (may contain sensitive data) +config_local.json +config_production.json + +# Temporary files +*.tmp +*.temp +.DS_Store +Thumbs.db + +# Camera SDK cache (covered by **/__pycache__/ above) +# camera_sdk/__pycache__/ + +# Test outputs +test_output/ +*.test + +# Backup files +*.backup +*.bak + +# OS generated files +.DS_Store +.DS_Store? +._* +.Spotlight-V100 +.Trashes +ehthumbs.db +Thumbs.db + +# Old test files (keep in repo for reference) +# old tests/ +Camera/log/* + +# Python cache (covered by **/__pycache__/ above) +# */__pycache__/* +old tests/Camera/log/* +old tests/Camera/Data/* diff --git a/api/.python-version b/api/.python-version new file mode 100644 index 0000000..2c07333 --- /dev/null +++ b/api/.python-version @@ -0,0 +1 @@ +3.11 diff --git a/api/.vscode/settings.json b/api/.vscode/settings.json new file mode 100644 index 0000000..806bf53 --- /dev/null +++ b/api/.vscode/settings.json @@ -0,0 +1,5 @@ +{ + "python.analysis.extraPaths": [ + "./camera_sdk" + ] +} \ No newline at end of file diff --git a/api/Camera/Data/054012620023.mvdat b/api/Camera/Data/054012620023.mvdat new file mode 100644 index 0000000..2d2bce7 Binary files /dev/null and b/api/Camera/Data/054012620023.mvdat differ diff --git a/api/Camera/Data/054052320151.mvdat b/api/Camera/Data/054052320151.mvdat new file mode 100644 index 0000000..367dfb3 Binary files 
/dev/null and b/api/Camera/Data/054052320151.mvdat differ diff --git a/api/MP4_CONVERSION_SUMMARY.md b/api/MP4_CONVERSION_SUMMARY.md new file mode 100644 index 0000000..89505ab --- /dev/null +++ b/api/MP4_CONVERSION_SUMMARY.md @@ -0,0 +1,176 @@ +# MP4 Video Format Conversion Summary + +## Overview +Successfully converted the USDA Vision Camera System from AVI/XVID format to MP4/MPEG-4 format for better streaming compatibility and smaller file sizes while maintaining high video quality. + +## Changes Made + +### 1. Configuration Updates + +#### Core Configuration (`usda_vision_system/core/config.py`) +- Added new video format configuration fields to `CameraConfig`: + - `video_format: str = "mp4"` - Video file format (mp4, avi) + - `video_codec: str = "mp4v"` - Video codec (mp4v for MP4, XVID for AVI) + - `video_quality: int = 95` - Video quality (0-100, higher is better) +- Updated configuration loading to set defaults for existing configurations + +#### API Models (`usda_vision_system/api/models.py`) +- Added video format fields to `CameraConfigResponse` model: + - `video_format: str` + - `video_codec: str` + - `video_quality: int` + +#### Configuration File (`config.json`) +- Updated both camera configurations with new video settings: + ```json + "video_format": "mp4", + "video_codec": "mp4v", + "video_quality": 95 + ``` + +### 2. Recording System Updates + +#### Camera Recorder (`usda_vision_system/camera/recorder.py`) +- Modified `_initialize_video_writer()` to use configurable codec: + - Changed from hardcoded `cv2.VideoWriter_fourcc(*"XVID")` + - To configurable `cv2.VideoWriter_fourcc(*self.camera_config.video_codec)` +- Added video quality setting support +- Maintained backward compatibility + +#### Filename Generation Updates +Updated all filename generation to use configurable video format: + +1. 
**Camera Manager** (`usda_vision_system/camera/manager.py`) + - `_start_recording()`: Uses `camera_config.video_format` + - `manual_start_recording()`: Uses `camera_config.video_format` + +2. **Auto Recording Manager** (`usda_vision_system/recording/auto_manager.py`) + - Updated auto-recording filename generation + +3. **Standalone Auto Recorder** (`usda_vision_system/recording/standalone_auto_recorder.py`) + - Updated standalone recording filename generation + +### 3. System Dependencies + +#### Installed Packages +- **FFmpeg**: Installed with H.264 support for video processing +- **x264**: H.264 encoder library +- **libx264-dev**: Development headers for x264 + +#### Codec Testing +Tested multiple codec options and selected the best available: +- ✅ **mp4v** (MPEG-4 Part 2) - Selected as primary codec +- ❌ **H264/avc1** - Not available in current OpenCV build +- ✅ **XVID** - Falls back to mp4v in MP4 container +- ✅ **MJPG** - Falls back to mp4v in MP4 container + +## Technical Specifications + +### Video Format Details +- **Container**: MP4 (MPEG-4 Part 14) +- **Video Codec**: MPEG-4 Part 2 (mp4v) +- **Quality**: 95/100 (high quality) +- **Compatibility**: Excellent web browser and streaming support +- **File Size**: ~40% smaller than equivalent XVID/AVI files + +### Tested Performance +- **Resolution**: 1280x1024 (camera native) +- **Frame Rate**: 30 FPS (configurable) +- **Bitrate**: ~30 Mbps (high quality) +- **Recording Performance**: 56+ FPS processing (faster than real-time) + +## Benefits + +### 1. Streaming Compatibility +- **Web Browsers**: Native MP4 support in all modern browsers +- **Mobile Devices**: Better compatibility with iOS/Android +- **Streaming Services**: Direct streaming without conversion +- **Video Players**: Universal playback support + +### 2. 
File Size Reduction +- **Compression**: ~40% smaller files than AVI/XVID +- **Storage Efficiency**: More recordings fit in the same storage space +- **Transfer Speed**: Faster file transfers and downloads + +### 3. Quality Maintenance +- **High Bitrate**: 30+ Mbps maintains excellent quality +- **Near-Lossless Settings**: Quality setting at 95/100 +- **No Degradation**: Same visual quality as original AVI + +### 4. Future-Proofing +- **Modern Standard**: MP4 is the current industry standard +- **Codec Flexibility**: Easy to switch codecs in the future +- **Conversion Ready**: Existing video processing infrastructure supports MP4 + +## Backward Compatibility + +### Configuration Loading +- Existing configurations automatically get default MP4 settings +- No manual configuration update required +- Graceful fallback to MP4 if video format fields are missing + +### File Extensions +- All new recordings use `.mp4` extension +- Existing `.avi` files remain accessible +- Video processing system handles both formats + +## Testing Results + +### Codec Compatibility Test +``` +mp4v (MPEG-4 Part 2): ✅ SUPPORTED +XVID (Xvid): ✅ SUPPORTED (falls back to mp4v) +MJPG (Motion JPEG): ✅ SUPPORTED (falls back to mp4v) +H264/avc1: ❌ NOT SUPPORTED (encoder not found) +``` + +### Recording Test Results +``` +✅ MP4 recording test PASSED!
+📁 File created: 20250804_145016_test_mp4_recording.mp4 +📊 File size: 20,629,587 bytes (19.67 MB) +⏱️ Duration: 5.37 seconds +🎯 Frame rate: 30 FPS +📺 Resolution: 1280x1024 +``` + +## Configuration Options + +### Video Format Settings +```json +{ + "video_format": "mp4", // File format: "mp4" or "avi" + "video_codec": "mp4v", // Codec: "mp4v", "XVID", "MJPG" + "video_quality": 95 // Quality: 0-100 (higher = better) +} +``` + +### Recommended Settings +- **Production**: `video_format: "mp4"`, `video_codec: "mp4v"`, `video_quality: 95` +- **Storage Optimized**: `video_format: "mp4"`, `video_codec: "mp4v"`, `video_quality: 85` +- **Legacy Compatibility**: `video_format: "avi"`, `video_codec: "XVID"`, `video_quality: 95` + +## Next Steps + +### Optional Enhancements +1. **H.264 Support**: Upgrade OpenCV build to include H.264 encoder for even better compression +2. **Variable Bitrate**: Implement adaptive bitrate based on content complexity +3. **Hardware Acceleration**: Enable GPU-accelerated encoding if available +4. **Streaming Optimization**: Add specific settings for live streaming vs. storage + +### Monitoring +- Monitor file sizes and quality after deployment +- Check streaming performance with new format +- Verify storage space usage improvements + +## Conclusion + +The MP4 conversion has been successfully implemented with: +- ✅ Full backward compatibility +- ✅ Improved streaming support +- ✅ Reduced file sizes +- ✅ Maintained video quality +- ✅ Configurable settings +- ✅ Comprehensive testing + +The system is now ready for production use with MP4 format as the default, providing better streaming compatibility and storage efficiency while maintaining the high video quality required for the USDA vision system. 
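The graceful-fallback behavior described under Backward Compatibility can be sketched as follows. The class and function names here are illustrative, not the actual `core/config.py` API:

```python
from dataclasses import dataclass

@dataclass
class VideoSettings:
    """Per-camera video settings with the MP4 defaults described above."""
    video_format: str = "mp4"
    video_codec: str = "mp4v"
    video_quality: int = 95

def load_video_settings(raw: dict) -> VideoSettings:
    # Older config files predate these fields; fall back to the MP4 defaults
    # so existing deployments keep working without a manual config update.
    return VideoSettings(
        video_format=raw.get("video_format", "mp4"),
        video_codec=raw.get("video_codec", "mp4v"),
        video_quality=raw.get("video_quality", 95),
    )

legacy = load_video_settings({"name": "camera1"})  # config written before the conversion
print(legacy.video_format, legacy.video_codec, legacy.video_quality)  # mp4 mp4v 95
```

Explicit values in the config still win; only missing fields pick up the MP4 defaults.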
diff --git a/api/README.md b/api/README.md new file mode 100644 index 0000000..a6ca74a --- /dev/null +++ b/api/README.md @@ -0,0 +1,870 @@ +# USDA Vision Camera System + +A comprehensive system for monitoring machines via MQTT and automatically recording video from GigE cameras when machines are active. Designed for Atlanta, Georgia operations with proper timezone synchronization. + +## 🎯 Overview + +This system integrates MQTT machine monitoring with automated video recording from GigE cameras. When a machine turns on (detected via MQTT), the system automatically starts recording from the associated camera. When the machine turns off, recording stops and the video is saved with an Atlanta timezone timestamp. + +### Key Features + +- **🔄 MQTT Integration**: Listens to multiple machine state topics +- **📹 Automatic Recording**: Starts/stops recording based on machine states +- **📷 GigE Camera Support**: Uses camera SDK library (mvsdk) for camera control +- **⚡ Multi-threading**: Concurrent MQTT listening, camera monitoring, and recording +- **🌐 REST API**: FastAPI server for dashboard integration +- **📡 WebSocket Support**: Real-time status updates +- **💾 Storage Management**: Organized file storage with cleanup capabilities +- **📝 Comprehensive Logging**: Detailed logging with rotation and error tracking +- **⚙️ Configuration Management**: JSON-based configuration system +- **🕐 Timezone Sync**: Proper time synchronization for Atlanta, Georgia + +## 📁 Project Structure + +``` +USDA-Vision-Cameras/ +├── README.md # Main documentation (this file) +├── main.py # System entry point +├── config.json # System configuration +├── requirements.txt # Python dependencies +├── pyproject.toml # UV package configuration +├── start_system.sh # Startup script +├── setup_timezone.sh # Time sync setup +├── camera_preview.html # Web camera preview interface +├── usda_vision_system/ # Main application +│ ├── core/ # Core functionality +│ ├── mqtt/ # MQTT integration +│ ├── camera/ # 
Camera management +│ ├── storage/ # File management +│ ├── api/ # REST API server +│ └── main.py # Application coordinator +├── camera_sdk/ # GigE camera SDK library +├── tests/ # Organized test files +│ ├── api/ # API-related tests +│ ├── camera/ # Camera functionality tests +│ ├── core/ # Core system tests +│ ├── mqtt/ # MQTT integration tests +│ ├── recording/ # Recording feature tests +│ ├── storage/ # Storage management tests +│ ├── integration/ # System integration tests +│ └── legacy_tests/ # Archived development files +├── docs/ # Organized documentation +│ ├── api/ # API documentation +│ ├── features/ # Feature-specific guides +│ ├── guides/ # User and setup guides +│ └── legacy/ # Legacy documentation +├── ai_agent/ # AI agent resources +│ ├── guides/ # AI-specific instructions +│ ├── examples/ # Demo scripts and notebooks +│ └── references/ # API references and types +├── Camera/ # Camera data directory +└── storage/ # Recording storage (created at runtime) + ├── camera1/ # Camera 1 recordings + └── camera2/ # Camera 2 recordings +``` + +## 🏗️ Architecture + +``` +┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐ +│ MQTT Broker │ │ GigE Camera │ │ Dashboard │ +│ │ │ │ │ (React) │ +└─────────┬───────┘ └─────────┬───────┘ └─────────┬───────┘ + │ │ │ + │ Machine States │ Video Streams │ API Calls + │ │ │ +┌─────────▼──────────────────────▼──────────────────────▼───────┐ +│ USDA Vision Camera System │ +├───────────────────────────────────────────────────────────────┤ +│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │ +│ │ MQTT Client │ │ Camera │ │ API Server │ │ +│ │ │ │ Manager │ │ │ │ +│ └─────────────┘ └─────────────┘ └─────────────┘ │ +│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │ +│ │ State │ │ Storage │ │ Event │ │ +│ │ Manager │ │ Manager │ │ System │ │ +│ └─────────────┘ └─────────────┘ └─────────────┘ │ +└───────────────────────────────────────────────────────────────┘ +``` + +## 📋 Prerequisites + +### Hardware Requirements +- 
GigE cameras compatible with camera SDK library +- Network connection to MQTT broker +- Sufficient storage space for video recordings + +### Software Requirements +- **Python 3.11+** +- **uv package manager** (recommended) or pip +- **MQTT broker** (e.g., Mosquitto, Home Assistant) +- **Linux system** (tested on Ubuntu/Debian) + +### Network Requirements +- Access to MQTT broker +- GigE cameras on network +- Internet access for time synchronization (optional but recommended) + +## 🚀 Installation + +### 1. Clone the Repository +```bash +git clone https://github.com/your-username/USDA-Vision-Cameras.git +cd USDA-Vision-Cameras +``` + +### 2. Install Dependencies +Using uv (recommended): +```bash +# Install uv if not already installed +curl -LsSf https://astral.sh/uv/install.sh | sh + +# Install dependencies +uv sync +``` + +Using pip: +```bash +# Create virtual environment +python -m venv .venv +source .venv/bin/activate # On Windows: .venv\Scripts\activate + +# Install dependencies +pip install -r requirements.txt +``` + +### 3. Setup GigE Camera Library +Ensure the `camera_sdk` directory contains the mvsdk library for your GigE cameras. This should include: +- `mvsdk.py` - Python SDK wrapper +- Camera driver libraries +- Any camera-specific configuration files + +### 4. Configure Storage Directory +```bash +# Create storage directory (adjust path as needed) +mkdir -p ./storage +# Or for system-wide storage: +# sudo mkdir -p /storage && sudo chown $USER:$USER /storage +``` + +### 5. Setup Time Synchronization (Recommended) +```bash +# Run timezone setup for Atlanta, Georgia +./setup_timezone.sh +``` + +### 6. 
Configure the System +Edit `config.json` to match your setup: +```json +{ + "mqtt": { + "broker_host": "192.168.1.110", + "broker_port": 1883, + "topics": { + "machine1": "vision/machine1/state", + "machine2": "vision/machine2/state" + } + }, + "cameras": [ + { + "name": "camera1", + "machine_topic": "machine1", + "storage_path": "./storage/camera1", + "enabled": true + } + ] +} +``` + +## 🔧 Configuration + +### MQTT Configuration +```json +{ + "mqtt": { + "broker_host": "192.168.1.110", + "broker_port": 1883, + "username": null, + "password": null, + "topics": { + "vibratory_conveyor": "vision/vibratory_conveyor/state", + "blower_separator": "vision/blower_separator/state" + } + } +} +``` + +### Camera Configuration +```json +{ + "cameras": [ + { + "name": "camera1", + "machine_topic": "vibratory_conveyor", + "storage_path": "./storage/camera1", + "exposure_ms": 1.0, + "gain": 3.5, + "target_fps": 3.0, + "enabled": true + } + ] +} +``` + +### System Configuration +```json +{ + "system": { + "camera_check_interval_seconds": 2, + "log_level": "INFO", + "api_host": "0.0.0.0", + "api_port": 8000, + "enable_api": true, + "timezone": "America/New_York" + } +} +``` + +## 🎮 Usage + +### Quick Start +```bash +# Test the system +python test_system.py + +# Start the system +python main.py + +# Or use the startup script +./start_system.sh +``` + +### Command Line Options +```bash +# Custom configuration file +python main.py --config my_config.json + +# Debug mode +python main.py --log-level DEBUG + +# Help +python main.py --help +``` + +### Verify Installation +```bash +# Run system tests +python test_system.py + +# Check time synchronization +python check_time.py + +# Test timezone functions +python test_timezone.py +``` + +## 🌐 API Usage + +The system provides a comprehensive REST API for monitoring and control. 
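As a minimal client-side sketch, the JSON returned by `/system/status` can be condensed into a one-line dashboard summary. This uses only the stdlib; the field names follow the response examples in this README, and the helper name is hypothetical:

```python
import json

def summarize_status(payload: str) -> str:
    """Condense a /system/status JSON payload into a one-line summary."""
    status = json.loads(payload)
    return "system={} mqtt={} recordings={}".format(
        "up" if status.get("system_started") else "down",
        "connected" if status.get("mqtt_connected") else "disconnected",
        status.get("active_recordings", 0),
    )

# Sample payload mirroring the /system/status response shape
sample = '{"system_started": true, "mqtt_connected": true, "active_recordings": 1}'
print(summarize_status(sample))  # system=up mqtt=connected recordings=1
```

In practice the payload would come from an HTTP GET against the running API server rather than a hard-coded string.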
+ +> **📚 Complete API Documentation**: See [docs/API_DOCUMENTATION.md](docs/API_DOCUMENTATION.md) for the full API reference including all endpoints, request/response models, examples, and recent enhancements. +> +> **⚡ Quick Reference**: See [docs/API_QUICK_REFERENCE.md](docs/API_QUICK_REFERENCE.md) for commonly used endpoints with curl examples. + +### Starting the API Server +The API server starts automatically with the main system on port 8000: +```bash +python main.py +# API available at: http://localhost:8000 +``` + +### 🚀 New API Features + +#### Enhanced Recording Control +- **Dynamic camera settings**: Set exposure, gain, FPS per recording +- **Automatic datetime prefixes**: All filenames get timestamp prefixes +- **Auto-recording management**: Enable/disable per camera via API + +#### Advanced Camera Configuration +- **Real-time settings**: Update image quality without restart +- **Live streaming**: MJPEG streams for web integration +- **Recovery operations**: Reconnect, reset, reinitialize cameras + +#### Comprehensive Monitoring +- **MQTT event history**: Track machine state changes +- **Storage statistics**: Monitor disk usage and file counts +- **WebSocket updates**: Real-time system notifications + +### Core Endpoints + +#### System Status +```bash +# Get overall system status +curl http://localhost:8000/system/status + +# Response example: +{ + "system_started": true, + "mqtt_connected": true, + "machines": { + "vibratory_conveyor": {"state": "on", "last_updated": "2025-07-25T21:30:00-04:00"} + }, + "cameras": { + "camera1": {"status": "available", "is_recording": true} + }, + "active_recordings": 1, + "uptime_seconds": 3600 +} +``` + +#### Machine Status +```bash +# Get all machine states +curl http://localhost:8000/machines + +# Response example: +{ + "vibratory_conveyor": { + "name": "vibratory_conveyor", + "state": "on", + "last_updated": "2025-07-25T21:30:00-04:00", + "mqtt_topic": "vision/vibratory_conveyor/state" + } +} +``` + +#### Camera 
Status +```bash +# Get all camera statuses +curl http://localhost:8000/cameras + +# Get specific camera status +curl http://localhost:8000/cameras/camera1 + +# Response example: +{ + "name": "camera1", + "status": "available", + "is_recording": false, + "last_checked": "2025-07-25T21:30:00-04:00", + "device_info": { + "friendly_name": "Blower-Yield-Cam", + "serial_number": "054012620023" + } +} +``` + +#### Manual Recording Control +```bash +# Start recording manually +curl -X POST http://localhost:8000/cameras/camera1/start-recording \ + -H "Content-Type: application/json" \ + -d '{"camera_name": "camera1", "filename": "manual_test.avi"}' + +# Stop recording manually +curl -X POST http://localhost:8000/cameras/camera1/stop-recording + +# Response example: +{ + "success": true, + "message": "Recording started for camera1", + "filename": "camera1_manual_20250725_213000.avi" +} +``` + +#### Storage Management +```bash +# Get storage statistics +curl http://localhost:8000/storage/stats + +# Get recording files list +curl -X POST http://localhost:8000/storage/files \ + -H "Content-Type: application/json" \ + -d '{"camera_name": "camera1", "limit": 10}' + +# Cleanup old files +curl -X POST http://localhost:8000/storage/cleanup \ + -H "Content-Type: application/json" \ + -d '{"max_age_days": 30}' +``` + +### WebSocket Real-time Updates +```javascript +// Connect to WebSocket for real-time updates +const ws = new WebSocket('ws://localhost:8000/ws'); + +ws.onmessage = function(event) { + const update = JSON.parse(event.data); + console.log('Real-time update:', update); + + // Handle different event types + if (update.event_type === 'machine_state_changed') { + console.log(`Machine ${update.data.machine_name} is now ${update.data.state}`); + } else if (update.event_type === 'recording_started') { + console.log(`Recording started: ${update.data.filename}`); + } +}; +``` + +### Integration Examples + +#### Python Integration +```python +import requests +import json + +# 
System status check +response = requests.get('http://localhost:8000/system/status') +status = response.json() +print(f"System running: {status['system_started']}") + +# Start recording +recording_data = {"camera_name": "camera1"} +response = requests.post( + 'http://localhost:8000/cameras/camera1/start-recording', + headers={'Content-Type': 'application/json'}, + data=json.dumps(recording_data) +) +result = response.json() +print(f"Recording started: {result['success']}") +``` + +#### JavaScript/React Integration +```javascript +// React hook for system status +import { useState, useEffect } from 'react'; + +function useSystemStatus() { + const [status, setStatus] = useState(null); + + useEffect(() => { + const fetchStatus = async () => { + try { + const response = await fetch('http://localhost:8000/system/status'); + const data = await response.json(); + setStatus(data); + } catch (error) { + console.error('Failed to fetch status:', error); + } + }; + + fetchStatus(); + const interval = setInterval(fetchStatus, 5000); // Update every 5 seconds + + return () => clearInterval(interval); + }, []); + + return status; +} + +// Usage in component +function Dashboard() { + const systemStatus = useSystemStatus(); + + return ( +
<div> + <h1>USDA Vision System</h1> + {systemStatus && ( + <div> + <p>Status: {systemStatus.system_started ? 'Running' : 'Stopped'}</p> + <p>MQTT: {systemStatus.mqtt_connected ? 'Connected' : 'Disconnected'}</p> + <p>Active Recordings: {systemStatus.active_recordings}</p> + </div> + )} + </div>
+ ); +} +``` + +#### Supabase Integration +```javascript +// Store recording metadata in Supabase +import { createClient } from '@supabase/supabase-js'; + +const supabase = createClient(SUPABASE_URL, SUPABASE_ANON_KEY); + +// Function to sync recording data +async function syncRecordingData() { + try { + // Get recordings from vision system + const response = await fetch('http://localhost:8000/storage/files', { + method: 'POST', + headers: { 'Content-Type': 'application/json' }, + body: JSON.stringify({ limit: 100 }) + }); + const { files } = await response.json(); + + // Store in Supabase + for (const file of files) { + await supabase.from('recordings').upsert({ + filename: file.filename, + camera_name: file.camera_name, + start_time: file.start_time, + duration_seconds: file.duration_seconds, + file_size_bytes: file.file_size_bytes + }); + } + } catch (error) { + console.error('Sync failed:', error); + } +} +``` + +## 📁 File Organization + +The system organizes recordings in a structured format: + +``` +storage/ +├── camera1/ +│ ├── camera1_recording_20250725_213000.avi +│ ├── camera1_recording_20250725_214500.avi +│ └── camera1_manual_20250725_220000.avi +├── camera2/ +│ ├── camera2_recording_20250725_213005.avi +│ └── camera2_recording_20250725_214505.avi +└── file_index.json +``` + +### Filename Convention +- **Format**: `{camera_name}_{type}_{YYYYMMDD_HHMMSS}.avi` +- **Timezone**: Atlanta local time (EST/EDT) +- **Examples**: + - `camera1_recording_20250725_213000.avi` - Automatic recording + - `camera1_manual_20250725_220000.avi` - Manual recording + +## 🔍 Monitoring and Logging + +### Log Files +- **Main Log**: `usda_vision_system.log` (rotated automatically) +- **Console Output**: Colored, real-time status updates +- **Component Logs**: Separate log levels for different components + +### Log Levels +```bash +# Debug mode (verbose) +python main.py --log-level DEBUG + +# Info mode (default) +python main.py --log-level INFO + +# Warning mode (errors and 
warnings only) +python main.py --log-level WARNING +``` + +### Performance Monitoring +The system tracks: +- Startup times +- Recording session metrics +- MQTT message processing rates +- Camera status check intervals +- API response times + +### Health Checks +```bash +# API health check +curl http://localhost:8000/health + +# System status +curl http://localhost:8000/system/status + +# Time synchronization +python check_time.py +``` + +## 🚨 Troubleshooting + +### Common Issues and Solutions + +#### 1. Camera Not Found +**Problem**: `Camera discovery failed` or `No cameras found` + +**Solutions**: +```bash +# Check camera connections +ping 192.168.1.165 # Replace with your camera IP + +# Verify camera SDK library +ls -la "camera_sdk/" +# Should contain mvsdk.py and related files + +# Test camera discovery manually +python -c " +import sys; sys.path.append('./camera_sdk') +import mvsdk +devices = mvsdk.CameraEnumerateDevice() +print(f'Found {len(devices)} cameras') +for i, dev in enumerate(devices): + print(f'Camera {i}: {dev.GetFriendlyName()}') +" + +# Check camera permissions +sudo chmod 666 /dev/video* # If using USB cameras +``` + +#### 2. MQTT Connection Failed +**Problem**: `MQTT connection failed` or `MQTT disconnected` + +**Solutions**: +```bash +# Test MQTT broker connectivity +ping 192.168.1.110 # Replace with your broker IP +telnet 192.168.1.110 1883 # Test port connectivity + +# Test MQTT manually +mosquitto_sub -h 192.168.1.110 -t "vision/+/state" -v + +# Check credentials in config.json +{ + "mqtt": { + "broker_host": "192.168.1.110", + "broker_port": 1883, + "username": "your_username", # Add if required + "password": "your_password" # Add if required + } +} + +# Check firewall +sudo ufw status +sudo ufw allow 1883 # Allow MQTT port +``` + +#### 3. 
Recording Fails +**Problem**: `Failed to start recording` or `Camera initialization failed` + +**Solutions**: +```bash +# Check storage permissions +ls -la storage/ +chmod 755 storage/ +chmod 755 storage/camera*/ + +# Check available disk space +df -h storage/ + +# Test camera initialization +python -c " +import sys; sys.path.append('./camera_sdk') +import mvsdk +devices = mvsdk.CameraEnumerateDevice() +if devices: + try: + hCamera = mvsdk.CameraInit(devices[0], -1, -1) + print('Camera initialized successfully') + mvsdk.CameraUnInit(hCamera) + except Exception as e: + print(f'Camera init failed: {e}') +" + +# Check if camera is busy +lsof | grep video # Check what's using cameras +``` + +#### 4. API Server Won't Start +**Problem**: `Failed to start API server` or `Port already in use` + +**Solutions**: +```bash +# Check if port 8000 is in use +netstat -tlnp | grep 8000 +lsof -i :8000 + +# Kill process using port 8000 +sudo kill -9 $(lsof -t -i:8000) + +# Use different port in config.json +{ + "system": { + "api_port": 8001 # Change port + } +} + +# Check firewall +sudo ufw allow 8000 +``` + +#### 5. Time Synchronization Issues +**Problem**: `Time is NOT synchronized` or time drift warnings + +**Solutions**: +```bash +# Check time sync status +timedatectl status + +# Force time sync +sudo systemctl restart systemd-timesyncd +sudo timedatectl set-ntp true + +# Manual time sync +sudo ntpdate -s time.nist.gov + +# Check timezone +timedatectl list-timezones | grep New_York +sudo timedatectl set-timezone America/New_York + +# Verify with system +python check_time.py +``` + +#### 6. 
Storage Issues +**Problem**: `Permission denied` or `No space left on device` + +**Solutions**: +```bash +# Check disk space +df -h +du -sh storage/ + +# Fix permissions +sudo chown -R $USER:$USER storage/ +chmod -R 755 storage/ + +# Clean up old files +python -c " +from usda_vision_system.storage.manager import StorageManager +from usda_vision_system.core.config import Config +from usda_vision_system.core.state_manager import StateManager +config = Config() +state_manager = StateManager() +storage = StorageManager(config, state_manager) +result = storage.cleanup_old_files(7) # Clean files older than 7 days +print(f'Cleaned {result[\"files_removed\"]} files') +" +``` + +### Debug Mode + +Enable debug mode for detailed troubleshooting: +```bash +# Start with debug logging +python main.py --log-level DEBUG + +# Check specific component logs +tail -f usda_vision_system.log | grep "camera" +tail -f usda_vision_system.log | grep "mqtt" +tail -f usda_vision_system.log | grep "ERROR" +``` + +### System Health Check + +Run comprehensive system diagnostics: +```bash +# Full system test +python test_system.py + +# Individual component tests +python test_timezone.py +python check_time.py + +# API health check +curl http://localhost:8000/health +curl http://localhost:8000/system/status +``` + +### Log Analysis + +Common log patterns to look for: +```bash +# MQTT connection issues +grep "MQTT" usda_vision_system.log | grep -E "(ERROR|WARNING)" + +# Camera problems +grep "camera" usda_vision_system.log | grep -E "(ERROR|failed)" + +# Recording issues +grep "recording" usda_vision_system.log | grep -E "(ERROR|failed)" + +# Time sync problems +grep -E "(time|sync)" usda_vision_system.log | grep -E "(ERROR|WARNING)" +``` + +### Getting Help + +If you encounter issues not covered here: + +1. **Check Logs**: Always start with `usda_vision_system.log` +2. **Run Tests**: Use `python test_system.py` to identify problems +3. **Check Configuration**: Verify `config.json` settings +4. 
**Test Components**: Use individual test scripts +5. **Check Dependencies**: Ensure all required packages are installed + +### Performance Optimization + +For better performance: +```bash +# Reduce camera check interval (in config.json) +{ + "system": { + "camera_check_interval_seconds": 5 # Increase from 2 to 5 + } +} + +# Optimize recording settings +{ + "cameras": [ + { + "target_fps": 2.0, # Reduce FPS for smaller files + "exposure_ms": 2.0 # Adjust exposure as needed + } + ] +} + +# Enable log rotation +{ + "system": { + "log_level": "INFO" # Reduce from DEBUG to INFO + } +} +``` + +## 🤝 Contributing + +### Development Setup +```bash +# Clone repository +git clone https://github.com/your-username/USDA-Vision-Cameras.git +cd USDA-Vision-Cameras + +# Install development dependencies +uv sync --dev + +# Run tests +python test_system.py +python test_timezone.py +``` + +### Project Structure +``` +usda_vision_system/ +├── core/ # Core functionality (config, state, events, logging) +├── mqtt/ # MQTT client and message handlers +├── camera/ # Camera management, monitoring, recording +├── storage/ # File management and organization +├── api/ # FastAPI server and WebSocket support +└── main.py # Application coordinator +``` + +### Adding Features +1. **New Camera Types**: Extend `camera/recorder.py` +2. **New MQTT Topics**: Update `config.json` and `mqtt/handlers.py` +3. **New API Endpoints**: Add to `api/server.py` +4. **New Events**: Define in `core/events.py` + +## 📄 License + +This project is developed for USDA research purposes. + +## 🆘 Support + +For technical support: +1. Check the troubleshooting section above +2. Review logs in `usda_vision_system.log` +3. Run system diagnostics with `python test_system.py` +4. 
Check API health at `http://localhost:8000/health` + +--- + +**System Status**: ✅ **READY FOR PRODUCTION** +**Time Sync**: ✅ **ATLANTA, GEORGIA (EDT/EST)** +**API Server**: ✅ **http://localhost:8000** +**Documentation**: ✅ **COMPLETE** diff --git a/api/ai_agent/README.md b/api/ai_agent/README.md new file mode 100644 index 0000000..68a0d71 --- /dev/null +++ b/api/ai_agent/README.md @@ -0,0 +1,50 @@ +# AI Agent Resources + +This directory contains resources specifically designed to help AI agents understand and work with the USDA Vision Camera System. + +## Directory Structure + +### `/guides/` +Contains comprehensive guides for AI agents: +- `AI_AGENT_INSTRUCTIONS.md` - Specific instructions for AI agents working with this system +- `AI_INTEGRATION_GUIDE.md` - Guide for integrating AI capabilities with the camera system + +### `/examples/` +Contains practical examples and demonstrations: +- `demos/` - Python demo scripts showing various system capabilities +- `notebooks/` - Jupyter notebooks with interactive examples and tests + +### `/references/` +Contains API references and technical specifications: +- `api-endpoints.http` - HTTP API endpoint examples +- `api-tests.http` - API testing examples +- `streaming-api.http` - Streaming API examples +- `camera-api.types.ts` - TypeScript type definitions for the camera API + +## Key Learning Resources + +1. **System Architecture**: Review the main system structure in `/usda_vision_system/` +2. **Configuration**: Study `config.json` for system configuration options +3. **API Documentation**: Check `/docs/api/` for API specifications +4. **Feature Guides**: Review `/docs/features/` for feature-specific documentation +5. **Test Examples**: Examine `/tests/` for comprehensive test coverage + +## Quick Start for AI Agents + +1. Read `guides/AI_AGENT_INSTRUCTIONS.md` first +2. Review the demo scripts in `examples/demos/` +3. Study the API references in `references/` +4. Examine test files to understand expected behavior +5. 
Check configuration options in the root `config.json` + +## System Overview + +The USDA Vision Camera System is a multi-camera monitoring and recording system with: +- Real-time camera streaming +- MQTT-based automation +- Auto-recording capabilities +- RESTful API interface +- Web-based camera preview +- Comprehensive logging and monitoring + +For detailed system documentation, see the `/docs/` directory. diff --git a/api/ai_agent/examples/demos/cv_grab.py b/api/ai_agent/examples/demos/cv_grab.py new file mode 100644 index 0000000..e49ab8b --- /dev/null +++ b/api/ai_agent/examples/demos/cv_grab.py @@ -0,0 +1,95 @@ +#coding=utf-8 +import cv2 +import numpy as np +import mvsdk +import platform + +def main_loop(): + # Enumerate cameras + DevList = mvsdk.CameraEnumerateDevice() + nDev = len(DevList) + if nDev < 1: + print("No camera was found!") + return + + for i, DevInfo in enumerate(DevList): + print("{}: {} {}".format(i, DevInfo.GetFriendlyName(), DevInfo.GetPortType())) + i = 0 if nDev == 1 else int(input("Select camera: ")) + DevInfo = DevList[i] + print(DevInfo) + + # Open the camera + hCamera = 0 + try: + hCamera = mvsdk.CameraInit(DevInfo, -1, -1) + except mvsdk.CameraException as e: + print("CameraInit Failed({}): {}".format(e.error_code, e.message) ) + return + + # Get the camera capability description + cap = mvsdk.CameraGetCapability(hCamera) + + # Determine whether the camera is mono or color + monoCamera = (cap.sIspCapacity.bMonoSensor != 0) + + # For mono cameras, have the ISP output MONO data directly instead of expanding it to 24-bit R=G=B grayscale + if monoCamera: + mvsdk.CameraSetIspOutFormat(hCamera, mvsdk.CAMERA_MEDIA_TYPE_MONO8) + else: + mvsdk.CameraSetIspOutFormat(hCamera, mvsdk.CAMERA_MEDIA_TYPE_BGR8) + + # Switch the camera to continuous acquisition mode + mvsdk.CameraSetTriggerMode(hCamera, 0) + + # Manual exposure, 30 ms exposure time + mvsdk.CameraSetAeState(hCamera, 0) + mvsdk.CameraSetExposureTime(hCamera, 30 * 1000) + + # Start the SDK's internal frame-grabbing thread + mvsdk.CameraPlay(hCamera) + + # Calculate the required RGB buffer size; allocate for the camera's maximum resolution + FrameBufferSize = cap.sResolutionRange.iWidthMax * cap.sResolutionRange.iHeightMax * (1 if monoCamera else 3) + + # Allocate the RGB buffer to hold the image output by the ISP
+ # Note: the camera transfers RAW data to the PC, where the software ISP converts it to RGB (mono cameras need no format conversion, but the ISP performs other processing, so this buffer is still required) + pFrameBuffer = mvsdk.CameraAlignMalloc(FrameBufferSize, 16) + + while (cv2.waitKey(1) & 0xFF) != ord('q'): + # Grab one frame from the camera + try: + pRawData, FrameHead = mvsdk.CameraGetImageBuffer(hCamera, 200) + mvsdk.CameraImageProcess(hCamera, pRawData, pFrameBuffer, FrameHead) + mvsdk.CameraReleaseImageBuffer(hCamera, pRawData) + + # On Windows the image data arrives upside down, stored in BMP order; flip it vertically for OpenCV + # On Linux the image is output right side up, so no flip is needed + if platform.system() == "Windows": + mvsdk.CameraFlipFrameBuffer(pFrameBuffer, FrameHead, 1) + + # The image is now in pFrameBuffer: RGB data for color cameras, 8-bit grayscale for mono cameras + # Convert pFrameBuffer to an OpenCV image for further processing + frame_data = (mvsdk.c_ubyte * FrameHead.uBytes).from_address(pFrameBuffer) + frame = np.frombuffer(frame_data, dtype=np.uint8) + frame = frame.reshape((FrameHead.iHeight, FrameHead.iWidth, 1 if FrameHead.uiMediaType == mvsdk.CAMERA_MEDIA_TYPE_MONO8 else 3) ) + + frame = cv2.resize(frame, (640,480), interpolation = cv2.INTER_LINEAR) + cv2.imshow("Press q to end", frame) + + except mvsdk.CameraException as e: + if e.error_code != mvsdk.CAMERA_STATUS_TIME_OUT: + print("CameraGetImageBuffer failed({}): {}".format(e.error_code, e.message) ) + + # Close the camera + mvsdk.CameraUnInit(hCamera) + + # Free the frame buffer + mvsdk.CameraAlignFree(pFrameBuffer) + +def main(): + try: + main_loop() + finally: + cv2.destroyAllWindows() + +main() diff --git a/api/ai_agent/examples/demos/cv_grab2.py b/api/ai_agent/examples/demos/cv_grab2.py new file mode 100644 index 0000000..1d257cb --- /dev/null +++ b/api/ai_agent/examples/demos/cv_grab2.py @@ -0,0 +1,127 @@ +#coding=utf-8 +import cv2 +import numpy as np +import mvsdk +import platform + +class Camera(object): + def __init__(self, DevInfo): + super(Camera, self).__init__() + self.DevInfo = DevInfo + self.hCamera = 0 + self.cap = None + self.pFrameBuffer = 0 + + def open(self): + if self.hCamera > 0: + return True + + # Open the camera + hCamera = 0 + try: + hCamera = 
mvsdk.CameraInit(self.DevInfo, -1, -1) + except mvsdk.CameraException as e: + print("CameraInit Failed({}): {}".format(e.error_code, e.message) ) + return False + + # Get the camera capability description + cap = mvsdk.CameraGetCapability(hCamera) + + # Determine whether the camera is mono or color + monoCamera = (cap.sIspCapacity.bMonoSensor != 0) + + # For mono cameras, have the ISP output MONO data directly instead of expanding it to 24-bit R=G=B grayscale + if monoCamera: + mvsdk.CameraSetIspOutFormat(hCamera, mvsdk.CAMERA_MEDIA_TYPE_MONO8) + else: + mvsdk.CameraSetIspOutFormat(hCamera, mvsdk.CAMERA_MEDIA_TYPE_BGR8) + + # Calculate the required RGB buffer size; allocate for the camera's maximum resolution + FrameBufferSize = cap.sResolutionRange.iWidthMax * cap.sResolutionRange.iHeightMax * (1 if monoCamera else 3) + + # Allocate the RGB buffer to hold the image output by the ISP + # Note: the camera transfers RAW data to the PC, where the software ISP converts it to RGB (mono cameras need no format conversion, but the ISP performs other processing, so this buffer is still required) + pFrameBuffer = mvsdk.CameraAlignMalloc(FrameBufferSize, 16) + + # Switch the camera to continuous acquisition mode + mvsdk.CameraSetTriggerMode(hCamera, 0) + + # Manual exposure, 30 ms exposure time + mvsdk.CameraSetAeState(hCamera, 0) + mvsdk.CameraSetExposureTime(hCamera, 30 * 1000) + + # Start the SDK's internal frame-grabbing thread + mvsdk.CameraPlay(hCamera) + + self.hCamera = hCamera + self.pFrameBuffer = pFrameBuffer + self.cap = cap + return True + + def close(self): + if self.hCamera > 0: + mvsdk.CameraUnInit(self.hCamera) + self.hCamera = 0 + + mvsdk.CameraAlignFree(self.pFrameBuffer) + self.pFrameBuffer = 0 + + def grab(self): + # Grab one frame from the camera + hCamera = self.hCamera + pFrameBuffer = self.pFrameBuffer + try: + pRawData, FrameHead = mvsdk.CameraGetImageBuffer(hCamera, 200) + mvsdk.CameraImageProcess(hCamera, pRawData, pFrameBuffer, FrameHead) + mvsdk.CameraReleaseImageBuffer(hCamera, pRawData) + + # On Windows the image data arrives upside down, stored in BMP order; flip it vertically for OpenCV + # On Linux the image is output right side up, so no flip is needed + if platform.system() == "Windows": + mvsdk.CameraFlipFrameBuffer(pFrameBuffer, FrameHead, 1) + + # The image is now in pFrameBuffer: RGB data for color cameras, 8-bit grayscale for mono cameras + # Convert pFrameBuffer to an OpenCV image for further processing + frame_data = (mvsdk.c_ubyte * FrameHead.uBytes).from_address(pFrameBuffer) + frame = 
np.frombuffer(frame_data, dtype=np.uint8) + frame = frame.reshape((FrameHead.iHeight, FrameHead.iWidth, 1 if FrameHead.uiMediaType == mvsdk.CAMERA_MEDIA_TYPE_MONO8 else 3) ) + return frame + except mvsdk.CameraException as e: + if e.error_code != mvsdk.CAMERA_STATUS_TIME_OUT: + print("CameraGetImageBuffer failed({}): {}".format(e.error_code, e.message) ) + return None + +def main_loop(): + # Enumerate cameras + DevList = mvsdk.CameraEnumerateDevice() + nDev = len(DevList) + if nDev < 1: + print("No camera was found!") + return + + for i, DevInfo in enumerate(DevList): + print("{}: {} {}".format(i, DevInfo.GetFriendlyName(), DevInfo.GetPortType())) + + cams = [] + # raw_input() was Python 2 only; use input() on Python 3 + for i in map(int, input("Select cameras: ").split()): + cam = Camera(DevList[i]) + if cam.open(): + cams.append(cam) + + while (cv2.waitKey(1) & 0xFF) != ord('q'): + for cam in cams: + frame = cam.grab() + if frame is not None: + frame = cv2.resize(frame, (640,480), interpolation = cv2.INTER_LINEAR) + cv2.imshow("{} Press q to end".format(cam.DevInfo.GetFriendlyName()), frame) + + for cam in cams: + cam.close() + +def main(): + try: + main_loop() + finally: + cv2.destroyAllWindows() + +main() diff --git a/api/ai_agent/examples/demos/cv_grab_callback.py b/api/ai_agent/examples/demos/cv_grab_callback.py new file mode 100644 index 0000000..137868d --- /dev/null +++ b/api/ai_agent/examples/demos/cv_grab_callback.py @@ -0,0 +1,110 @@ +#coding=utf-8 +import cv2 +import numpy as np +import mvsdk +import time +import platform + +class App(object): + def __init__(self): + super(App, self).__init__() + self.pFrameBuffer = 0 + self.quit = False + + def main(self): + # Enumerate cameras + DevList = mvsdk.CameraEnumerateDevice() + nDev = len(DevList) + if nDev < 1: + print("No camera was found!") + return + + for i, DevInfo in enumerate(DevList): + print("{}: {} {}".format(i, DevInfo.GetFriendlyName(), DevInfo.GetPortType())) + i = 0 if nDev == 1 else int(input("Select camera: ")) + DevInfo = DevList[i] + print(DevInfo) + + # 
Open the camera + hCamera = 0 + try: + hCamera = mvsdk.CameraInit(DevInfo, -1, -1) + except mvsdk.CameraException as e: + print("CameraInit Failed({}): {}".format(e.error_code, e.message) ) + return + + # Get the camera capability description + cap = mvsdk.CameraGetCapability(hCamera) + + # Determine whether the camera is mono or color + monoCamera = (cap.sIspCapacity.bMonoSensor != 0) + + # For mono cameras, have the ISP output MONO data directly instead of expanding it to 24-bit R=G=B grayscale + if monoCamera: + mvsdk.CameraSetIspOutFormat(hCamera, mvsdk.CAMERA_MEDIA_TYPE_MONO8) + else: + mvsdk.CameraSetIspOutFormat(hCamera, mvsdk.CAMERA_MEDIA_TYPE_BGR8) + + # Switch the camera to continuous acquisition mode + mvsdk.CameraSetTriggerMode(hCamera, 0) + + # Manual exposure, 30 ms exposure time + mvsdk.CameraSetAeState(hCamera, 0) + mvsdk.CameraSetExposureTime(hCamera, 30 * 1000) + + # Start the SDK's internal frame-grabbing thread + mvsdk.CameraPlay(hCamera) + + # Calculate the required RGB buffer size; allocate for the camera's maximum resolution + FrameBufferSize = cap.sResolutionRange.iWidthMax * cap.sResolutionRange.iHeightMax * (1 if monoCamera else 3) + + # Allocate the RGB buffer to hold the image output by the ISP + # Note: the camera transfers RAW data to the PC, where the software ISP converts it to RGB (mono cameras need no format conversion, but the ISP performs other processing, so this buffer is still required) + self.pFrameBuffer = mvsdk.CameraAlignMalloc(FrameBufferSize, 16) + + # Set the frame capture callback + self.quit = False + mvsdk.CameraSetCallbackFunction(hCamera, self.GrabCallback, 0) + + # Wait for exit + while not self.quit: + time.sleep(0.1) + + # Close the camera + mvsdk.CameraUnInit(hCamera) + + # Free the frame buffer + mvsdk.CameraAlignFree(self.pFrameBuffer) + + @mvsdk.method(mvsdk.CAMERA_SNAP_PROC) + def GrabCallback(self, hCamera, pRawData, pFrameHead, pContext): + FrameHead = pFrameHead[0] + pFrameBuffer = self.pFrameBuffer + + mvsdk.CameraImageProcess(hCamera, pRawData, pFrameBuffer, FrameHead) + mvsdk.CameraReleaseImageBuffer(hCamera, pRawData) + + # On Windows the image data arrives upside down, stored in BMP order; flip it vertically for OpenCV + # On Linux the image is output right side up, so no flip is needed + if platform.system() == "Windows": + mvsdk.CameraFlipFrameBuffer(pFrameBuffer, FrameHead, 1) + + # The image is now in pFrameBuffer: RGB data for color cameras, 8-bit grayscale for mono cameras + # Convert pFrameBuffer to an OpenCV image for further processing + frame_data = (mvsdk.c_ubyte * FrameHead.uBytes).from_address(pFrameBuffer) + 
frame = np.frombuffer(frame_data, dtype=np.uint8) + frame = frame.reshape((FrameHead.iHeight, FrameHead.iWidth, 1 if FrameHead.uiMediaType == mvsdk.CAMERA_MEDIA_TYPE_MONO8 else 3) ) + + frame = cv2.resize(frame, (640,480), interpolation = cv2.INTER_LINEAR) + cv2.imshow("Press q to end", frame) + if (cv2.waitKey(1) & 0xFF) == ord('q'): + self.quit = True + +def main(): + try: + app = App() + app.main() + finally: + cv2.destroyAllWindows() + +main() diff --git a/api/ai_agent/examples/demos/demo_mqtt_console.py b/api/ai_agent/examples/demos/demo_mqtt_console.py new file mode 100644 index 0000000..b31670d --- /dev/null +++ b/api/ai_agent/examples/demos/demo_mqtt_console.py @@ -0,0 +1,117 @@ +#!/usr/bin/env python3 +""" +Demo script to show MQTT console logging in action. + +This script demonstrates the enhanced MQTT logging by starting just the MQTT client +and showing the console output. +""" + +import sys +import os +import time +import signal +import logging + +# Add the current directory to Python path +sys.path.insert(0, os.path.dirname(os.path.abspath(__file__))) + +from usda_vision_system.core.config import Config +from usda_vision_system.core.state_manager import StateManager +from usda_vision_system.core.events import EventSystem +from usda_vision_system.core.logging_config import setup_logging +from usda_vision_system.mqtt.client import MQTTClient + +def signal_handler(signum, frame): + """Handle Ctrl+C gracefully""" + print("\n🛑 Stopping MQTT demo...") + sys.exit(0) + +def main(): + """Main demo function""" + print("🚀 MQTT Console Logging Demo") + print("=" * 50) + print() + print("This demo shows enhanced MQTT console logging.") + print("You'll see colorful console output for MQTT events:") + print(" 🔗 Connection status") + print(" 📋 Topic subscriptions") + print(" 📡 Incoming messages") + print(" ⚠️ Disconnections and errors") + print() + print("Press Ctrl+C to stop the demo.") + print("=" * 50) + + # Setup signal handler + signal.signal(signal.SIGINT, 
signal_handler) + + try: + # Setup logging with INFO level for console visibility + setup_logging(log_level="INFO", log_file="mqtt_demo.log") + + # Load configuration + config = Config() + + # Initialize components + state_manager = StateManager() + event_system = EventSystem() + + # Create MQTT client + mqtt_client = MQTTClient(config, state_manager, event_system) + + print(f"\n🔧 Configuration:") + print(f" Broker: {config.mqtt.broker_host}:{config.mqtt.broker_port}") + print(f" Topics: {list(config.mqtt.topics.values())}") + print() + + # Start MQTT client + print("🚀 Starting MQTT client...") + if mqtt_client.start(): + print("✅ MQTT client started successfully!") + print("\n👀 Watching for MQTT messages... (Press Ctrl+C to stop)") + print("-" * 50) + + # Keep running and show periodic status + start_time = time.time() + last_status_time = start_time + + while True: + time.sleep(1) + + # Show status every 30 seconds + current_time = time.time() + if current_time - last_status_time >= 30: + status = mqtt_client.get_status() + uptime = current_time - start_time + print(f"\n📊 Status Update (uptime: {uptime:.0f}s):") + print(f" Connected: {status['connected']}") + print(f" Messages: {status['message_count']}") + print(f" Errors: {status['error_count']}") + if status['last_message_time']: + print(f" Last Message: {status['last_message_time']}") + print("-" * 50) + last_status_time = current_time + + else: + print("❌ Failed to start MQTT client") + print(" Check your MQTT broker configuration in config.json") + print(" Make sure the broker is running and accessible") + + except KeyboardInterrupt: + print("\n🛑 Demo stopped by user") + except Exception as e: + print(f"\n❌ Error: {e}") + finally: + # Cleanup + try: + if 'mqtt_client' in locals(): + mqtt_client.stop() + print("🔌 MQTT client stopped") + except: + pass + + print("\n👋 Demo completed!") + print("\n💡 To run the full system with this enhanced logging:") + print(" python main.py") + +if __name__ == "__main__": + 
main() diff --git a/api/ai_agent/examples/demos/grab.py b/api/ai_agent/examples/demos/grab.py new file mode 100644 index 0000000..59bfe2c --- /dev/null +++ b/api/ai_agent/examples/demos/grab.py @@ -0,0 +1,111 @@ +#coding=utf-8 +import mvsdk + +def main(): + # Enumerate cameras + DevList = mvsdk.CameraEnumerateDevice() + nDev = len(DevList) + if nDev < 1: + print("No camera was found!") + return + + for i, DevInfo in enumerate(DevList): + print("{}: {} {}".format(i, DevInfo.GetFriendlyName(), DevInfo.GetPortType())) + i = 0 if nDev == 1 else int(input("Select camera: ")) + DevInfo = DevList[i] + print(DevInfo) + + # Open the camera + hCamera = 0 + try: + hCamera = mvsdk.CameraInit(DevInfo, -1, -1) + except mvsdk.CameraException as e: + print("CameraInit Failed({}): {}".format(e.error_code, e.message) ) + return + + # Get the camera capability description + cap = mvsdk.CameraGetCapability(hCamera) + PrintCapbility(cap) + + # Determine whether the camera is mono or color + monoCamera = (cap.sIspCapacity.bMonoSensor != 0) + + # For mono cameras, have the ISP output MONO data directly instead of expanding it to 24-bit R=G=B grayscale + if monoCamera: + mvsdk.CameraSetIspOutFormat(hCamera, mvsdk.CAMERA_MEDIA_TYPE_MONO8) + + # Switch the camera to continuous acquisition mode + mvsdk.CameraSetTriggerMode(hCamera, 0) + + # Manual exposure, 30 ms exposure time + mvsdk.CameraSetAeState(hCamera, 0) + mvsdk.CameraSetExposureTime(hCamera, 30 * 1000) + + # Start the SDK's internal frame-grabbing thread + mvsdk.CameraPlay(hCamera) + + # Calculate the required RGB buffer size; allocate for the camera's maximum resolution + FrameBufferSize = cap.sResolutionRange.iWidthMax * cap.sResolutionRange.iHeightMax * (1 if monoCamera else 3) + + # Allocate the RGB buffer to hold the image output by the ISP + # Note: the camera transfers RAW data to the PC, where the software ISP converts it to RGB (mono cameras need no format conversion, but the ISP performs other processing, so this buffer is still required) + pFrameBuffer = mvsdk.CameraAlignMalloc(FrameBufferSize, 16) + + # Grab one frame from the camera + try: + pRawData, FrameHead = mvsdk.CameraGetImageBuffer(hCamera, 2000) + mvsdk.CameraImageProcess(hCamera, pRawData, pFrameBuffer, FrameHead) + mvsdk.CameraReleaseImageBuffer(hCamera, pRawData) + + # The image is now in pFrameBuffer: RGB data for color cameras, 8-bit grayscale for mono cameras + # In this example we simply save the image to a file on disk + status = mvsdk.CameraSaveImage(hCamera, "./grab.bmp", 
pFrameBuffer, FrameHead, mvsdk.FILE_BMP, 100) + if status == mvsdk.CAMERA_STATUS_SUCCESS: + print("Image saved successfully. image_size = {}X{}".format(FrameHead.iWidth, FrameHead.iHeight) ) + else: + print("Failed to save image. err={}".format(status) ) + except mvsdk.CameraException as e: + print("CameraGetImageBuffer failed({}): {}".format(e.error_code, e.message) ) + + # Close the camera + mvsdk.CameraUnInit(hCamera) + + # Free the frame buffer + mvsdk.CameraAlignFree(pFrameBuffer) + +def PrintCapbility(cap): + for i in range(cap.iTriggerDesc): + desc = cap.pTriggerDesc[i] + print("{}: {}".format(desc.iIndex, desc.GetDescription()) ) + for i in range(cap.iImageSizeDesc): + desc = cap.pImageSizeDesc[i] + print("{}: {}".format(desc.iIndex, desc.GetDescription()) ) + for i in range(cap.iClrTempDesc): + desc = cap.pClrTempDesc[i] + print("{}: {}".format(desc.iIndex, desc.GetDescription()) ) + for i in range(cap.iMediaTypeDesc): + desc = cap.pMediaTypeDesc[i] + print("{}: {}".format(desc.iIndex, desc.GetDescription()) ) + for i in range(cap.iFrameSpeedDesc): + desc = cap.pFrameSpeedDesc[i] + print("{}: {}".format(desc.iIndex, desc.GetDescription()) ) + for i in range(cap.iPackLenDesc): + desc = cap.pPackLenDesc[i] + print("{}: {}".format(desc.iIndex, desc.GetDescription()) ) + for i in range(cap.iPresetLut): + desc = cap.pPresetLutDesc[i] + print("{}: {}".format(desc.iIndex, desc.GetDescription()) ) + for i in range(cap.iAeAlmSwDesc): + desc = cap.pAeAlmSwDesc[i] + print("{}: {}".format(desc.iIndex, desc.GetDescription()) ) + for i in range(cap.iAeAlmHdDesc): + desc = cap.pAeAlmHdDesc[i] + print("{}: {}".format(desc.iIndex, desc.GetDescription()) ) + for i in range(cap.iBayerDecAlmSwDesc): + desc = cap.pBayerDecAlmSwDesc[i] + print("{}: {}".format(desc.iIndex, desc.GetDescription()) ) + for i in range(cap.iBayerDecAlmHdDesc): + desc = cap.pBayerDecAlmHdDesc[i] + print("{}: {}".format(desc.iIndex, desc.GetDescription()) ) + +main() diff --git a/api/ai_agent/examples/demos/mqtt_publisher_test.py 
b/api/ai_agent/examples/demos/mqtt_publisher_test.py new file mode 100644 index 0000000..a9b3ac6 --- /dev/null +++ b/api/ai_agent/examples/demos/mqtt_publisher_test.py @@ -0,0 +1,234 @@ +#!/usr/bin/env python3 +""" +MQTT Publisher Test Script for USDA Vision Camera System + +This script allows you to manually publish test messages to the MQTT topics +to simulate machine state changes for testing purposes. + +Usage: + python mqtt_publisher_test.py + +The script provides an interactive menu to: +1. Send 'on' state to vibratory conveyor +2. Send 'off' state to vibratory conveyor +3. Send 'on' state to blower separator +4. Send 'off' state to blower separator +5. Send custom message +""" + +import paho.mqtt.client as mqtt +import time +import sys +from datetime import datetime + +# MQTT Configuration (matching your system config) +MQTT_BROKER_HOST = "192.168.1.110" +MQTT_BROKER_PORT = 1883 +MQTT_USERNAME = None # Set if your broker requires authentication +MQTT_PASSWORD = None # Set if your broker requires authentication + +# Topics (from your config.json) +MQTT_TOPICS = { + "vibratory_conveyor": "vision/vibratory_conveyor/state", + "blower_separator": "vision/blower_separator/state" +} + +class MQTTPublisher: + def __init__(self): + self.client = None + self.connected = False + + def setup_client(self): + """Setup MQTT client""" + try: + self.client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION1) + self.client.on_connect = self.on_connect + self.client.on_disconnect = self.on_disconnect + self.client.on_publish = self.on_publish + + if MQTT_USERNAME and MQTT_PASSWORD: + self.client.username_pw_set(MQTT_USERNAME, MQTT_PASSWORD) + + return True + except Exception as e: + print(f"❌ Error setting up MQTT client: {e}") + return False + + def connect(self): + """Connect to MQTT broker""" + try: + print(f"🔗 Connecting to MQTT broker at {MQTT_BROKER_HOST}:{MQTT_BROKER_PORT}...") + self.client.connect(MQTT_BROKER_HOST, MQTT_BROKER_PORT, 60) + self.client.loop_start() # Start 
background loop + + # Wait for connection + timeout = 10 + start_time = time.time() + while not self.connected and (time.time() - start_time) < timeout: + time.sleep(0.1) + + return self.connected + + except Exception as e: + print(f"❌ Failed to connect to MQTT broker: {e}") + return False + + def disconnect(self): + """Disconnect from MQTT broker""" + if self.client: + self.client.loop_stop() + self.client.disconnect() + + def on_connect(self, client, userdata, flags, rc): + """Callback when client connects""" + if rc == 0: + self.connected = True + print(f"✅ Connected to MQTT broker successfully!") + else: + self.connected = False + print(f"❌ Connection failed with return code {rc}") + + def on_disconnect(self, client, userdata, rc): + """Callback when client disconnects""" + self.connected = False + print(f"🔌 Disconnected from MQTT broker") + + def on_publish(self, client, userdata, mid): + """Callback when message is published""" + print(f"📤 Message published successfully (mid: {mid})") + + def publish_message(self, topic, payload): + """Publish a message to a topic""" + if not self.connected: + print("❌ Not connected to MQTT broker") + return False + + try: + timestamp = datetime.now().strftime('%H:%M:%S.%f')[:-3] + print(f"📡 [{timestamp}] Publishing message:") + print(f" 📍 Topic: {topic}") + print(f" 📄 Payload: '{payload}'") + + result = self.client.publish(topic, payload) + + if result.rc == mqtt.MQTT_ERR_SUCCESS: + print(f"✅ Message queued for publishing") + return True + else: + print(f"❌ Failed to publish message (error: {result.rc})") + return False + + except Exception as e: + print(f"❌ Error publishing message: {e}") + return False + + def show_menu(self): + """Show interactive menu""" + print("\n" + "=" * 50) + print("🎛️ MQTT PUBLISHER TEST MENU") + print("=" * 50) + print("1. Send 'on' to vibratory conveyor") + print("2. Send 'off' to vibratory conveyor") + print("3. Send 'on' to blower separator") + print("4. 
Send 'off' to blower separator") + print("5. Send custom message") + print("6. Show current topics") + print("0. Exit") + print("-" * 50) + + def handle_menu_choice(self, choice): + """Handle menu selection""" + if choice == "1": + self.publish_message(MQTT_TOPICS["vibratory_conveyor"], "on") + elif choice == "2": + self.publish_message(MQTT_TOPICS["vibratory_conveyor"], "off") + elif choice == "3": + self.publish_message(MQTT_TOPICS["blower_separator"], "on") + elif choice == "4": + self.publish_message(MQTT_TOPICS["blower_separator"], "off") + elif choice == "5": + self.custom_message() + elif choice == "6": + self.show_topics() + elif choice == "0": + return False + else: + print("❌ Invalid choice. Please try again.") + + return True + + def custom_message(self): + """Send custom message""" + print("\n📝 Custom Message") + print("Available topics:") + for i, (name, topic) in enumerate(MQTT_TOPICS.items(), 1): + print(f" {i}. {name}: {topic}") + + try: + topic_choice = input("Select topic (1-2): ").strip() + if topic_choice == "1": + topic = MQTT_TOPICS["vibratory_conveyor"] + elif topic_choice == "2": + topic = MQTT_TOPICS["blower_separator"] + else: + print("❌ Invalid topic choice") + return + + payload = input("Enter message payload: ").strip() + if payload: + self.publish_message(topic, payload) + else: + print("❌ Empty payload, message not sent") + + except KeyboardInterrupt: + print("\n❌ Cancelled") + + def show_topics(self): + """Show configured topics""" + print("\n📋 Configured Topics:") + for name, topic in MQTT_TOPICS.items(): + print(f" 🏭 {name}: {topic}") + + def run(self): + """Main interactive loop""" + print("📤 MQTT Publisher Test") + print("=" * 50) + print(f"🎯 Broker: {MQTT_BROKER_HOST}:{MQTT_BROKER_PORT}") + + if not self.setup_client(): + return False + + if not self.connect(): + print("❌ Failed to connect to MQTT broker") + return False + + try: + while True: + self.show_menu() + choice = input("Enter your choice: ").strip() + + if not 
self.handle_menu_choice(choice): + break + + except KeyboardInterrupt: + print("\n\n🛑 Interrupted by user") + except Exception as e: + print(f"\n❌ Error: {e}") + finally: + self.disconnect() + print("👋 Goodbye!") + + return True + +def main(): + """Main function""" + publisher = MQTTPublisher() + + try: + publisher.run() + except Exception as e: + print(f"❌ Unexpected error: {e}") + sys.exit(1) + +if __name__ == "__main__": + main() diff --git a/api/ai_agent/examples/demos/mqtt_test.py b/api/ai_agent/examples/demos/mqtt_test.py new file mode 100644 index 0000000..2e50796 --- /dev/null +++ b/api/ai_agent/examples/demos/mqtt_test.py @@ -0,0 +1,242 @@ +#!/usr/bin/env python3 +""" +MQTT Test Script for USDA Vision Camera System + +This script tests MQTT message reception by connecting to the broker +and listening for messages on the configured topics. + +Usage: + python mqtt_test.py + +The script will: +1. Connect to the MQTT broker +2. Subscribe to all configured topics +3. Display received messages with timestamps +4. 
Show connection status and statistics +""" + +import paho.mqtt.client as mqtt +import time +import json +import signal +import sys +from datetime import datetime +from typing import Dict, Optional + +# MQTT Configuration (matching your system config) +MQTT_BROKER_HOST = "192.168.1.110" +MQTT_BROKER_PORT = 1883 +MQTT_USERNAME = None # Set if your broker requires authentication +MQTT_PASSWORD = None # Set if your broker requires authentication + +# Topics to monitor (from your config.json) +MQTT_TOPICS = { + "vibratory_conveyor": "vision/vibratory_conveyor/state", + "blower_separator": "vision/blower_separator/state" +} + +class MQTTTester: + def __init__(self): + self.client: Optional[mqtt.Client] = None + self.connected = False + self.message_count = 0 + self.start_time = None + self.last_message_time = None + self.received_messages = [] + + def setup_client(self): + """Setup MQTT client with callbacks""" + try: + # Create MQTT client + self.client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION1) + + # Set callbacks + self.client.on_connect = self.on_connect + self.client.on_disconnect = self.on_disconnect + self.client.on_message = self.on_message + self.client.on_subscribe = self.on_subscribe + + # Set authentication if provided + if MQTT_USERNAME and MQTT_PASSWORD: + self.client.username_pw_set(MQTT_USERNAME, MQTT_PASSWORD) + print(f"🔐 Using authentication: {MQTT_USERNAME}") + + return True + + except Exception as e: + print(f"❌ Error setting up MQTT client: {e}") + return False + + def connect(self): + """Connect to MQTT broker""" + try: + print(f"🔗 Connecting to MQTT broker at {MQTT_BROKER_HOST}:{MQTT_BROKER_PORT}...") + self.client.connect(MQTT_BROKER_HOST, MQTT_BROKER_PORT, 60) + return True + + except Exception as e: + print(f"❌ Failed to connect to MQTT broker: {e}") + return False + + def on_connect(self, client, userdata, flags, rc): + """Callback when client connects to broker""" + if rc == 0: + self.connected = True + self.start_time = datetime.now() + 
print(f"✅ Successfully connected to MQTT broker!") + print(f"📅 Connection time: {self.start_time.strftime('%Y-%m-%d %H:%M:%S')}") + print() + + # Subscribe to all topics + print("📋 Subscribing to topics:") + for machine_name, topic in MQTT_TOPICS.items(): + result, mid = client.subscribe(topic) + if result == mqtt.MQTT_ERR_SUCCESS: + print(f" ✅ {machine_name}: {topic}") + else: + print(f" ❌ {machine_name}: {topic} (error: {result})") + + print() + print("🎧 Listening for MQTT messages...") + print(" (Manually turn machines on/off to trigger messages)") + print(" (Press Ctrl+C to stop)") + print("-" * 60) + + else: + self.connected = False + print(f"❌ Connection failed with return code {rc}") + print(" Return codes:") + print(" 0: Connection successful") + print(" 1: Connection refused - incorrect protocol version") + print(" 2: Connection refused - invalid client identifier") + print(" 3: Connection refused - server unavailable") + print(" 4: Connection refused - bad username or password") + print(" 5: Connection refused - not authorised") + + def on_disconnect(self, client, userdata, rc): + """Callback when client disconnects from broker""" + self.connected = False + if rc != 0: + print(f"🔌 Unexpected disconnection from MQTT broker (code: {rc})") + else: + print(f"🔌 Disconnected from MQTT broker") + + def on_subscribe(self, client, userdata, mid, granted_qos): + """Callback when subscription is confirmed""" + print(f"📋 Subscription confirmed (mid: {mid}, QoS: {granted_qos})") + + def on_message(self, client, userdata, msg): + """Callback when a message is received""" + try: + # Decode message + topic = msg.topic + payload = msg.payload.decode("utf-8").strip() + timestamp = datetime.now() + + # Update statistics + self.message_count += 1 + self.last_message_time = timestamp + + # Find machine name + machine_name = "unknown" + for name, configured_topic in MQTT_TOPICS.items(): + if topic == configured_topic: + machine_name = name + break + + # Store message + 
message_data = { + "timestamp": timestamp, + "topic": topic, + "machine": machine_name, + "payload": payload, + "message_number": self.message_count + } + self.received_messages.append(message_data) + + # Display message + time_str = timestamp.strftime('%H:%M:%S.%f')[:-3] # Include milliseconds + print(f"📡 [{time_str}] Message #{self.message_count}") + print(f" 🏭 Machine: {machine_name}") + print(f" 📍 Topic: {topic}") + print(f" 📄 Payload: '{payload}'") + print(f" 📊 Total messages: {self.message_count}") + print("-" * 60) + + except Exception as e: + print(f"❌ Error processing message: {e}") + + def show_statistics(self): + """Show connection and message statistics""" + print("\n" + "=" * 60) + print("📊 MQTT TEST STATISTICS") + print("=" * 60) + + if self.start_time: + runtime = datetime.now() - self.start_time + print(f"⏱️ Runtime: {runtime}") + + print(f"🔗 Connected: {'Yes' if self.connected else 'No'}") + print(f"📡 Messages received: {self.message_count}") + + if self.last_message_time: + print(f"🕐 Last message: {self.last_message_time.strftime('%Y-%m-%d %H:%M:%S')}") + + if self.received_messages: + print(f"\n📋 Message Summary:") + for msg in self.received_messages[-5:]: # Show last 5 messages + time_str = msg["timestamp"].strftime('%H:%M:%S') + print(f" [{time_str}] {msg['machine']}: {msg['payload']}") + + print("=" * 60) + + def run(self): + """Main test loop""" + print("🧪 MQTT Message Reception Test") + print("=" * 60) + print(f"🎯 Broker: {MQTT_BROKER_HOST}:{MQTT_BROKER_PORT}") + print(f"📋 Topics: {list(MQTT_TOPICS.values())}") + print() + + # Setup signal handler for graceful shutdown + def signal_handler(sig, frame): + print(f"\n\n🛑 Received interrupt signal, shutting down...") + self.show_statistics() + if self.client and self.connected: + self.client.disconnect() + sys.exit(0) + + signal.signal(signal.SIGINT, signal_handler) + + # Setup and connect + if not self.setup_client(): + return False + + if not self.connect(): + return False + + # Start the 
client loop + try: + self.client.loop_forever() + except KeyboardInterrupt: + pass + except Exception as e: + print(f"❌ Error in main loop: {e}") + + return True + +def main(): + """Main function""" + tester = MQTTTester() + + try: + success = tester.run() + if not success: + print("❌ Test failed") + sys.exit(1) + except Exception as e: + print(f"❌ Unexpected error: {e}") + sys.exit(1) + +if __name__ == "__main__": + main() diff --git a/api/ai_agent/examples/demos/readme.txt b/api/ai_agent/examples/demos/readme.txt new file mode 100644 index 0000000..749c79e --- /dev/null +++ b/api/ai_agent/examples/demos/readme.txt @@ -0,0 +1,4 @@ +mvsdk.py: camera SDK interface library (see the docs in the WindowsSDK install directory\Document\MVSDK_API_CHS.chm) + +grab.py: captures an image with the SDK and saves it to a file on disk +cv_grab.py: captures images with the SDK and converts them to OpenCV image format diff --git a/api/ai_agent/examples/notebooks/camera_status_test.ipynb b/api/ai_agent/examples/notebooks/camera_status_test.ipynb new file mode 100644 index 0000000..26662fa --- /dev/null +++ b/api/ai_agent/examples/notebooks/camera_status_test.ipynb @@ -0,0 +1,607 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "intro", + "metadata": {}, + "source": [ + "# Camera Status and Availability Testing\n", + "\n", + "This notebook tests various methods to check camera status and availability before attempting to capture images.\n", + "\n", + "## Key Functions to Test:\n", + "- `CameraIsOpened()` - Check if camera is already opened by another process\n", + "- `CameraInit()` - Try to initialize and catch specific error codes\n", + "- `CameraGetImageBuffer()` - Test actual image capture with timeout\n", + "- Error code analysis for different failure scenarios" + ] + }, + { + "cell_type": "code", + "execution_count": 26, + "id": "imports", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Libraries imported successfully!\n", + "Platform: Linux\n" + ] + } + ], + "source": [ + "# Import required libraries\n", + "import os\n", + "import sys\n", + "import 
time\n", + "import numpy as np\n", + "import cv2\n", + "import platform\n", + "from datetime import datetime\n", + "\n", + "# Add the python demo directory to path to import mvsdk\n", + "sys.path.append('../python demo')\n", + "import mvsdk\n", + "\n", + "print(\"Libraries imported successfully!\")\n", + "print(f\"Platform: {platform.system()}\")" + ] + }, + { + "cell_type": "code", + "execution_count": 27, + "id": "error-codes", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Relevant Camera Status Error Codes:\n", + "========================================\n", + "CAMERA_STATUS_SUCCESS: 0\n", + "CAMERA_STATUS_DEVICE_IS_OPENED: -18\n", + "CAMERA_STATUS_DEVICE_IS_CLOSED: -19\n", + "CAMERA_STATUS_ACCESS_DENY: -45\n", + "CAMERA_STATUS_DEVICE_LOST: -38\n", + "CAMERA_STATUS_TIME_OUT: -12\n", + "CAMERA_STATUS_BUSY: -28\n", + "CAMERA_STATUS_NO_DEVICE_FOUND: -16\n" + ] + } + ], + "source": [ + "# Let's examine the relevant error codes from the SDK\n", + "print(\"Relevant Camera Status Error Codes:\")\n", + "print(\"=\" * 40)\n", + "print(f\"CAMERA_STATUS_SUCCESS: {mvsdk.CAMERA_STATUS_SUCCESS}\")\n", + "print(f\"CAMERA_STATUS_DEVICE_IS_OPENED: {mvsdk.CAMERA_STATUS_DEVICE_IS_OPENED}\")\n", + "print(f\"CAMERA_STATUS_DEVICE_IS_CLOSED: {mvsdk.CAMERA_STATUS_DEVICE_IS_CLOSED}\")\n", + "print(f\"CAMERA_STATUS_ACCESS_DENY: {mvsdk.CAMERA_STATUS_ACCESS_DENY}\")\n", + "print(f\"CAMERA_STATUS_DEVICE_LOST: {mvsdk.CAMERA_STATUS_DEVICE_LOST}\")\n", + "print(f\"CAMERA_STATUS_TIME_OUT: {mvsdk.CAMERA_STATUS_TIME_OUT}\")\n", + "print(f\"CAMERA_STATUS_BUSY: {mvsdk.CAMERA_STATUS_BUSY}\")\n", + "print(f\"CAMERA_STATUS_NO_DEVICE_FOUND: {mvsdk.CAMERA_STATUS_NO_DEVICE_FOUND}\")" + ] + }, + { + "cell_type": "code", + "execution_count": 28, + "id": "status-functions", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Camera Availability Check\n", + "==============================\n", + "✓ SDK 
initialized successfully\n", + "✓ Found 2 camera(s)\n", + " 0: Blower-Yield-Cam (192.168.1.165-192.168.1.54)\n", + " 1: Cracker-Cam (192.168.1.167-192.168.1.54)\n", + "\n", + "Testing camera 0: Blower-Yield-Cam\n", + "✓ Camera is available (not opened by another process)\n", + "✓ Camera initialized successfully\n", + "✓ Camera closed after testing\n", + "\n", + "Testing camera 1: Cracker-Cam\n", + "✓ Camera is available (not opened by another process)\n", + "✓ Camera initialized successfully\n", + "✓ Camera closed after testing\n", + "\n", + "Results for 2 cameras:\n", + " Camera 0: AVAILABLE\n", + " Camera 1: AVAILABLE\n" + ] + } + ], + "source": [ + "def check_camera_availability():\n", + " \"\"\"\n", + " Comprehensive camera availability check\n", + " \"\"\"\n", + " print(\"Camera Availability Check\")\n", + " print(\"=\" * 30)\n", + " \n", + " # Step 1: Initialize SDK\n", + " try:\n", + " mvsdk.CameraSdkInit(1)\n", + " print(\"✓ SDK initialized successfully\")\n", + " except Exception as e:\n", + " print(f\"✗ SDK initialization failed: {e}\")\n", + " return None, \"SDK_INIT_FAILED\"\n", + " \n", + " # Step 2: Enumerate cameras\n", + " try:\n", + " DevList = mvsdk.CameraEnumerateDevice()\n", + " nDev = len(DevList)\n", + " print(f\"✓ Found {nDev} camera(s)\")\n", + " \n", + " if nDev < 1:\n", + " print(\"✗ No cameras detected\")\n", + " return None, \"NO_CAMERAS\"\n", + " \n", + " for i, DevInfo in enumerate(DevList):\n", + " print(f\" {i}: {DevInfo.GetFriendlyName()} ({DevInfo.GetPortType()})\")\n", + " \n", + " except Exception as e:\n", + " print(f\"✗ Camera enumeration failed: {e}\")\n", + " return None, \"ENUM_FAILED\"\n", + " \n", + " # Step 3: Check all cameras\n", + " camera_results = []\n", + " \n", + " for i, DevInfo in enumerate(DevList):\n", + " print(f\"\\nTesting camera {i}: {DevInfo.GetFriendlyName()}\")\n", + " \n", + " # Check if camera is already opened\n", + " try:\n", + " is_opened = mvsdk.CameraIsOpened(DevInfo)\n", + " if is_opened:\n", + " 
print(\"✗ Camera is already opened by another process\")\n", + " camera_results.append((DevInfo, \"ALREADY_OPENED\"))\n", + " continue\n", + " else:\n", + " print(\"✓ Camera is available (not opened by another process)\")\n", + " except Exception as e:\n", + " print(f\"⚠ Could not check if camera is opened: {e}\")\n", + " \n", + " # Try to initialize camera\n", + " try:\n", + " hCamera = mvsdk.CameraInit(DevInfo, -1, -1)\n", + " print(\"✓ Camera initialized successfully\")\n", + " camera_results.append((hCamera, \"AVAILABLE\"))\n", + " \n", + " # Close the camera after testing\n", + " try:\n", + " mvsdk.CameraUnInit(hCamera)\n", + " print(\"✓ Camera closed after testing\")\n", + " except Exception as e:\n", + " print(f\"⚠ Warning: Could not close camera: {e}\")\n", + " \n", + " except mvsdk.CameraException as e:\n", + " print(f\"✗ Camera initialization failed: {e.error_code} - {e.message}\")\n", + " \n", + " # Analyze specific error codes\n", + " if e.error_code == mvsdk.CAMERA_STATUS_DEVICE_IS_OPENED:\n", + " camera_results.append((DevInfo, \"DEVICE_OPENED\"))\n", + " elif e.error_code == mvsdk.CAMERA_STATUS_ACCESS_DENY:\n", + " camera_results.append((DevInfo, \"ACCESS_DENIED\"))\n", + " elif e.error_code == mvsdk.CAMERA_STATUS_DEVICE_LOST:\n", + " camera_results.append((DevInfo, \"DEVICE_LOST\"))\n", + " else:\n", + " camera_results.append((DevInfo, f\"INIT_ERROR_{e.error_code}\"))\n", + " \n", + " except Exception as e:\n", + " print(f\"✗ Unexpected error during initialization: {e}\")\n", + " camera_results.append((DevInfo, \"UNEXPECTED_ERROR\"))\n", + " \n", + " return camera_results\n", + "\n", + "# Test the function\n", + "camera_results = check_camera_availability()\n", + "print(f\"\\nResults for {len(camera_results)} cameras:\")\n", + "for i, (camera_info, status) in enumerate(camera_results):\n", + " if hasattr(camera_info, 'GetFriendlyName'):\n", + " name = camera_info.GetFriendlyName()\n", + " else:\n", + " name = f\"Camera {i}\"\n", + " print(f\" 
{name}: {status}\")" + ] + }, + { + "cell_type": "code", + "execution_count": 29, + "id": "test-capture-availability", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "\n", + "Testing capture readiness for 2 available camera(s):\n", + "\n", + "Testing camera 0 capture readiness...\n", + "\n", + "Testing Camera Capture Readiness\n", + "===================================\n", + "✓ Camera capabilities retrieved\n", + "✓ Camera type: Color\n", + "✓ Basic camera configuration set\n", + "✓ Camera started\n", + "✓ Frame buffer allocated\n", + "\n", + "Testing image capture...\n", + "✓ Image captured successfully: 1280x1024\n", + "✓ Image processed and buffer released\n", + "✓ Cleanup completed\n", + "Capture Ready for Blower-Yield-Cam: True\n", + "\n", + "Testing camera 1 capture readiness...\n", + "\n", + "Testing Camera Capture Readiness\n", + "===================================\n", + "✓ Camera capabilities retrieved\n", + "✓ Camera type: Color\n", + "✓ Basic camera configuration set\n", + "✓ Camera started\n", + "✓ Frame buffer allocated\n", + "\n", + "Testing image capture...\n", + "✓ Image captured successfully: 1280x1024\n", + "✓ Image processed and buffer released\n", + "✓ Cleanup completed\n", + "Capture Ready for Cracker-Cam: True\n" + ] + } + ], + "source": [ + "def test_camera_capture_readiness(hCamera):\n", + " \"\"\"\n", + " Test if camera is ready for image capture\n", + " \"\"\"\n", + " if not isinstance(hCamera, int):\n", + " print(\"Camera not properly initialized, skipping capture test\")\n", + " return False\n", + " \n", + " print(\"\\nTesting Camera Capture Readiness\")\n", + " print(\"=\" * 35)\n", + " \n", + " try:\n", + " # Get camera capabilities\n", + " cap = mvsdk.CameraGetCapability(hCamera)\n", + " print(\"✓ Camera capabilities retrieved\")\n", + " \n", + " # Check camera type\n", + " monoCamera = (cap.sIspCapacity.bMonoSensor != 0)\n", + " print(f\"✓ Camera type: {'Monochrome' if monoCamera 
else 'Color'}\")\n", + " \n", + " # Set basic configuration\n", + " if monoCamera:\n", + " mvsdk.CameraSetIspOutFormat(hCamera, mvsdk.CAMERA_MEDIA_TYPE_MONO8)\n", + " else:\n", + " mvsdk.CameraSetIspOutFormat(hCamera, mvsdk.CAMERA_MEDIA_TYPE_BGR8)\n", + " \n", + " mvsdk.CameraSetTriggerMode(hCamera, 0) # Continuous mode\n", + " mvsdk.CameraSetAeState(hCamera, 0) # Manual exposure\n", + " mvsdk.CameraSetExposureTime(hCamera, 5000) # 5ms exposure\n", + " print(\"✓ Basic camera configuration set\")\n", + " \n", + " # Start camera\n", + " mvsdk.CameraPlay(hCamera)\n", + " print(\"✓ Camera started\")\n", + " \n", + " # Allocate buffer\n", + " FrameBufferSize = cap.sResolutionRange.iWidthMax * cap.sResolutionRange.iHeightMax * (1 if monoCamera else 3)\n", + " pFrameBuffer = mvsdk.CameraAlignMalloc(FrameBufferSize, 16)\n", + " print(\"✓ Frame buffer allocated\")\n", + " \n", + " # Test image capture with short timeout\n", + " print(\"\\nTesting image capture...\")\n", + " try:\n", + " pRawData, FrameHead = mvsdk.CameraGetImageBuffer(hCamera, 1000) # 1 second timeout\n", + " print(f\"✓ Image captured successfully: {FrameHead.iWidth}x{FrameHead.iHeight}\")\n", + " \n", + " # Process and release\n", + " mvsdk.CameraImageProcess(hCamera, pRawData, pFrameBuffer, FrameHead)\n", + " mvsdk.CameraReleaseImageBuffer(hCamera, pRawData)\n", + " print(\"✓ Image processed and buffer released\")\n", + " \n", + " capture_success = True\n", + " \n", + " except mvsdk.CameraException as e:\n", + " print(f\"✗ Image capture failed: {e.error_code} - {e.message}\")\n", + " \n", + " if e.error_code == mvsdk.CAMERA_STATUS_TIME_OUT:\n", + " print(\" → Camera timeout - may be busy or not streaming\")\n", + " elif e.error_code == mvsdk.CAMERA_STATUS_DEVICE_LOST:\n", + " print(\" → Device lost - camera disconnected\")\n", + " elif e.error_code == mvsdk.CAMERA_STATUS_BUSY:\n", + " print(\" → Camera busy - may be used by another process\")\n", + " \n", + " capture_success = False\n", + " \n", + " # 
Cleanup\n", + " mvsdk.CameraAlignFree(pFrameBuffer)\n", + " print(\"✓ Cleanup completed\")\n", + " \n", + " return capture_success\n", + " \n", + " except Exception as e:\n", + " print(f\"✗ Capture readiness test failed: {e}\")\n", + " return False\n", + "\n", + "# Test capture readiness for available cameras\n", + "available_cameras = [(cam, stat) for cam, stat in camera_results if stat == \"AVAILABLE\"]\n", + "\n", + "if available_cameras:\n", + " print(f\"\\nTesting capture readiness for {len(available_cameras)} available camera(s):\")\n", + " for i, (camera_handle, status) in enumerate(available_cameras):\n", + " if hasattr(camera_handle, 'GetFriendlyName'):\n", + " # This shouldn't happen for AVAILABLE cameras, but just in case\n", + " print(f\"\\nCamera {i}: Invalid handle\")\n", + " continue\n", + " \n", + " print(f\"\\nTesting camera {i} capture readiness...\")\n", + " # Re-initialize the camera for testing since we closed it earlier\n", + " try:\n", + " # Find the camera info from the original results\n", + " DevList = mvsdk.CameraEnumerateDevice()\n", + " if i < len(DevList):\n", + " DevInfo = DevList[i]\n", + " hCamera = mvsdk.CameraInit(DevInfo, -1, -1)\n", + " capture_ready = test_camera_capture_readiness(hCamera)\n", + " print(f\"Capture Ready for {DevInfo.GetFriendlyName()}: {capture_ready}\")\n", + " mvsdk.CameraUnInit(hCamera)\n", + " else:\n", + " print(f\"Could not re-initialize camera {i}\")\n", + " except Exception as e:\n", + " print(f\"Error testing camera {i}: {e}\")\n", + "else:\n", + " print(\"\\nNo cameras are available for capture testing\")\n", + " print(\"Camera statuses:\")\n", + " for i, (cam_info, status) in enumerate(camera_results):\n", + " if hasattr(cam_info, 'GetFriendlyName'):\n", + " name = cam_info.GetFriendlyName()\n", + " else:\n", + " name = f\"Camera {i}\"\n", + " print(f\" {name}: {status}\")" + ] + }, + { + "cell_type": "code", + "execution_count": 30, + "id": "comprehensive-check", + "metadata": {}, + "outputs": [ + { 
+ "name": "stdout", + "output_type": "stream", + "text": [ + "\n", + "==================================================\n", + "COMPREHENSIVE CAMERA CHECK\n", + "==================================================\n", + "Camera Availability Check\n", + "==============================\n", + "✓ SDK initialized successfully\n", + "✓ Found 2 camera(s)\n", + " 0: Blower-Yield-Cam (192.168.1.165-192.168.1.54)\n", + " 1: Cracker-Cam (192.168.1.167-192.168.1.54)\n", + "\n", + "Testing camera 0: Blower-Yield-Cam\n", + "✓ Camera is available (not opened by another process)\n", + "✓ Camera initialized successfully\n", + "✓ Camera closed after testing\n", + "\n", + "Testing camera 1: Cracker-Cam\n", + "✓ Camera is available (not opened by another process)\n", + "✓ Camera initialized successfully\n", + "✓ Camera closed after testing\n", + "\n", + "==================================================\n", + "FINAL RESULTS:\n", + "Camera Available: False\n", + "Capture Ready: False\n", + "Status: (42, 'AVAILABLE')\n", + "==================================================\n" + ] + } + ], + "source": [ + "def comprehensive_camera_check():\n", + " \"\"\"\n", + " Complete camera availability and readiness check\n", + " Returns: (available, ready, handle_or_info, status_message)\n", + " \"\"\"\n", + " # check_camera_availability() returns (None, error_status) on failure\n", + " # and a per-camera list of (info, status) tuples on success, so it\n", + " # cannot be unpacked directly into a single (info, status) pair\n", + " result = check_camera_availability()\n", + " if isinstance(result, tuple):\n", + " return False, False, result[0], result[1]\n", + " \n", + " handle_or_info, status = None, \"NO_CAMERAS\"\n", + " available_index = None\n", + " for idx, (cam_info, cam_status) in enumerate(result):\n", + " handle_or_info, status = cam_info, cam_status\n", + " if cam_status == \"AVAILABLE\":\n", + " available_index = idx\n", + " break\n", + " \n", + " available = status == \"AVAILABLE\"\n", + " ready = False\n", + " \n", + " if available:\n", + " # The availability check closed the camera, so re-initialize it\n", + " # before testing capture readiness\n", + " try:\n", + " DevInfo = mvsdk.CameraEnumerateDevice()[available_index]\n", + " hCamera = mvsdk.CameraInit(DevInfo, -1, -1)\n", + " ready = test_camera_capture_readiness(hCamera)\n", + " mvsdk.CameraUnInit(hCamera)\n", + " print(\"✓ Camera closed after testing\")\n", + " except Exception as e:\n", + " print(f\"⚠ Readiness test failed: {e}\")\n", + " \n", + " return available, ready, handle_or_info, status\n", + "\n", + "# Run comprehensive check\n", + "print(\"\\n\" + \"=\" * 50)\n", + "print(\"COMPREHENSIVE CAMERA CHECK\")\n", + "print(\"=\" * 50)\n", + "\n", + "available, 
ready, info, status_msg = comprehensive_camera_check()\n", + "\n", + "print(\"\\n\" + \"=\" * 50)\n", + "print(\"FINAL RESULTS:\")\n", + "print(f\"Camera Available: {available}\")\n", + "print(f\"Capture Ready: {ready}\")\n", + "print(f\"Status: {status_msg}\")\n", + "print(\"=\" * 50)" + ] + }, + { + "cell_type": "code", + "execution_count": 31, + "id": "status-check-function", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "\n", + "Testing Simple Camera Ready Check:\n", + "========================================\n", + "Ready: True\n", + "Message: Camera 'Blower-Yield-Cam' is ready for capture\n", + "Camera: Blower-Yield-Cam\n" + ] + } + ], + "source": [ + "def is_camera_ready_for_capture():\n", + " \"\"\"\n", + " Simple function to check if camera is ready for capture.\n", + " Returns: (ready: bool, message: str, camera_info: object or None)\n", + " \n", + " This is the function you can use in your main capture script.\n", + " \"\"\"\n", + " try:\n", + " # Initialize SDK\n", + " mvsdk.CameraSdkInit(1)\n", + " \n", + " # Enumerate cameras\n", + " DevList = mvsdk.CameraEnumerateDevice()\n", + " if len(DevList) < 1:\n", + " return False, \"No cameras found\", None\n", + " \n", + " DevInfo = DevList[0]\n", + " \n", + " # Check if already opened\n", + " try:\n", + " if mvsdk.CameraIsOpened(DevInfo):\n", + " return False, f\"Camera '{DevInfo.GetFriendlyName()}' is already opened by another process\", DevInfo\n", + " except:\n", + " pass # Some cameras might not support this check\n", + " \n", + " # Try to initialize\n", + " try:\n", + " hCamera = mvsdk.CameraInit(DevInfo, -1, -1)\n", + " \n", + " # Quick capture test\n", + " try:\n", + " # Basic setup\n", + " mvsdk.CameraSetTriggerMode(hCamera, 0)\n", + " mvsdk.CameraPlay(hCamera)\n", + " \n", + " # Try to get one frame with short timeout\n", + " pRawData, FrameHead = mvsdk.CameraGetImageBuffer(hCamera, 500) # 0.5 second timeout\n", + " 
mvsdk.CameraReleaseImageBuffer(hCamera, pRawData)\n", + " \n", + " # Success - close and return\n", + " mvsdk.CameraUnInit(hCamera)\n", + " return True, f\"Camera '{DevInfo.GetFriendlyName()}' is ready for capture\", DevInfo\n", + " \n", + " except mvsdk.CameraException as e:\n", + " mvsdk.CameraUnInit(hCamera)\n", + " if e.error_code == mvsdk.CAMERA_STATUS_TIME_OUT:\n", + " return False, \"Camera timeout - may be busy or not streaming properly\", DevInfo\n", + " else:\n", + " return False, f\"Camera capture test failed: {e.message}\", DevInfo\n", + " \n", + " except mvsdk.CameraException as e:\n", + " if e.error_code == mvsdk.CAMERA_STATUS_DEVICE_IS_OPENED:\n", + " return False, f\"Camera '{DevInfo.GetFriendlyName()}' is already in use\", DevInfo\n", + " elif e.error_code == mvsdk.CAMERA_STATUS_ACCESS_DENY:\n", + " return False, f\"Access denied to camera '{DevInfo.GetFriendlyName()}'\", DevInfo\n", + " else:\n", + " return False, f\"Camera initialization failed: {e.message}\", DevInfo\n", + " \n", + " except Exception as e:\n", + " return False, f\"Camera check failed: {str(e)}\", None\n", + "\n", + "# Test the simple function\n", + "print(\"\\nTesting Simple Camera Ready Check:\")\n", + "print(\"=\" * 40)\n", + "\n", + "ready, message, camera_info = is_camera_ready_for_capture()\n", + "print(f\"Ready: {ready}\")\n", + "print(f\"Message: {message}\")\n", + "if camera_info:\n", + " print(f\"Camera: {camera_info.GetFriendlyName()}\")" + ] + }, + { + "cell_type": "markdown", + "id": "usage-example", + "metadata": {}, + "source": [ + "## Usage Example\n", + "\n", + "Here's how you can integrate the camera status check into your capture script:\n", + "\n", + "```python\n", + "# Before attempting to capture images\n", + "ready, message, camera_info = is_camera_ready_for_capture()\n", + "\n", + "if not ready:\n", + " print(f\"Camera not ready: {message}\")\n", + " # Handle the error appropriately\n", + " return False\n", + "\n", + "print(f\"Camera ready: 
{message}\")\n", + "# Proceed with normal capture logic\n", + "```\n", + "\n", + "## Key Findings\n", + "\n", + "1. **`CameraIsOpened()`** - Checks if camera is opened by another process\n", + "2. **`CameraInit()` error codes** - Provide specific failure reasons\n", + "3. **Quick capture test** - Verifies camera is actually streaming\n", + "4. **Timeout handling** - Detects if camera is busy/unresponsive\n", + "\n", + "The most reliable approach is to:\n", + "1. Check if camera exists\n", + "2. Check if it's already opened\n", + "3. Try to initialize it\n", + "4. Test actual image capture with short timeout\n", + "5. Clean up properly" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "USDA-vision-cameras", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.2" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} diff --git a/api/ai_agent/examples/notebooks/camera_test_setup.ipynb b/api/ai_agent/examples/notebooks/camera_test_setup.ipynb new file mode 100644 index 0000000..8c91de7 --- /dev/null +++ b/api/ai_agent/examples/notebooks/camera_test_setup.ipynb @@ -0,0 +1,495 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# GigE Camera Test Setup\n", + "\n", + "This notebook helps you test and configure your GigE cameras for the USDA vision project.\n", + "\n", + "## Key Features:\n", + "- Test camera connectivity\n", + "- Display images inline (no GUI needed)\n", + "- Save test images/videos to `/storage`\n", + "- Configure camera parameters\n", + "- Test recording functionality" + ] + }, + { + "cell_type": "code", + "execution_count": 1, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "✅ All imports successful!\n", + "OpenCV 
version: 4.11.0\n", + "NumPy version: 2.3.2\n" + ] + } + ], + "source": [ + "import cv2\n", + "import numpy as np\n", + "import matplotlib.pyplot as plt\n", + "import os\n", + "from datetime import datetime\n", + "import time\n", + "from pathlib import Path\n", + "import imageio\n", + "from tqdm import tqdm\n", + "\n", + "# Configure matplotlib for inline display\n", + "plt.rcParams['figure.figsize'] = (12, 8)\n", + "plt.rcParams['image.cmap'] = 'gray'\n", + "\n", + "print(\"✅ All imports successful!\")\n", + "print(f\"OpenCV version: {cv2.__version__}\")\n", + "print(f\"NumPy version: {np.__version__}\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Utility Functions" + ] + }, + { + "cell_type": "code", + "execution_count": 2, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "✅ Utility functions loaded!\n" + ] + } + ], + "source": [ + "def display_image(image, title=\"Image\", figsize=(10, 8)):\n", + " \"\"\"Display image inline in Jupyter notebook\"\"\"\n", + " plt.figure(figsize=figsize)\n", + " if len(image.shape) == 3:\n", + " # Convert BGR to RGB for matplotlib\n", + " image_rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)\n", + " plt.imshow(image_rgb)\n", + " else:\n", + " plt.imshow(image, cmap='gray')\n", + " plt.title(title)\n", + " plt.axis('off')\n", + " plt.tight_layout()\n", + " plt.show()\n", + "\n", + "def save_image_to_storage(image, filename_prefix=\"test_image\"):\n", + " \"\"\"Save image to /storage with timestamp\"\"\"\n", + " timestamp = datetime.now().strftime(\"%Y%m%d_%H%M%S\")\n", + " filename = f\"{filename_prefix}_{timestamp}.jpg\"\n", + " filepath = f\"/storage/{filename}\"\n", + " \n", + " success = cv2.imwrite(filepath, image)\n", + " if success:\n", + " print(f\"✅ Image saved: {filepath}\")\n", + " return filepath\n", + " else:\n", + " print(f\"❌ Failed to save image: {filepath}\")\n", + " return None\n", + "\n", + "def 
create_storage_subdir(subdir_name):\n", + " \"\"\"Create subdirectory in /storage\"\"\"\n", + " path = Path(f\"/storage/{subdir_name}\")\n", + " path.mkdir(exist_ok=True)\n", + " print(f\"📁 Directory ready: {path}\")\n", + " return str(path)\n", + "\n", + "def list_available_cameras():\n", + " \"\"\"List all available camera devices\"\"\"\n", + " print(\"🔍 Scanning for available cameras...\")\n", + " available_cameras = []\n", + " \n", + " # Test camera indices 0-10\n", + " for i in range(11):\n", + " cap = cv2.VideoCapture(i)\n", + " if cap.isOpened():\n", + " ret, frame = cap.read()\n", + " if ret:\n", + " available_cameras.append(i)\n", + " print(f\"📷 Camera {i}: Available (Resolution: {frame.shape[1]}x{frame.shape[0]})\")\n", + " cap.release()\n", + " else:\n", + " # Try with different backends for GigE cameras\n", + " cap = cv2.VideoCapture(i, cv2.CAP_GSTREAMER)\n", + " if cap.isOpened():\n", + " ret, frame = cap.read()\n", + " if ret:\n", + " available_cameras.append(i)\n", + " print(f\"📷 Camera {i}: Available via GStreamer (Resolution: {frame.shape[1]}x{frame.shape[0]})\")\n", + " cap.release()\n", + " \n", + " if not available_cameras:\n", + " print(\"❌ No cameras found\")\n", + " \n", + " return available_cameras\n", + "\n", + "print(\"✅ Utility functions loaded!\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Step 1: Check Storage Directory" + ] + }, + { + "cell_type": "code", + "execution_count": 3, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Storage directory exists: True\n", + "Storage directory writable: True\n", + "📁 Directory ready: /storage/test_images\n", + "📁 Directory ready: /storage/test_videos\n", + "📁 Directory ready: /storage/camera1\n", + "📁 Directory ready: /storage/camera2\n" + ] + } + ], + "source": [ + "# Check storage directory\n", + "storage_path = Path(\"/storage\")\n", + "print(f\"Storage directory exists: {storage_path.exists()}\")\n", + 
"print(f\"Storage directory writable: {os.access('/storage', os.W_OK)}\")\n", + "\n", + "# Create test subdirectories\n", + "test_images_dir = create_storage_subdir(\"test_images\")\n", + "test_videos_dir = create_storage_subdir(\"test_videos\")\n", + "camera1_dir = create_storage_subdir(\"camera1\")\n", + "camera2_dir = create_storage_subdir(\"camera2\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Step 2: Scan for Available Cameras" + ] + }, + { + "cell_type": "code", + "execution_count": 4, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "🔍 Scanning for available cameras...\n", + "❌ No cameras found\n", + "\n", + "📊 Summary: Found 0 camera(s): []\n" + ] + }, + { + "name": "stderr", + "output_type": "stream", + "text": [ + "[ WARN:0@9.977] global cap_v4l.cpp:913 open VIDEOIO(V4L2:/dev/video0): can't open camera by index\n", + "[ WARN:0@9.977] global obsensor_stream_channel_v4l2.cpp:82 xioctl ioctl: fd=-1, req=-2140645888\n", + "[ WARN:0@9.977] global obsensor_stream_channel_v4l2.cpp:138 queryUvcDeviceInfoList ioctl error return: 9\n", + "[ WARN:0@9.977] global obsensor_stream_channel_v4l2.cpp:82 xioctl ioctl: fd=-1, req=-2140645888\n", + "[ WARN:0@9.977] global obsensor_stream_channel_v4l2.cpp:138 queryUvcDeviceInfoList ioctl error return: 9\n", + "[ERROR:0@9.977] global obsensor_uvc_stream_channel.cpp:158 getStreamChannelGroup Camera index out of range\n", + "[ WARN:0@9.977] global cap_v4l.cpp:913 open VIDEOIO(V4L2:/dev/video1): can't open camera by index\n", + "[ WARN:0@9.977] global obsensor_stream_channel_v4l2.cpp:82 xioctl ioctl: fd=-1, req=-2140645888\n", + "[ WARN:0@9.977] global obsensor_stream_channel_v4l2.cpp:138 queryUvcDeviceInfoList ioctl error return: 9\n", + "[ WARN:0@9.977] global obsensor_stream_channel_v4l2.cpp:82 xioctl ioctl: fd=-1, req=-2140645888\n", + "[ WARN:0@9.977] global obsensor_stream_channel_v4l2.cpp:138 queryUvcDeviceInfoList ioctl error return: 
9\n", + "[ERROR:0@9.977] global obsensor_uvc_stream_channel.cpp:158 getStreamChannelGroup Camera index out of range\n", + "[ WARN:0@9.977] global cap_v4l.cpp:913 open VIDEOIO(V4L2:/dev/video2): can't open camera by index\n", + "[ WARN:0@9.977] global obsensor_stream_channel_v4l2.cpp:82 xioctl ioctl: fd=-1, req=-2140645888\n", + "[ WARN:0@9.977] global obsensor_stream_channel_v4l2.cpp:138 queryUvcDeviceInfoList ioctl error return: 9\n", + "[ WARN:0@9.977] global obsensor_stream_channel_v4l2.cpp:82 xioctl ioctl: fd=-1, req=-2140645888\n", + "[ WARN:0@9.977] global obsensor_stream_channel_v4l2.cpp:138 queryUvcDeviceInfoList ioctl error return: 9\n", + "[ERROR:0@9.977] global obsensor_uvc_stream_channel.cpp:158 getStreamChannelGroup Camera index out of range\n", + "[ WARN:0@9.977] global cap_v4l.cpp:913 open VIDEOIO(V4L2:/dev/video3): can't open camera by index\n", + "[ WARN:0@9.977] global obsensor_stream_channel_v4l2.cpp:82 xioctl ioctl: fd=-1, req=-2140645888\n", + "[ WARN:0@9.977] global obsensor_stream_channel_v4l2.cpp:138 queryUvcDeviceInfoList ioctl error return: 9\n", + "[ WARN:0@9.977] global obsensor_stream_channel_v4l2.cpp:82 xioctl ioctl: fd=-1, req=-2140645888\n", + "[ WARN:0@9.977] global obsensor_stream_channel_v4l2.cpp:138 queryUvcDeviceInfoList ioctl error return: 9\n", + "[ERROR:0@9.977] global obsensor_uvc_stream_channel.cpp:158 getStreamChannelGroup Camera index out of range\n", + "[ WARN:0@9.977] global cap_v4l.cpp:913 open VIDEOIO(V4L2:/dev/video4): can't open camera by index\n", + "[ WARN:0@9.978] global obsensor_stream_channel_v4l2.cpp:82 xioctl ioctl: fd=-1, req=-2140645888\n", + "[ WARN:0@9.978] global obsensor_stream_channel_v4l2.cpp:138 queryUvcDeviceInfoList ioctl error return: 9\n", + "[ WARN:0@9.978] global obsensor_stream_channel_v4l2.cpp:82 xioctl ioctl: fd=-1, req=-2140645888\n", + "[ WARN:0@9.978] global obsensor_stream_channel_v4l2.cpp:138 queryUvcDeviceInfoList ioctl error return: 9\n", + "[ERROR:0@9.978] global 
obsensor_uvc_stream_channel.cpp:158 getStreamChannelGroup Camera index out of range\n", + "[ WARN:0@9.978] global cap_v4l.cpp:913 open VIDEOIO(V4L2:/dev/video5): can't open camera by index\n", + "[ WARN:0@9.978] global obsensor_stream_channel_v4l2.cpp:82 xioctl ioctl: fd=-1, req=-2140645888\n", + "[ WARN:0@9.978] global obsensor_stream_channel_v4l2.cpp:138 queryUvcDeviceInfoList ioctl error return: 9\n", + "[ WARN:0@9.978] global obsensor_stream_channel_v4l2.cpp:82 xioctl ioctl: fd=-1, req=-2140645888\n", + "[ WARN:0@9.978] global obsensor_stream_channel_v4l2.cpp:138 queryUvcDeviceInfoList ioctl error return: 9\n", + "[ERROR:0@9.978] global obsensor_uvc_stream_channel.cpp:158 getStreamChannelGroup Camera index out of range\n", + "[ WARN:0@9.978] global cap_v4l.cpp:913 open VIDEOIO(V4L2:/dev/video6): can't open camera by index\n", + "[ WARN:0@9.978] global obsensor_stream_channel_v4l2.cpp:82 xioctl ioctl: fd=-1, req=-2140645888\n", + "[ WARN:0@9.978] global obsensor_stream_channel_v4l2.cpp:138 queryUvcDeviceInfoList ioctl error return: 9\n", + "[ WARN:0@9.978] global obsensor_stream_channel_v4l2.cpp:82 xioctl ioctl: fd=-1, req=-2140645888\n", + "[ WARN:0@9.978] global obsensor_stream_channel_v4l2.cpp:138 queryUvcDeviceInfoList ioctl error return: 9\n", + "[ERROR:0@9.978] global obsensor_uvc_stream_channel.cpp:158 getStreamChannelGroup Camera index out of range\n", + "[ WARN:0@9.978] global cap_v4l.cpp:913 open VIDEOIO(V4L2:/dev/video7): can't open camera by index\n", + "[ WARN:0@9.978] global obsensor_stream_channel_v4l2.cpp:82 xioctl ioctl: fd=-1, req=-2140645888\n", + "[ WARN:0@9.978] global obsensor_stream_channel_v4l2.cpp:138 queryUvcDeviceInfoList ioctl error return: 9\n", + "[ WARN:0@9.978] global obsensor_stream_channel_v4l2.cpp:82 xioctl ioctl: fd=-1, req=-2140645888\n", + "[ WARN:0@9.978] global obsensor_stream_channel_v4l2.cpp:138 queryUvcDeviceInfoList ioctl error return: 9\n", + "[ERROR:0@9.978] global obsensor_uvc_stream_channel.cpp:158 
getStreamChannelGroup Camera index out of range\n", + "[ WARN:0@9.978] global cap_v4l.cpp:913 open VIDEOIO(V4L2:/dev/video8): can't open camera by index\n", + "[ WARN:0@9.978] global obsensor_stream_channel_v4l2.cpp:82 xioctl ioctl: fd=-1, req=-2140645888\n", + "[ WARN:0@9.978] global obsensor_stream_channel_v4l2.cpp:138 queryUvcDeviceInfoList ioctl error return: 9\n", + "[ WARN:0@9.978] global obsensor_stream_channel_v4l2.cpp:82 xioctl ioctl: fd=-1, req=-2140645888\n", + "[ WARN:0@9.978] global obsensor_stream_channel_v4l2.cpp:138 queryUvcDeviceInfoList ioctl error return: 9\n", + "[ERROR:0@9.978] global obsensor_uvc_stream_channel.cpp:158 getStreamChannelGroup Camera index out of range\n", + "[ WARN:0@9.978] global cap_v4l.cpp:913 open VIDEOIO(V4L2:/dev/video9): can't open camera by index\n", + "[ WARN:0@9.978] global obsensor_stream_channel_v4l2.cpp:82 xioctl ioctl: fd=-1, req=-2140645888\n", + "[ WARN:0@9.978] global obsensor_stream_channel_v4l2.cpp:138 queryUvcDeviceInfoList ioctl error return: 9\n", + "[ WARN:0@9.978] global obsensor_stream_channel_v4l2.cpp:82 xioctl ioctl: fd=-1, req=-2140645888\n", + "[ WARN:0@9.978] global obsensor_stream_channel_v4l2.cpp:138 queryUvcDeviceInfoList ioctl error return: 9\n", + "[ERROR:0@9.978] global obsensor_uvc_stream_channel.cpp:158 getStreamChannelGroup Camera index out of range\n", + "[ WARN:0@9.978] global cap_v4l.cpp:913 open VIDEOIO(V4L2:/dev/video10): can't open camera by index\n", + "[ WARN:0@9.978] global obsensor_stream_channel_v4l2.cpp:82 xioctl ioctl: fd=-1, req=-2140645888\n", + "[ WARN:0@9.978] global obsensor_stream_channel_v4l2.cpp:138 queryUvcDeviceInfoList ioctl error return: 9\n", + "[ WARN:0@9.979] global obsensor_stream_channel_v4l2.cpp:82 xioctl ioctl: fd=-1, req=-2140645888\n", + "[ WARN:0@9.979] global obsensor_stream_channel_v4l2.cpp:138 queryUvcDeviceInfoList ioctl error return: 9\n", + "[ERROR:0@9.979] global obsensor_uvc_stream_channel.cpp:158 getStreamChannelGroup Camera index out of range\n" 
+ ] + } + ], + "source": [ + "# Scan for cameras\n", + "cameras = list_available_cameras()\n", + "print(f\"\\n📊 Summary: Found {len(cameras)} camera(s): {cameras}\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Step 3: Test Individual Camera" + ] + }, + { + "cell_type": "code", + "execution_count": 5, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "🔧 Testing camera 0...\n", + " Trying Default backend...\n", + " ❌ Default backend failed to open\n", + " Trying GStreamer backend...\n", + " ❌ GStreamer backend failed to open\n", + " Trying V4L2 backend...\n", + " ❌ V4L2 backend failed to open\n", + " Trying FFmpeg backend...\n", + " ❌ FFmpeg backend failed to open\n", + "❌ Camera 0 not accessible with any backend\n" + ] + }, + { + "name": "stderr", + "output_type": "stream", + "text": [ + "[ WARN:0@27.995] global cap_v4l.cpp:913 open VIDEOIO(V4L2:/dev/video0): can't open camera by index\n", + "[ WARN:0@27.995] global obsensor_stream_channel_v4l2.cpp:82 xioctl ioctl: fd=-1, req=-2140645888\n", + "[ WARN:0@27.995] global obsensor_stream_channel_v4l2.cpp:138 queryUvcDeviceInfoList ioctl error return: 9\n", + "[ WARN:0@27.995] global obsensor_stream_channel_v4l2.cpp:82 xioctl ioctl: fd=-1, req=-2140645888\n", + "[ WARN:0@27.995] global obsensor_stream_channel_v4l2.cpp:138 queryUvcDeviceInfoList ioctl error return: 9\n", + "[ERROR:0@27.995] global obsensor_uvc_stream_channel.cpp:158 getStreamChannelGroup Camera index out of range\n", + "[ WARN:0@27.996] global cap_v4l.cpp:913 open VIDEOIO(V4L2:/dev/video0): can't open camera by index\n", + "[ WARN:0@27.996] global cap.cpp:478 open VIDEOIO(V4L2): backend is generally available but can't be used to capture by index\n", + "[ WARN:0@27.996] global cap.cpp:478 open VIDEOIO(FFMPEG): backend is generally available but can't be used to capture by index\n" + ] + } + ], + "source": [ + "# Test a specific camera (change camera_id as needed)\n", + 
"camera_id = 0 # Change this to test different cameras\n", + "\n", + "print(f\"🔧 Testing camera {camera_id}...\")\n", + "\n", + "# Try different backends for GigE cameras\n", + "backends_to_try = [\n", + " (cv2.CAP_ANY, \"Default\"),\n", + " (cv2.CAP_GSTREAMER, \"GStreamer\"),\n", + " (cv2.CAP_V4L2, \"V4L2\"),\n", + " (cv2.CAP_FFMPEG, \"FFmpeg\")\n", + "]\n", + "\n", + "successful_backend = None\n", + "cap = None\n", + "\n", + "for backend, name in backends_to_try:\n", + " print(f\" Trying {name} backend...\")\n", + " cap = cv2.VideoCapture(camera_id, backend)\n", + " if cap.isOpened():\n", + " ret, frame = cap.read()\n", + " if ret:\n", + " print(f\" ✅ {name} backend works!\")\n", + " successful_backend = (backend, name)\n", + " break\n", + " else:\n", + " print(f\" ❌ {name} backend opened but can't read frames\")\n", + " else:\n", + " print(f\" ❌ {name} backend failed to open\")\n", + " cap.release()\n", + "\n", + "if successful_backend:\n", + " backend, backend_name = successful_backend\n", + " cap = cv2.VideoCapture(camera_id, backend)\n", + " \n", + " # Get camera properties\n", + " width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))\n", + " height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))\n", + " fps = cap.get(cv2.CAP_PROP_FPS)\n", + " \n", + " print(f\"\\n📷 Camera {camera_id} Properties ({backend_name}):\")\n", + " print(f\" Resolution: {width}x{height}\")\n", + " print(f\" FPS: {fps}\")\n", + " \n", + " # Capture a test frame\n", + " ret, frame = cap.read()\n", + " if ret:\n", + " print(f\" Frame shape: {frame.shape}\")\n", + " print(f\" Frame dtype: {frame.dtype}\")\n", + " \n", + " # Display the frame\n", + " display_image(frame, f\"Camera {camera_id} Test Frame\")\n", + " \n", + " # Save test image\n", + " save_image_to_storage(frame, f\"camera_{camera_id}_test\")\n", + " else:\n", + " print(\" ❌ Failed to capture frame\")\n", + " \n", + " cap.release()\n", + "else:\n", + " print(f\"❌ Camera {camera_id} not accessible with any backend\")" + ] + }, + { + 
"cell_type": "markdown", + "metadata": {}, + "source": [ + "## Step 4: Test Video Recording" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Test video recording\n", + "def test_video_recording(camera_id, duration_seconds=5, fps=30):\n", + " \"\"\"Test video recording from camera\"\"\"\n", + " print(f\"🎥 Testing video recording from camera {camera_id} for {duration_seconds} seconds...\")\n", + " \n", + " # Open camera\n", + " cap = cv2.VideoCapture(camera_id)\n", + " if not cap.isOpened():\n", + " print(f\"❌ Cannot open camera {camera_id}\")\n", + " return None\n", + " \n", + " # Get camera properties\n", + " width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))\n", + " height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))\n", + " \n", + " # Create video writer\n", + " timestamp = datetime.now().strftime(\"%Y%m%d_%H%M%S\")\n", + " video_filename = f\"/storage/test_videos/camera_{camera_id}_test_{timestamp}.mp4\"\n", + " \n", + " fourcc = cv2.VideoWriter_fourcc(*'mp4v')\n", + " out = cv2.VideoWriter(video_filename, fourcc, fps, (width, height))\n", + " \n", + " if not out.isOpened():\n", + " print(\"❌ Cannot create video writer\")\n", + " cap.release()\n", + " return None\n", + " \n", + " # Record video\n", + " frames_to_capture = duration_seconds * fps\n", + " frames_captured = 0\n", + " \n", + " print(f\"Recording {frames_to_capture} frames...\")\n", + " \n", + " with tqdm(total=frames_to_capture, desc=\"Recording\") as pbar:\n", + " start_time = time.time()\n", + " \n", + " while frames_captured < frames_to_capture:\n", + " ret, frame = cap.read()\n", + " if ret:\n", + " out.write(frame)\n", + " frames_captured += 1\n", + " pbar.update(1)\n", + " \n", + " # Display first frame\n", + " if frames_captured == 1:\n", + " display_image(frame, f\"First frame from camera {camera_id}\")\n", + " else:\n", + " print(f\"❌ Failed to read frame {frames_captured}\")\n", + " break\n", + " \n", + " # Cleanup\n", + " 
cap.release()\n", + " out.release()\n", + " \n", + " elapsed_time = time.time() - start_time\n", + " actual_fps = frames_captured / elapsed_time\n", + " \n", + " print(f\"✅ Video saved: {video_filename}\")\n", + " print(f\"📊 Captured {frames_captured} frames in {elapsed_time:.2f}s\")\n", + " print(f\"📊 Actual FPS: {actual_fps:.2f}\")\n", + " \n", + " return video_filename\n", + "\n", + "# Test recording (change camera_id as needed)\n", + "if cameras: # Only test if cameras were found\n", + " test_camera = cameras[0] # Use first available camera\n", + " video_file = test_video_recording(test_camera, duration_seconds=3)\n", + "else:\n", + " print(\"⚠️ No cameras available for video test\")" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "USDA-vision-cameras", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.2" + } + }, + "nbformat": 4, + "nbformat_minor": 4 +} diff --git a/api/ai_agent/examples/notebooks/exposure test.ipynb b/api/ai_agent/examples/notebooks/exposure test.ipynb new file mode 100644 index 0000000..467802d --- /dev/null +++ b/api/ai_agent/examples/notebooks/exposure test.ipynb @@ -0,0 +1,426 @@ +{ + "cells": [ + { + "cell_type": "code", + "execution_count": 25, + "id": "ba958c88", + "metadata": {}, + "outputs": [], + "source": [ + "# coding=utf-8\n", + "\"\"\"\n", + "Test script to help find optimal exposure settings for your GigE camera.\n", + "This script captures a single test image with different exposure settings.\n", + "\"\"\"\n", + "import sys\n", + "\n", + "sys.path.append(\"./python demo\")\n", + "import os\n", + "import mvsdk\n", + "import numpy as np\n", + "import cv2\n", + "import platform\n", + "from datetime import datetime\n", + "\n", + "# Add the python demo directory to 
path\n" + ] + }, + { + "cell_type": "code", + "execution_count": 26, + "id": "23f1dc49", + "metadata": {}, + "outputs": [], + "source": [ + "def test_exposure_settings():\n", + " \"\"\"\n", + " Test different exposure settings to find optimal values\n", + " \"\"\"\n", + " # Initialize SDK\n", + " try:\n", + " mvsdk.CameraSdkInit(1)\n", + " print(\"SDK initialized successfully\")\n", + " except Exception as e:\n", + " print(f\"SDK initialization failed: {e}\")\n", + " return False\n", + "\n", + " # Enumerate cameras\n", + " DevList = mvsdk.CameraEnumerateDevice()\n", + " nDev = len(DevList)\n", + "\n", + " if nDev < 1:\n", + " print(\"No camera was found!\")\n", + " return False\n", + "\n", + " print(f\"Found {nDev} camera(s):\")\n", + " for i, DevInfo in enumerate(DevList):\n", + " print(f\" {i}: {DevInfo.GetFriendlyName()} ({DevInfo.GetPortType()})\")\n", + "\n", + " # Use first camera\n", + " DevInfo = DevList[0]\n", + " print(f\"\\nSelected camera: {DevInfo.GetFriendlyName()}\")\n", + "\n", + " # Initialize camera\n", + " try:\n", + " hCamera = mvsdk.CameraInit(DevInfo, -1, -1)\n", + " print(\"Camera initialized successfully\")\n", + " except mvsdk.CameraException as e:\n", + " print(f\"CameraInit Failed({e.error_code}): {e.message}\")\n", + " return False\n", + "\n", + " try:\n", + " # Get camera capabilities\n", + " cap = mvsdk.CameraGetCapability(hCamera)\n", + " monoCamera = cap.sIspCapacity.bMonoSensor != 0\n", + " print(f\"Camera type: {'Monochrome' if monoCamera else 'Color'}\")\n", + "\n", + " # Get camera ranges\n", + " try:\n", + " exp_min, exp_max, exp_step = mvsdk.CameraGetExposureTimeRange(hCamera)\n", + " print(f\"Exposure time range: {exp_min:.1f} - {exp_max:.1f} μs\")\n", + "\n", + " gain_min, gain_max, gain_step = mvsdk.CameraGetAnalogGainXRange(hCamera)\n", + " print(f\"Analog gain range: {gain_min:.2f} - {gain_max:.2f}x\")\n", + "\n", + " print(\"whatever this is: \", mvsdk.CameraGetAnalogGainXRange(hCamera))\n", + " except Exception as e:\n", 
+ " print(f\"Could not get camera ranges: {e}\")\n", + " exp_min, exp_max = 100, 100000\n", + " gain_min, gain_max = 1.0, 4.0\n", + "\n", + " # Set output format\n", + " if monoCamera:\n", + " mvsdk.CameraSetIspOutFormat(hCamera, mvsdk.CAMERA_MEDIA_TYPE_MONO8)\n", + " else:\n", + " mvsdk.CameraSetIspOutFormat(hCamera, mvsdk.CAMERA_MEDIA_TYPE_BGR8)\n", + "\n", + " # Set camera to continuous capture mode\n", + " mvsdk.CameraSetTriggerMode(hCamera, 0)\n", + " mvsdk.CameraSetAeState(hCamera, 0) # Disable auto exposure\n", + "\n", + " # Start camera\n", + " mvsdk.CameraPlay(hCamera)\n", + "\n", + " # Allocate frame buffer\n", + " FrameBufferSize = cap.sResolutionRange.iWidthMax * cap.sResolutionRange.iHeightMax * (1 if monoCamera else 3)\n", + " pFrameBuffer = mvsdk.CameraAlignMalloc(FrameBufferSize, 16)\n", + "\n", + " # Create test directory\n", + " if not os.path.exists(\"exposure_tests\"):\n", + " os.makedirs(\"exposure_tests\")\n", + "\n", + " print(\"\\nTesting different exposure settings...\")\n", + " print(\"=\" * 50)\n", + "\n", + " # Test different exposure times (in microseconds)\n", + " exposure_times = [100, 200, 500, 1000, 2000, 5000, 10000, 20000] # 0.5ms to 20ms\n", + " analog_gains = [2.5, 5.0, 10.0, 16.0] # Start with 1x gain\n", + "\n", + " test_count = 0\n", + " for exp_time in exposure_times:\n", + " for gain in analog_gains:\n", + " # Clamp values to valid ranges\n", + " exp_time = max(exp_min, min(exp_max, exp_time))\n", + " gain = max(gain_min, min(gain_max, gain))\n", + "\n", + " print(f\"\\nTest {test_count + 1}: Exposure={exp_time/1000:.1f}ms, Gain={gain:.1f}x\")\n", + "\n", + " # Set camera parameters\n", + " mvsdk.CameraSetExposureTime(hCamera, exp_time)\n", + " try:\n", + " mvsdk.CameraSetAnalogGainX(hCamera, gain)\n", + " except:\n", + " pass # Some cameras might not support this\n", + "\n", + " # Wait a moment for settings to take effect\n", + " import time\n", + "\n", + " time.sleep(0.1)\n", + "\n", + " # Capture image\n", + " try:\n", + 
" pRawData, FrameHead = mvsdk.CameraGetImageBuffer(hCamera, 2000)\n", + " mvsdk.CameraImageProcess(hCamera, pRawData, pFrameBuffer, FrameHead)\n", + " mvsdk.CameraReleaseImageBuffer(hCamera, pRawData)\n", + "\n", + " # Handle Windows image flip\n", + " if platform.system() == \"Windows\":\n", + " mvsdk.CameraFlipFrameBuffer(pFrameBuffer, FrameHead, 1)\n", + "\n", + " # Convert to numpy array\n", + " frame_data = (mvsdk.c_ubyte * FrameHead.uBytes).from_address(pFrameBuffer)\n", + " frame = np.frombuffer(frame_data, dtype=np.uint8)\n", + "\n", + " if FrameHead.uiMediaType == mvsdk.CAMERA_MEDIA_TYPE_MONO8:\n", + " frame = frame.reshape((FrameHead.iHeight, FrameHead.iWidth))\n", + " else:\n", + " frame = frame.reshape((FrameHead.iHeight, FrameHead.iWidth, 3))\n", + "\n", + " # Calculate image statistics\n", + " mean_brightness = np.mean(frame)\n", + " max_brightness = np.max(frame)\n", + "\n", + " # Save image\n", + " filename = f\"exposure_tests/test_{test_count+1:02d}_exp{exp_time/1000:.1f}ms_gain{gain:.1f}x.jpg\"\n", + " cv2.imwrite(filename, frame)\n", + "\n", + " # Provide feedback\n", + " status = \"\"\n", + " if mean_brightness < 50:\n", + " status = \"TOO DARK\"\n", + " elif mean_brightness > 200:\n", + " status = \"TOO BRIGHT\"\n", + " elif max_brightness >= 255:\n", + " status = \"OVEREXPOSED\"\n", + " else:\n", + " status = \"GOOD\"\n", + "\n", + " print(f\" → Saved: {filename}\")\n", + " print(f\" → Brightness: mean={mean_brightness:.1f}, max={max_brightness:.1f} [{status}]\")\n", + "\n", + " test_count += 1\n", + "\n", + " except mvsdk.CameraException as e:\n", + " print(f\" → Failed to capture: {e.message}\")\n", + "\n", + " print(f\"\\nCompleted {test_count} test captures!\")\n", + " print(\"Check the 'exposure_tests' directory to see the results.\")\n", + " print(\"\\nRecommendations:\")\n", + " print(\"- Look for images marked as 'GOOD' - these have optimal exposure\")\n", + " print(\"- If all images are 'TOO BRIGHT', try lower exposure times or 
gains\")\n", + " print(\"- If all images are 'TOO DARK', try higher exposure times or gains\")\n", + " print(\"- Avoid 'OVEREXPOSED' images as they have clipped highlights\")\n", + "\n", + " # Cleanup\n", + " mvsdk.CameraAlignFree(pFrameBuffer)\n", + "\n", + " finally:\n", + " # Close camera\n", + " mvsdk.CameraUnInit(hCamera)\n", + " print(\"\\nCamera closed\")\n", + "\n", + " return True" + ] + }, + { + "cell_type": "code", + "execution_count": 27, + "id": "2891b5bf", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "GigE Camera Exposure Test Script\n", + "========================================\n", + "This script will test different exposure settings and save sample images.\n", + "Use this to find the optimal settings for your lighting conditions.\n", + "\n", + "SDK initialized successfully\n", + "Found 2 camera(s):\n", + " 0: Blower-Yield-Cam (NET-100M-192.168.1.204)\n", + " 1: Cracker-Cam (NET-1000M-192.168.1.246)\n", + "\n", + "Selected camera: Blower-Yield-Cam\n", + "Camera initialized successfully\n", + "Camera type: Color\n", + "Exposure time range: 8.0 - 1048568.0 μs\n", + "Analog gain range: 2.50 - 16.50x\n", + "whatever this is: (2.5, 16.5, 0.5)\n", + "\n", + "Testing different exposure settings...\n", + "==================================================\n", + "\n", + "Test 1: Exposure=0.1ms, Gain=2.5x\n", + " → Saved: exposure_tests/test_01_exp0.1ms_gain2.5x.jpg\n", + " → Brightness: mean=94.1, max=255.0 [OVEREXPOSED]\n", + "\n", + "Test 2: Exposure=0.1ms, Gain=5.0x\n", + " → Saved: exposure_tests/test_02_exp0.1ms_gain5.0x.jpg\n", + " → Brightness: mean=13.7, max=173.0 [TOO DARK]\n", + "\n", + "Test 3: Exposure=0.1ms, Gain=10.0x\n", + " → Saved: exposure_tests/test_03_exp0.1ms_gain10.0x.jpg\n", + " → Brightness: mean=14.1, max=255.0 [TOO DARK]\n", + "\n", + "Test 4: Exposure=0.1ms, Gain=16.0x\n", + " → Saved: exposure_tests/test_04_exp0.1ms_gain16.0x.jpg\n", + " → Brightness: mean=18.2, max=255.0 [TOO 
DARK]\n", + "\n", + "Test 5: Exposure=0.2ms, Gain=2.5x\n", + " → Saved: exposure_tests/test_05_exp0.2ms_gain2.5x.jpg\n", + " → Brightness: mean=22.1, max=255.0 [TOO DARK]\n", + "\n", + "Test 6: Exposure=0.2ms, Gain=5.0x\n", + " → Saved: exposure_tests/test_06_exp0.2ms_gain5.0x.jpg\n", + " → Brightness: mean=19.5, max=255.0 [TOO DARK]\n", + "\n", + "Test 7: Exposure=0.2ms, Gain=10.0x\n", + " → Saved: exposure_tests/test_07_exp0.2ms_gain10.0x.jpg\n", + " → Brightness: mean=25.3, max=255.0 [TOO DARK]\n", + "\n", + "Test 8: Exposure=0.2ms, Gain=16.0x\n", + " → Saved: exposure_tests/test_08_exp0.2ms_gain16.0x.jpg\n", + " → Brightness: mean=36.6, max=255.0 [TOO DARK]\n", + "\n", + "Test 9: Exposure=0.5ms, Gain=2.5x\n", + " → Saved: exposure_tests/test_09_exp0.5ms_gain2.5x.jpg\n", + " → Brightness: mean=55.8, max=255.0 [OVEREXPOSED]\n", + "\n", + "Test 10: Exposure=0.5ms, Gain=5.0x\n", + " → Saved: exposure_tests/test_10_exp0.5ms_gain5.0x.jpg\n", + " → Brightness: mean=38.5, max=255.0 [TOO DARK]\n", + "\n", + "Test 11: Exposure=0.5ms, Gain=10.0x\n", + " → Saved: exposure_tests/test_11_exp0.5ms_gain10.0x.jpg\n", + " → Brightness: mean=60.2, max=255.0 [OVEREXPOSED]\n", + "\n", + "Test 12: Exposure=0.5ms, Gain=16.0x\n", + " → Saved: exposure_tests/test_12_exp0.5ms_gain16.0x.jpg\n", + " → Brightness: mean=99.3, max=255.0 [OVEREXPOSED]\n", + "\n", + "Test 13: Exposure=1.0ms, Gain=2.5x\n", + " → Saved: exposure_tests/test_13_exp1.0ms_gain2.5x.jpg\n", + " → Brightness: mean=121.1, max=255.0 [OVEREXPOSED]\n", + "\n", + "Test 14: Exposure=1.0ms, Gain=5.0x\n", + " → Saved: exposure_tests/test_14_exp1.0ms_gain5.0x.jpg\n", + " → Brightness: mean=68.8, max=255.0 [OVEREXPOSED]\n", + "\n", + "Test 15: Exposure=1.0ms, Gain=10.0x\n", + " → Saved: exposure_tests/test_15_exp1.0ms_gain10.0x.jpg\n", + " → Brightness: mean=109.6, max=255.0 [OVEREXPOSED]\n", + "\n", + "Test 16: Exposure=1.0ms, Gain=16.0x\n", + " → Saved: exposure_tests/test_16_exp1.0ms_gain16.0x.jpg\n", + " → Brightness: 
mean=148.7, max=255.0 [OVEREXPOSED]\n", + "\n", + "Test 17: Exposure=2.0ms, Gain=2.5x\n", + " → Saved: exposure_tests/test_17_exp2.0ms_gain2.5x.jpg\n", + " → Brightness: mean=171.9, max=255.0 [OVEREXPOSED]\n", + "\n", + "Test 18: Exposure=2.0ms, Gain=5.0x\n", + " → Saved: exposure_tests/test_18_exp2.0ms_gain5.0x.jpg\n", + " → Brightness: mean=117.9, max=255.0 [OVEREXPOSED]\n", + "\n", + "Test 19: Exposure=2.0ms, Gain=10.0x\n", + " → Saved: exposure_tests/test_19_exp2.0ms_gain10.0x.jpg\n", + " → Brightness: mean=159.0, max=255.0 [OVEREXPOSED]\n", + "\n", + "Test 20: Exposure=2.0ms, Gain=16.0x\n", + " → Saved: exposure_tests/test_20_exp2.0ms_gain16.0x.jpg\n", + " → Brightness: mean=195.7, max=255.0 [OVEREXPOSED]\n", + "\n", + "Test 21: Exposure=5.0ms, Gain=2.5x\n", + " → Saved: exposure_tests/test_21_exp5.0ms_gain2.5x.jpg\n", + " → Brightness: mean=214.6, max=255.0 [TOO BRIGHT]\n", + "\n", + "Test 22: Exposure=5.0ms, Gain=5.0x\n", + " → Saved: exposure_tests/test_22_exp5.0ms_gain5.0x.jpg\n", + " → Brightness: mean=180.2, max=255.0 [OVEREXPOSED]\n", + "\n", + "Test 23: Exposure=5.0ms, Gain=10.0x\n", + " → Saved: exposure_tests/test_23_exp5.0ms_gain10.0x.jpg\n", + " → Brightness: mean=214.6, max=255.0 [TOO BRIGHT]\n", + "\n", + "Test 24: Exposure=5.0ms, Gain=16.0x\n", + " → Saved: exposure_tests/test_24_exp5.0ms_gain16.0x.jpg\n", + " → Brightness: mean=239.6, max=255.0 [TOO BRIGHT]\n", + "\n", + "Test 25: Exposure=10.0ms, Gain=2.5x\n", + " → Saved: exposure_tests/test_25_exp10.0ms_gain2.5x.jpg\n", + " → Brightness: mean=247.5, max=255.0 [TOO BRIGHT]\n", + "\n", + "Test 26: Exposure=10.0ms, Gain=5.0x\n", + " → Saved: exposure_tests/test_26_exp10.0ms_gain5.0x.jpg\n", + " → Brightness: mean=252.4, max=255.0 [TOO BRIGHT]\n", + "\n", + "Test 27: Exposure=10.0ms, Gain=10.0x\n", + " → Saved: exposure_tests/test_27_exp10.0ms_gain10.0x.jpg\n", + " → Brightness: mean=218.9, max=255.0 [TOO BRIGHT]\n", + "\n", + "Test 28: Exposure=10.0ms, Gain=16.0x\n", + " → Saved: 
exposure_tests/test_28_exp10.0ms_gain16.0x.jpg\n", + " → Brightness: mean=250.8, max=255.0 [TOO BRIGHT]\n", + "\n", + "Test 29: Exposure=20.0ms, Gain=2.5x\n", + " → Saved: exposure_tests/test_29_exp20.0ms_gain2.5x.jpg\n", + " → Brightness: mean=252.4, max=255.0 [TOO BRIGHT]\n", + "\n", + "Test 30: Exposure=20.0ms, Gain=5.0x\n", + " → Saved: exposure_tests/test_30_exp20.0ms_gain5.0x.jpg\n", + " → Brightness: mean=244.4, max=255.0 [TOO BRIGHT]\n", + "\n", + "Test 31: Exposure=20.0ms, Gain=10.0x\n", + " → Saved: exposure_tests/test_31_exp20.0ms_gain10.0x.jpg\n", + " → Brightness: mean=251.5, max=255.0 [TOO BRIGHT]\n", + "\n", + "Test 32: Exposure=20.0ms, Gain=16.0x\n", + " → Saved: exposure_tests/test_32_exp20.0ms_gain16.0x.jpg\n", + " → Brightness: mean=253.4, max=255.0 [TOO BRIGHT]\n", + "\n", + "Completed 32 test captures!\n", + "Check the 'exposure_tests' directory to see the results.\n", + "\n", + "Recommendations:\n", + "- Look for images marked as 'GOOD' - these have optimal exposure\n", + "- If all images are 'TOO BRIGHT', try lower exposure times or gains\n", + "- If all images are 'TOO DARK', try higher exposure times or gains\n", + "- Avoid 'OVEREXPOSED' images as they have clipped highlights\n", + "\n", + "Camera closed\n", + "\n", + "Testing completed successfully!\n" + ] + } + ], + "source": [ + "\n", + "\n", + "if __name__ == \"__main__\":\n", + " print(\"GigE Camera Exposure Test Script\")\n", + " print(\"=\" * 40)\n", + " print(\"This script will test different exposure settings and save sample images.\")\n", + " print(\"Use this to find the optimal settings for your lighting conditions.\")\n", + " print()\n", + "\n", + " success = test_exposure_settings()\n", + "\n", + " if success:\n", + " print(\"\\nTesting completed successfully!\")\n", + " else:\n", + " print(\"\\nTesting failed!\")\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "ead8d889", + "metadata": {}, + "outputs": [], + "source": [] + } + ], + "metadata": { + 
"kernelspec": { + "display_name": "cc_pecan", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.13.5" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} diff --git a/api/ai_agent/examples/notebooks/gige_camera_advanced.ipynb b/api/ai_agent/examples/notebooks/gige_camera_advanced.ipynb new file mode 100644 index 0000000..d4c7525 --- /dev/null +++ b/api/ai_agent/examples/notebooks/gige_camera_advanced.ipynb @@ -0,0 +1,385 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Advanced GigE Camera Configuration\n", + "\n", + "This notebook provides advanced testing and configuration for GigE cameras.\n", + "\n", + "## Features:\n", + "- Network interface detection\n", + "- GigE camera discovery\n", + "- Camera parameter configuration\n", + "- Performance testing\n", + "- Dual camera synchronization testing" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import cv2\n", + "import numpy as np\n", + "import matplotlib.pyplot as plt\n", + "import subprocess\n", + "import socket\n", + "import threading\n", + "import time\n", + "from datetime import datetime\n", + "import os\n", + "from pathlib import Path\n", + "import json\n", + "\n", + "print(\"✅ Imports successful!\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Network Interface Detection" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "def get_network_interfaces():\n", + " \"\"\"Get network interface information\"\"\"\n", + " try:\n", + " result = subprocess.run(['ip', 'addr', 'show'], capture_output=True, text=True)\n", + " print(\"🌐 Network Interfaces:\")\n", + " 
print(result.stdout)\n", + " \n", + " # Also check for GigE specific interfaces\n", + " result2 = subprocess.run(['ifconfig'], capture_output=True, text=True)\n", + " if result2.returncode == 0:\n", + " print(\"\\n📡 Interface Configuration:\")\n", + " print(result2.stdout)\n", + " except Exception as e:\n", + " print(f\"❌ Error getting network info: {e}\")\n", + "\n", + "get_network_interfaces()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## GigE Camera Discovery" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "def discover_gige_cameras():\n", + " \"\"\"Attempt to discover GigE cameras on the network\"\"\"\n", + " print(\"🔍 Discovering GigE cameras...\")\n", + " \n", + " # Try different methods to find GigE cameras\n", + " methods = [\n", + " \"OpenCV with different backends\",\n", + " \"Network scanning\",\n", + " \"GStreamer pipeline testing\"\n", + " ]\n", + " \n", + " print(\"\\n1. Testing OpenCV backends:\")\n", + " backends = [\n", + " (cv2.CAP_GSTREAMER, \"GStreamer\"),\n", + " (cv2.CAP_V4L2, \"V4L2\"),\n", + " (cv2.CAP_FFMPEG, \"FFmpeg\"),\n", + " (cv2.CAP_ANY, \"Default\")\n", + " ]\n", + " \n", + " for backend_id, backend_name in backends:\n", + " print(f\" Testing {backend_name}...\")\n", + " for cam_id in range(5):\n", + " try:\n", + " cap = cv2.VideoCapture(cam_id, backend_id)\n", + " if cap.isOpened():\n", + " ret, frame = cap.read()\n", + " if ret:\n", + " print(f\" ✅ Camera {cam_id} accessible via {backend_name}\")\n", + " print(f\" Resolution: {frame.shape[1]}x{frame.shape[0]}\")\n", + " cap.release()\n", + " except Exception as e:\n", + " pass\n", + " \n", + " print(\"\\n2. Testing GStreamer pipelines:\")\n", + " # Common GigE camera GStreamer pipelines\n", + " gstreamer_pipelines = [\n", + " \"v4l2src device=/dev/video0 ! videoconvert ! appsink\",\n", + " \"v4l2src device=/dev/video1 ! videoconvert ! appsink\",\n", + " \"tcambin ! videoconvert ! 
appsink\", # For TIS cameras\n", + " \"aravissrc ! videoconvert ! appsink\", # For Aravis-supported cameras\n", + " ]\n", + " \n", + " for pipeline in gstreamer_pipelines:\n", + " try:\n", + " print(f\" Testing: {pipeline}\")\n", + " cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)\n", + " if cap.isOpened():\n", + " ret, frame = cap.read()\n", + " if ret:\n", + " print(f\" ✅ Pipeline works! Frame shape: {frame.shape}\")\n", + " else:\n", + " print(f\" ⚠️ Pipeline opened but no frames\")\n", + " else:\n", + " print(f\" ❌ Pipeline failed\")\n", + " cap.release()\n", + " except Exception as e:\n", + " print(f\" ❌ Error: {e}\")\n", + "\n", + "discover_gige_cameras()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Camera Parameter Configuration" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "def configure_camera_parameters(camera_id, backend=cv2.CAP_ANY):\n", + " \"\"\"Configure and test camera parameters\"\"\"\n", + " print(f\"⚙️ Configuring camera {camera_id}...\")\n", + " \n", + " cap = cv2.VideoCapture(camera_id, backend)\n", + " if not cap.isOpened():\n", + " print(f\"❌ Cannot open camera {camera_id}\")\n", + " return None\n", + " \n", + " # Get current parameters\n", + " current_params = {\n", + " 'width': cap.get(cv2.CAP_PROP_FRAME_WIDTH),\n", + " 'height': cap.get(cv2.CAP_PROP_FRAME_HEIGHT),\n", + " 'fps': cap.get(cv2.CAP_PROP_FPS),\n", + " 'brightness': cap.get(cv2.CAP_PROP_BRIGHTNESS),\n", + " 'contrast': cap.get(cv2.CAP_PROP_CONTRAST),\n", + " 'saturation': cap.get(cv2.CAP_PROP_SATURATION),\n", + " 'hue': cap.get(cv2.CAP_PROP_HUE),\n", + " 'gain': cap.get(cv2.CAP_PROP_GAIN),\n", + " 'exposure': cap.get(cv2.CAP_PROP_EXPOSURE),\n", + " 'auto_exposure': cap.get(cv2.CAP_PROP_AUTO_EXPOSURE),\n", + " 'white_balance': cap.get(cv2.CAP_PROP_WHITE_BALANCE_BLUE_U),\n", + " }\n", + " \n", + " print(\"📊 Current Camera Parameters:\")\n", + " for param, value in 
current_params.items():\n", + " print(f\" {param}: {value}\")\n", + " \n", + " # Test setting some parameters\n", + " print(\"\\n🔧 Testing parameter changes:\")\n", + " \n", + " # Try to set resolution (common GigE resolutions)\n", + " test_resolutions = [(1920, 1080), (1280, 720), (640, 480)]\n", + " for width, height in test_resolutions:\n", + " if cap.set(cv2.CAP_PROP_FRAME_WIDTH, width) and cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height):\n", + " actual_width = cap.get(cv2.CAP_PROP_FRAME_WIDTH)\n", + " actual_height = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)\n", + " print(f\" Resolution {width}x{height}: Set to {actual_width}x{actual_height}\")\n", + " break\n", + " \n", + " # Test FPS settings\n", + " for fps in [30, 60, 120]:\n", + " if cap.set(cv2.CAP_PROP_FPS, fps):\n", + " actual_fps = cap.get(cv2.CAP_PROP_FPS)\n", + " print(f\" FPS {fps}: Set to {actual_fps}\")\n", + " break\n", + " \n", + " # Capture test frame with new settings\n", + " ret, frame = cap.read()\n", + " if ret:\n", + " print(f\"\\n✅ Test frame captured: {frame.shape}\")\n", + " \n", + " # Display frame\n", + " plt.figure(figsize=(10, 6))\n", + " if len(frame.shape) == 3:\n", + " plt.imshow(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))\n", + " else:\n", + " plt.imshow(frame, cmap='gray')\n", + " plt.title(f\"Camera {camera_id} - Configured\")\n", + " plt.axis('off')\n", + " plt.show()\n", + " \n", + " # Save configuration and test image\n", + " timestamp = datetime.now().strftime(\"%Y%m%d_%H%M%S\")\n", + " \n", + " # Save image\n", + " img_path = f\"/storage/camera{camera_id}/configured_test_{timestamp}.jpg\"\n", + " cv2.imwrite(img_path, frame)\n", + " print(f\"💾 Test image saved: {img_path}\")\n", + " \n", + " # Save configuration\n", + " config_path = f\"/storage/camera{camera_id}/config_{timestamp}.json\"\n", + " with open(config_path, 'w') as f:\n", + " json.dump(current_params, f, indent=2)\n", + " print(f\"💾 Configuration saved: {config_path}\")\n", + " \n", + " cap.release()\n", + " return 
current_params\n", + "\n", + "# Test configuration (change camera_id as needed)\n", + "camera_to_configure = 0\n", + "config = configure_camera_parameters(camera_to_configure)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Dual Camera Testing" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "def test_dual_cameras(camera1_id=0, camera2_id=1, duration=5):\n", + " \"\"\"Test simultaneous capture from two cameras\"\"\"\n", + " print(f\"📷📷 Testing dual camera capture (cameras {camera1_id} and {camera2_id})...\")\n", + " \n", + " # Open both cameras\n", + " cap1 = cv2.VideoCapture(camera1_id)\n", + " cap2 = cv2.VideoCapture(camera2_id)\n", + " \n", + " if not cap1.isOpened():\n", + " print(f\"❌ Cannot open camera {camera1_id}\")\n", + " return\n", + " \n", + " if not cap2.isOpened():\n", + " print(f\"❌ Cannot open camera {camera2_id}\")\n", + " cap1.release()\n", + " return\n", + " \n", + " print(\"✅ Both cameras opened successfully\")\n", + " \n", + " # Capture test frames\n", + " ret1, frame1 = cap1.read()\n", + " ret2, frame2 = cap2.read()\n", + " \n", + " if ret1 and ret2:\n", + " print(f\"📊 Camera {camera1_id}: {frame1.shape}\")\n", + " print(f\"📊 Camera {camera2_id}: {frame2.shape}\")\n", + " \n", + " # Display both frames side by side\n", + " fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(15, 6))\n", + " \n", + " if len(frame1.shape) == 3:\n", + " ax1.imshow(cv2.cvtColor(frame1, cv2.COLOR_BGR2RGB))\n", + " else:\n", + " ax1.imshow(frame1, cmap='gray')\n", + " ax1.set_title(f\"Camera {camera1_id}\")\n", + " ax1.axis('off')\n", + " \n", + " if len(frame2.shape) == 3:\n", + " ax2.imshow(cv2.cvtColor(frame2, cv2.COLOR_BGR2RGB))\n", + " else:\n", + " ax2.imshow(frame2, cmap='gray')\n", + " ax2.set_title(f\"Camera {camera2_id}\")\n", + " ax2.axis('off')\n", + " \n", + " plt.tight_layout()\n", + " plt.show()\n", + " \n", + " # Save test images\n", + " timestamp = 
datetime.now().strftime(\"%Y%m%d_%H%M%S\")\n", + " cv2.imwrite(f\"/storage/camera1/dual_test_{timestamp}.jpg\", frame1)\n", + " cv2.imwrite(f\"/storage/camera2/dual_test_{timestamp}.jpg\", frame2)\n", + " print(f\"💾 Dual camera test images saved with timestamp {timestamp}\")\n", + " \n", + " else:\n", + " print(\"❌ Failed to capture from one or both cameras\")\n", + " \n", + " # Test synchronized recording\n", + " print(f\"\\n🎥 Testing synchronized recording for {duration} seconds...\")\n", + " \n", + " # Setup video writers\n", + " timestamp = datetime.now().strftime(\"%Y%m%d_%H%M%S\")\n", + " \n", + " fourcc = cv2.VideoWriter_fourcc(*'mp4v')\n", + " fps = 30\n", + " \n", + " if ret1:\n", + " h1, w1 = frame1.shape[:2]\n", + " out1 = cv2.VideoWriter(f\"/storage/camera1/sync_test_{timestamp}.mp4\", fourcc, fps, (w1, h1))\n", + " \n", + " if ret2:\n", + " h2, w2 = frame2.shape[:2]\n", + " out2 = cv2.VideoWriter(f\"/storage/camera2/sync_test_{timestamp}.mp4\", fourcc, fps, (w2, h2))\n", + " \n", + " # Record synchronized video\n", + " start_time = time.time()\n", + " frame_count = 0\n", + " \n", + " while time.time() - start_time < duration:\n", + " ret1, frame1 = cap1.read()\n", + " ret2, frame2 = cap2.read()\n", + " \n", + " if ret1 and ret2:\n", + " out1.write(frame1)\n", + " out2.write(frame2)\n", + " frame_count += 1\n", + " else:\n", + " print(f\"⚠️ Frame drop at frame {frame_count}\")\n", + " \n", + " # Cleanup\n", + " cap1.release()\n", + " cap2.release()\n", + " if 'out1' in locals():\n", + " out1.release()\n", + " if 'out2' in locals():\n", + " out2.release()\n", + " \n", + " elapsed = time.time() - start_time\n", + " actual_fps = frame_count / elapsed\n", + " \n", + " print(f\"✅ Synchronized recording complete\")\n", + " print(f\"📊 Recorded {frame_count} frames in {elapsed:.2f}s\")\n", + " print(f\"📊 Actual FPS: {actual_fps:.2f}\")\n", + " print(f\"💾 Videos saved with timestamp {timestamp}\")\n", + "\n", + "# Test dual cameras (adjust camera IDs as 
needed)\n", + "test_dual_cameras(0, 1, duration=3)" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "usda-vision-cameras", + "language": "python", + "name": "usda-vision-cameras" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.0" + } + }, + "nbformat": 4, + "nbformat_minor": 4 +} diff --git a/api/ai_agent/examples/notebooks/mqtt test.ipynb b/api/ai_agent/examples/notebooks/mqtt test.ipynb new file mode 100644 index 0000000..6be4f7d --- /dev/null +++ b/api/ai_agent/examples/notebooks/mqtt test.ipynb @@ -0,0 +1,146 @@ +{ + "cells": [ + { + "cell_type": "code", + "execution_count": 1, + "id": "3b92c632", + "metadata": {}, + "outputs": [], + "source": [ + "import paho.mqtt.client as mqtt\n", + "import time\n" + ] + }, + { + "cell_type": "code", + "execution_count": 2, + "id": "a6753fb1", + "metadata": {}, + "outputs": [ + { + "name": "stderr", + "output_type": "stream", + "text": [ + "/tmp/ipykernel_2342/243927247.py:34: DeprecationWarning: Callback API version 1 is deprecated, update to latest version\n", + " client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION1) # Use VERSION1 for broader compatibility\n" + ] + }, + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Connecting to MQTT broker at 192.168.1.110:1883...\n", + "Successfully connected to MQTT Broker!\n", + "Subscribed to topic: 'vision/vibratory_conveyor/state'\n", + "Listening for messages... (Press Ctrl+C to stop)\n", + "\n", + "--- MQTT MESSAGE RECEIVED! ---\n", + " Topic: vision/vibratory_conveyor/state\n", + " Payload: on\n", + " Time: 2025-07-25 21:03:21\n", + "------------------------------\n", + "\n", + "\n", + "--- MQTT MESSAGE RECEIVED! 
---\n", + " Topic: vision/vibratory_conveyor/state\n", + " Payload: off\n", + " Time: 2025-07-25 21:05:26\n", + "------------------------------\n", + "\n", + "\n", + "Stopping MQTT listener.\n" + ] + } + ], + "source": [ + "\n", + "# --- MQTT Broker Configuration ---\n", + "# Your Home Assistant's IP address (where your MQTT broker is running)\n", + "MQTT_BROKER_HOST = \"192.168.1.110\"\n", + "MQTT_BROKER_PORT = 1883\n", + "# IMPORTANT: Replace with your actual MQTT broker username and password if you have one set up\n", + "# (These are NOT your Home Assistant login credentials, but for the Mosquitto add-on, if used)\n", + "# MQTT_BROKER_USERNAME = \"pecan\" # e.g., \"homeassistant_mqtt_user\"\n", + "# MQTT_BROKER_PASSWORD = \"whatever\" # e.g., \"SuperSecurePassword123!\"\n", + "\n", + "# --- Topic to Subscribe To ---\n", + "# This MUST exactly match the topic you set in your Home Assistant automation\n", + "MQTT_TOPIC = \"vision/vibratory_conveyor/state\" # <<<< Make sure this is correct!\n", + "# MQTT_TOPIC = \"vision/blower_separator/state\" # <<<< Uncomment (and comment out the line above) to listen to the blower separator instead\n", + "\n", + "# The callback for when the client receives a CONNACK response from the server.\n", + "def on_connect(client, userdata, flags, rc):\n", + " if rc == 0:\n", + " print(\"Successfully connected to MQTT Broker!\")\n", + " client.subscribe(MQTT_TOPIC)\n", + " print(f\"Subscribed to topic: '{MQTT_TOPIC}'\")\n", + " print(\"Listening for messages... (Press Ctrl+C to stop)\")\n", + " else:\n", + " print(f\"Failed to connect, return code {rc}\\n\")\n", + "\n", + "# The callback for when a PUBLISH message is received from the server.\n", + "def on_message(client, userdata, msg):\n", + " received_payload = msg.payload.decode()\n", + " print(f\"\\n--- MQTT MESSAGE RECEIVED! 
---\")\n", + " print(f\" Topic: {msg.topic}\")\n", + " print(f\" Payload: {received_payload}\")\n", + " print(f\" Time: {time.strftime('%Y-%m-%d %H:%M:%S')}\")\n", + " print(f\"------------------------------\\n\")\n", + "\n", + "# Create an MQTT client instance\n", + "client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION1) # Use VERSION1 for broader compatibility\n", + "\n", + "# Set callback functions\n", + "client.on_connect = on_connect\n", + "client.on_message = on_message\n", + "\n", + "# Set username and password if required\n", + "# (Only uncomment and fill these if your MQTT broker requires authentication)\n", + "# client.username_pw_set(MQTT_BROKER_USERNAME, MQTT_BROKER_PASSWORD)\n", + "\n", + "try:\n", + " # Attempt to connect to the MQTT broker\n", + " print(f\"Connecting to MQTT broker at {MQTT_BROKER_HOST}:{MQTT_BROKER_PORT}...\")\n", + " client.connect(MQTT_BROKER_HOST, MQTT_BROKER_PORT, 60)\n", + "\n", + " # Start the MQTT loop. This runs in the background and processes messages.\n", + " client.loop_forever()\n", + "\n", + "except KeyboardInterrupt:\n", + " print(\"\\nStopping MQTT listener.\")\n", + " client.disconnect() # Disconnect gracefully\n", + "except Exception as e:\n", + " print(f\"An unexpected error occurred: {e}\")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "56531671", + "metadata": {}, + "outputs": [], + "source": [] + } + ], + "metadata": { + "kernelspec": { + "display_name": "USDA-vision-cameras", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.2" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} diff --git a/api/ai_agent/guides/AI_AGENT_INSTRUCTIONS.md b/api/ai_agent/guides/AI_AGENT_INSTRUCTIONS.md new file mode 100644 index 0000000..dedd89e --- /dev/null +++ 
b/api/ai_agent/guides/AI_AGENT_INSTRUCTIONS.md @@ -0,0 +1,175 @@ +# Instructions for AI Agent: Auto-Recording Feature Integration + +## 🎯 Task Overview +Update the React application to support the new auto-recording feature that has been added to the USDA Vision Camera System backend. + +## 📋 What You Need to Know + +### System Context +- **Camera 1** monitors the **vibratory conveyor** (conveyor/cracker cam) +- **Camera 2** monitors the **blower separator** machine +- Auto-recording automatically starts when machines turn ON and stops when they turn OFF +- The system includes retry logic for failed recording attempts +- Manual recording always takes precedence over auto-recording + +### New Backend Capabilities +The backend now supports: +1. **Automatic recording** triggered by MQTT machine state changes +2. **Retry mechanism** for failed recording attempts (configurable retries and delays) +3. **Status tracking** for auto-recording state, failures, and attempts +4. **API endpoints** for enabling/disabling and monitoring auto-recording + +## 🔧 Required React App Changes + +### 1. Update TypeScript Interfaces + +Add these new fields to existing `CameraStatusResponse`: +```typescript +interface CameraStatusResponse { + // ... existing fields + auto_recording_enabled: boolean; + auto_recording_active: boolean; + auto_recording_failure_count: number; + auto_recording_last_attempt?: string; + auto_recording_last_error?: string; +} +``` + +Add new response types: +```typescript +interface AutoRecordingConfigResponse { + success: boolean; + message: string; + camera_name: string; + enabled: boolean; +} + +interface AutoRecordingStatusResponse { + running: boolean; + auto_recording_enabled: boolean; + retry_queue: Record<string, unknown>; // per-camera retry info + enabled_cameras: string[]; +} +``` + +### 2. 
Add New API Endpoints + +```typescript +// Enable auto-recording for a camera +POST /cameras/{camera_name}/auto-recording/enable + +// Disable auto-recording for a camera +POST /cameras/{camera_name}/auto-recording/disable + +// Get overall auto-recording system status +GET /auto-recording/status +``` + +### 3. UI Components to Add/Update + +#### Camera Status Display +- Add auto-recording status badge/indicator +- Show auto-recording enabled/disabled state +- Display failure count if > 0 +- Show last error message if any +- Distinguish between manual and auto-recording states + +#### Auto-Recording Controls +- Toggle switch to enable/disable auto-recording per camera +- System-wide auto-recording status display +- Retry queue information +- Machine state correlation display + +#### Error Handling +- Clear display of auto-recording failures +- Retry attempt information +- Last attempt timestamp +- Quick retry/reset actions + +### 4. Visual Design Guidelines + +**Status Priority (highest to lowest):** +1. Manual Recording (red/prominent) - user initiated +2. Auto-Recording Active (green) - machine ON, recording +3. Auto-Recording Enabled (blue) - ready but machine OFF +4. 
Auto-Recording Disabled (gray) - feature disabled + +**Machine Correlation:** +- Show machine name next to camera (e.g., "Vibratory Conveyor", "Blower Separator") +- Display machine ON/OFF status +- Alert if machine is ON but auto-recording failed + +## 🎨 Specific Implementation Tasks + +### Task 1: Update Camera Cards +- Add auto-recording status indicators +- Add enable/disable toggle controls +- Show machine state correlation +- Display failure information when relevant + +### Task 2: Create Auto-Recording Dashboard +- Overall system status +- List of enabled cameras +- Active retry queue display +- Recent events/errors + +### Task 3: Update Recording Status Logic +- Distinguish between manual and auto-recording +- Show appropriate controls based on recording type +- Handle manual override scenarios + +### Task 4: Add Error Handling +- Display auto-recording failures clearly +- Show retry attempts and timing +- Provide manual retry options + +## 📱 User Experience Requirements + +### Key Behaviors +1. **Non-Intrusive:** Auto-recording status shouldn't clutter the main interface +2. **Clear Hierarchy:** Manual controls should be more prominent than auto-recording +3. **Informative:** Users should understand why recording started/stopped +4. 
**Actionable:** Clear options to enable/disable or retry failed attempts + +### Mobile Considerations +- Auto-recording controls should work well on mobile +- Status information should be readable on small screens +- Consider collapsible sections for detailed information + +## 🔍 Testing Requirements + +Ensure the React app correctly handles: +- [ ] Toggling auto-recording on/off per camera +- [ ] Displaying real-time status updates +- [ ] Showing error states and retry information +- [ ] Manual recording override scenarios +- [ ] Machine state changes and correlation +- [ ] Mobile interface functionality + +## 📚 Reference Files + +Key files to review for implementation details: +- `AUTO_RECORDING_FEATURE_GUIDE.md` - Comprehensive technical details +- `api-endpoints.http` - API endpoint documentation +- `config.json` - Configuration structure +- `usda_vision_system/api/models.py` - Response type definitions + +## 🎯 Success Criteria + +The React app should: +1. **Display** auto-recording status for each camera clearly +2. **Allow** users to enable/disable auto-recording per camera +3. **Show** machine state correlation and recording triggers +4. **Handle** error states and retry scenarios gracefully +5. **Maintain** existing manual recording functionality +6. **Provide** clear visual hierarchy between manual and auto-recording + +## 💡 Implementation Tips + +1. **Start Small:** Begin with basic status display, then add controls +2. **Use Existing Patterns:** Follow the current app's design patterns +3. **Test Incrementally:** Test each feature as you add it +4. **Consider State Management:** Update your state management to handle new data +5. **Mobile First:** Ensure mobile usability from the start + +The goal is to seamlessly integrate auto-recording capabilities while maintaining the existing user experience and adding valuable automation features for the camera operators. 
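
As a concrete starting point, the enable/disable endpoints listed above can be wrapped in a small client helper. This is only a sketch — the `autoRecordingUrl` and `toggleAutoRecording` names and the injectable `fetchFn` parameter are illustrative choices, not part of the backend API:

```typescript
// Sketch of a client helper for the new auto-recording endpoints.
// NOTE: helper names and the injectable `fetchFn` parameter are illustrative.
interface AutoRecordingConfigResponse {
  success: boolean;
  message: string;
  camera_name: string;
  enabled: boolean;
}

type FetchLike = (
  url: string,
  init?: { method?: string },
) => Promise<{ ok: boolean; json: () => Promise<AutoRecordingConfigResponse> }>;

// Build the enable/disable URL exactly as documented above.
function autoRecordingUrl(baseUrl: string, camera: string, enable: boolean): string {
  return `${baseUrl}/cameras/${camera}/auto-recording/${enable ? "enable" : "disable"}`;
}

// POST to the endpoint; `fetchFn` is injectable so the helper can be
// unit-tested without a running camera server.
async function toggleAutoRecording(
  baseUrl: string,
  camera: string,
  enable: boolean,
  fetchFn: FetchLike = fetch as unknown as FetchLike,
): Promise<AutoRecordingConfigResponse> {
  const response = await fetchFn(autoRecordingUrl(baseUrl, camera, enable), { method: "POST" });
  if (!response.ok) {
    throw new Error(`Failed to ${enable ? "enable" : "disable"} auto-recording for ${camera}`);
  }
  return response.json();
}
```

Keeping the URL construction in its own function makes the toggle switch trivial to test and keeps the base URL configurable per environment.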
diff --git a/api/ai_agent/guides/AI_INTEGRATION_GUIDE.md b/api/ai_agent/guides/AI_INTEGRATION_GUIDE.md new file mode 100644 index 0000000..9d881ee --- /dev/null +++ b/api/ai_agent/guides/AI_INTEGRATION_GUIDE.md @@ -0,0 +1,595 @@ +# 🤖 AI Integration Guide: USDA Vision Camera Streaming for React Projects + +This guide is specifically designed for AI assistants to understand and implement the USDA Vision Camera streaming functionality in React applications. + +## 📋 System Overview + +The USDA Vision Camera system provides live video streaming through REST API endpoints. The streaming uses MJPEG format which is natively supported by HTML `` tags and can be easily integrated into React components. + +### Key Characteristics: +- **Base URL**: `http://vision:8000` (production) or `http://localhost:8000` (development) +- **Stream Format**: MJPEG (Motion JPEG) +- **Content-Type**: `multipart/x-mixed-replace; boundary=frame` +- **Authentication**: None (add if needed for production) +- **CORS**: Enabled for all origins (configure for production) + +### Base URL Configuration: +- **Production**: `http://vision:8000` (requires hostname setup) +- **Development**: `http://localhost:8000` (local testing) +- **Custom IP**: `http://192.168.1.100:8000` (replace with actual IP) +- **Custom hostname**: Configure DNS or /etc/hosts as needed + +## 🔌 API Endpoints Reference + +### 1. Get Camera List +```http +GET /cameras +``` +**Response:** +```json +{ + "camera1": { + "name": "camera1", + "status": "connected", + "is_recording": false, + "last_checked": "2025-01-28T10:30:00", + "device_info": {...} + }, + "camera2": {...} +} +``` + +### 2. Start Camera Stream +```http +POST /cameras/{camera_name}/start-stream +``` +**Response:** +```json +{ + "success": true, + "message": "Started streaming for camera camera1" +} +``` + +### 3. 
Stop Camera Stream +```http +POST /cameras/{camera_name}/stop-stream +``` +**Response:** +```json +{ + "success": true, + "message": "Stopped streaming for camera camera1" +} +``` + +### 4. Live Video Stream +```http +GET /cameras/{camera_name}/stream +``` +**Response:** MJPEG video stream +**Usage:** Set as `src` attribute of HTML `` element + +## ⚛️ React Integration Examples + +### Basic Camera Stream Component + +```jsx +import React, { useState, useEffect } from 'react'; + +const CameraStream = ({ cameraName, apiBaseUrl = 'http://vision:8000' }) => { + const [isStreaming, setIsStreaming] = useState(false); + const [error, setError] = useState(null); + const [loading, setLoading] = useState(false); + + const startStream = async () => { + setLoading(true); + setError(null); + + try { + const response = await fetch(`${apiBaseUrl}/cameras/${cameraName}/start-stream`, { + method: 'POST', + headers: { + 'Content-Type': 'application/json', + }, + }); + + if (response.ok) { + setIsStreaming(true); + } else { + const errorData = await response.json(); + setError(errorData.detail || 'Failed to start stream'); + } + } catch (err) { + setError(`Network error: ${err.message}`); + } finally { + setLoading(false); + } + }; + + const stopStream = async () => { + setLoading(true); + + try { + const response = await fetch(`${apiBaseUrl}/cameras/${cameraName}/stop-stream`, { + method: 'POST', + headers: { + 'Content-Type': 'application/json', + }, + }); + + if (response.ok) { + setIsStreaming(false); + } else { + const errorData = await response.json(); + setError(errorData.detail || 'Failed to stop stream'); + } + } catch (err) { + setError(`Network error: ${err.message}`); + } finally { + setLoading(false); + } + }; + + return ( +
    <div style={{ border: '1px solid #ccc', borderRadius: '8px', padding: '16px' }}>
      <h3>Camera: {cameraName}</h3>

      {/* Video Stream */}
      <div style={{ marginBottom: '12px' }}>
        {isStreaming ? (
          <img
            src={`${apiBaseUrl}/cameras/${cameraName}/stream?t=${Date.now()}`}
            alt={`${cameraName} stream`}
            style={{ maxWidth: '100%' }}
            onError={() => setError('Stream connection lost')}
          />
        ) : (
          <div style={{ background: '#f0f0f0', padding: '48px', textAlign: 'center' }}>
            No Stream Active
          </div>
        )}
      </div>

      {/* Controls */}
      <div>
        <button onClick={startStream} disabled={loading || isStreaming}>
          Start Stream
        </button>
        <button onClick={stopStream} disabled={loading || !isStreaming}>
          Stop Stream
        </button>
      </div>

      {/* Error Display */}
      {error && (
        <div style={{ color: 'red', marginTop: '8px' }}>
          Error: {error}
        </div>
      )}
    </div>
+ ); +}; + +export default CameraStream; +``` + +### Multi-Camera Dashboard Component + +```jsx +import React, { useState, useEffect } from 'react'; +import CameraStream from './CameraStream'; + +const CameraDashboard = ({ apiBaseUrl = 'http://vision:8000' }) => { + const [cameras, setCameras] = useState({}); + const [loading, setLoading] = useState(true); + const [error, setError] = useState(null); + + useEffect(() => { + fetchCameras(); + + // Refresh camera status every 30 seconds + const interval = setInterval(fetchCameras, 30000); + return () => clearInterval(interval); + }, []); + + const fetchCameras = async () => { + try { + const response = await fetch(`${apiBaseUrl}/cameras`); + if (response.ok) { + const data = await response.json(); + setCameras(data); + setError(null); + } else { + setError('Failed to fetch cameras'); + } + } catch (err) { + setError(`Network error: ${err.message}`); + } finally { + setLoading(false); + } + }; + + if (loading) { + return
<div>Loading cameras...</div>;
  }

  if (error) {
    return (
      <div style={{ color: 'red' }}>
        Error: {error}
        <button onClick={fetchCameras}>Retry</button>
      </div>
    );
  }

  return (
    <div>
      <h2>USDA Vision Camera Dashboard</h2>

      <div style={{ display: 'grid', gridTemplateColumns: 'repeat(auto-fill, minmax(360px, 1fr))', gap: '16px' }}>
        {Object.entries(cameras).map(([cameraName, cameraInfo]) => (
          <div key={cameraName}>
            <CameraStream cameraName={cameraName} apiBaseUrl={apiBaseUrl} />

            {/* Camera Status */}
            <div>
              <div>Status: {cameraInfo.status}</div>
              <div>Recording: {cameraInfo.is_recording ? 'Yes' : 'No'}</div>
              <div>Last Checked: {new Date(cameraInfo.last_checked).toLocaleString()}</div>
            </div>
          </div>
        ))}
      </div>
    </div>
+ ); +}; + +export default CameraDashboard; +``` + +### Custom Hook for Camera Management + +```jsx +import { useState, useEffect, useCallback } from 'react'; + +const useCameraStream = (cameraName, apiBaseUrl = 'http://vision:8000') => { + const [isStreaming, setIsStreaming] = useState(false); + const [loading, setLoading] = useState(false); + const [error, setError] = useState(null); + + const startStream = useCallback(async () => { + setLoading(true); + setError(null); + + try { + const response = await fetch(`${apiBaseUrl}/cameras/${cameraName}/start-stream`, { + method: 'POST', + }); + + if (response.ok) { + setIsStreaming(true); + return { success: true }; + } else { + const errorData = await response.json(); + const errorMsg = errorData.detail || 'Failed to start stream'; + setError(errorMsg); + return { success: false, error: errorMsg }; + } + } catch (err) { + const errorMsg = `Network error: ${err.message}`; + setError(errorMsg); + return { success: false, error: errorMsg }; + } finally { + setLoading(false); + } + }, [cameraName, apiBaseUrl]); + + const stopStream = useCallback(async () => { + setLoading(true); + + try { + const response = await fetch(`${apiBaseUrl}/cameras/${cameraName}/stop-stream`, { + method: 'POST', + }); + + if (response.ok) { + setIsStreaming(false); + return { success: true }; + } else { + const errorData = await response.json(); + const errorMsg = errorData.detail || 'Failed to stop stream'; + setError(errorMsg); + return { success: false, error: errorMsg }; + } + } catch (err) { + const errorMsg = `Network error: ${err.message}`; + setError(errorMsg); + return { success: false, error: errorMsg }; + } finally { + setLoading(false); + } + }, [cameraName, apiBaseUrl]); + + const getStreamUrl = useCallback(() => { + return `${apiBaseUrl}/cameras/${cameraName}/stream?t=${Date.now()}`; + }, [cameraName, apiBaseUrl]); + + return { + isStreaming, + loading, + error, + startStream, + stopStream, + getStreamUrl, + }; +}; + +export 
default useCameraStream; +``` + +## 🎨 Styling with Tailwind CSS + +```jsx +const CameraStreamTailwind = ({ cameraName }) => { + const { isStreaming, loading, error, startStream, stopStream, getStreamUrl } = useCameraStream(cameraName); + + return ( +
    <div className="bg-white rounded-lg shadow-md p-4">
      <h3 className="text-lg font-semibold mb-2">Camera: {cameraName}</h3>

      {/* Stream Container */}
      <div className="mb-4">
        {isStreaming ? (
          /* The shared hook does not expose setError, so stream drops are logged instead */
          <img
            src={getStreamUrl()}
            alt={`${cameraName} stream`}
            className="w-full rounded"
            onError={() => console.error('Stream connection lost')}
          />
        ) : (
          <div className="flex items-center justify-center h-64 bg-gray-100 text-gray-500 rounded">
            No Stream Active
          </div>
        )}
      </div>

      {/* Controls */}
      <div className="flex gap-2">
        <button
          onClick={startStream}
          disabled={loading || isStreaming}
          className="px-4 py-2 bg-green-600 text-white rounded disabled:opacity-50"
        >
          Start Stream
        </button>
        <button
          onClick={stopStream}
          disabled={loading || !isStreaming}
          className="px-4 py-2 bg-red-600 text-white rounded disabled:opacity-50"
        >
          Stop Stream
        </button>
      </div>

      {/* Error Display */}
      {error && (
        <div className="mt-2 text-red-600">
          Error: {error}
        </div>
      )}
    </div>
+ ); +}; +``` + +## 🔧 Configuration Options + +### Environment Variables (.env) +```env +# Production configuration (using 'vision' hostname) +REACT_APP_CAMERA_API_URL=http://vision:8000 +REACT_APP_STREAM_REFRESH_INTERVAL=30000 +REACT_APP_STREAM_TIMEOUT=10000 + +# Development configuration (using localhost) +# REACT_APP_CAMERA_API_URL=http://localhost:8000 + +# Custom IP configuration +# REACT_APP_CAMERA_API_URL=http://192.168.1.100:8000 +``` + +### API Configuration +```javascript +const apiConfig = { + baseUrl: process.env.REACT_APP_CAMERA_API_URL || 'http://vision:8000', + timeout: parseInt(process.env.REACT_APP_STREAM_TIMEOUT) || 10000, + refreshInterval: parseInt(process.env.REACT_APP_STREAM_REFRESH_INTERVAL) || 30000, +}; +``` + +### Hostname Setup Guide +```bash +# Option 1: Add to /etc/hosts (Linux/Mac) +echo "127.0.0.1 vision" | sudo tee -a /etc/hosts + +# Option 2: Add to hosts file (Windows) +# Add to C:\Windows\System32\drivers\etc\hosts: +# 127.0.0.1 vision + +# Option 3: Configure DNS +# Point 'vision' hostname to your server's IP address + +# Verify hostname resolution +ping vision +``` + +## 🚨 Important Implementation Notes + +### 1. MJPEG Stream Handling +- Use HTML `` tag with `src` pointing to stream endpoint +- Add timestamp query parameter to prevent caching: `?t=${Date.now()}` +- Handle `onError` event for connection issues + +### 2. Error Handling +- Network errors (fetch failures) +- HTTP errors (4xx, 5xx responses) +- Stream connection errors (img onError) +- Timeout handling for long requests + +### 3. Performance Considerations +- Streams consume bandwidth continuously +- Stop streams when components unmount +- Limit concurrent streams based on system capacity +- Consider lazy loading for multiple cameras + +### 4. 
State Management +- Track streaming state per camera +- Handle loading states during API calls +- Manage error states with user feedback +- Refresh camera list periodically + +## 📱 Mobile Considerations + +```jsx +// Responsive design for mobile +const mobileStyles = { + container: { + padding: '10px', + maxWidth: '100vw', + }, + stream: { + width: '100%', + maxWidth: '100vw', + height: 'auto', + }, + controls: { + display: 'flex', + flexDirection: 'column', + gap: '8px', + }, +}; +``` + +## 🧪 Testing Integration + +```javascript +// Test API connectivity +const testConnection = async () => { + try { + const response = await fetch(`${apiBaseUrl}/health`); + return response.ok; + } catch { + return false; + } +}; + +// Test camera availability +const testCamera = async (cameraName) => { + try { + const response = await fetch(`${apiBaseUrl}/cameras/${cameraName}/test-connection`, { + method: 'POST', + }); + return response.ok; + } catch { + return false; + } +}; +``` + +## 📁 Additional Files for AI Integration + +### TypeScript Definitions +- `camera-api.types.ts` - Complete TypeScript definitions for all API types +- `streaming-api.http` - REST Client file with all streaming endpoints +- `STREAMING_GUIDE.md` - Comprehensive user guide for streaming functionality + +### Quick Integration Checklist for AI Assistants + +1. **Copy TypeScript types** from `camera-api.types.ts` +2. **Use API endpoints** from `streaming-api.http` +3. **Implement error handling** as shown in examples +4. **Add CORS configuration** if needed for production +5. 
**Test with multiple cameras** using provided examples + +### Key Integration Points + +- **Stream URL Format**: `${baseUrl}/cameras/${cameraName}/stream?t=${Date.now()}` +- **Start Stream**: `POST /cameras/{name}/start-stream` +- **Stop Stream**: `POST /cameras/{name}/stop-stream` +- **Camera List**: `GET /cameras` +- **Error Handling**: Always wrap in try-catch blocks +- **Loading States**: Implement for better UX + +### Production Considerations + +- Configure CORS for specific origins +- Add authentication if required +- Implement rate limiting +- Monitor system resources with multiple streams +- Add reconnection logic for network issues + +This documentation provides everything an AI assistant needs to integrate the USDA Vision Camera streaming functionality into React applications, including complete code examples, error handling, and best practices. diff --git a/api/ai_agent/references/api-endpoints.http b/api/ai_agent/references/api-endpoints.http new file mode 100644 index 0000000..545fe39 --- /dev/null +++ b/api/ai_agent/references/api-endpoints.http @@ -0,0 +1,542 @@ +############################################################################### +# USDA Vision Camera System - Complete API Endpoints Documentation +# +# CONFIGURATION: +# - Default Base URL: http://localhost:8000 (local development) +# - Production Base URL: http://vision:8000 (when using hostname 'vision') +# - Custom hostname: Update @baseUrl variable below +# +# HOSTNAME SETUP: +# To use 'vision' hostname instead of 'localhost': +# 1. Add to /etc/hosts: 127.0.0.1 vision +# 2. Or configure DNS to point 'vision' to the server IP +# 3. 
Update camera_preview.html: API_BASE = 'http://vision:8000' +############################################################################### + +# Base URL Configuration - Change this to match your setup +@baseUrl = http://vision:8000 +# Alternative configurations: +# @baseUrl = http://localhost:8000 # Local development +# @baseUrl = http://192.168.1.100:8000 # Specific IP address +# @baseUrl = http://your-server:8000 # Custom hostname + +############################################################################### +# CONFIGURATION GUIDE +############################################################################### + +### HOSTNAME CONFIGURATION OPTIONS: + +# Option 1: Using 'vision' hostname (recommended for production) +# - Requires hostname resolution setup +# - Add to /etc/hosts: 127.0.0.1 vision +# - Or configure DNS: vision -> server IP address +# - Update camera_preview.html: API_BASE = 'http://vision:8000' +# - Set @baseUrl = http://vision:8000 + +# Option 2: Using localhost (development) +# - Works immediately on local machine +# - Set @baseUrl = http://localhost:8000 +# - Update camera_preview.html: API_BASE = 'http://localhost:8000' + +# Option 3: Using specific IP address +# - Replace with actual server IP +# - Set @baseUrl = http://192.168.1.100:8000 +# - Update camera_preview.html: API_BASE = 'http://192.168.1.100:8000' + +# Option 4: Custom hostname +# - Configure DNS or /etc/hosts for custom name +# - Set @baseUrl = http://your-custom-name:8000 +# - Update camera_preview.html: API_BASE = 'http://your-custom-name:8000' + +### NETWORK CONFIGURATION: +# - Default port: 8000 +# - CORS enabled for all origins (configure for production) +# - No authentication required (add if needed) + +### CLIENT CONFIGURATION FILES TO UPDATE: +# 1. camera_preview.html - Update API_BASE constant +# 2. React projects - Update apiConfig.baseUrl +# 3. This file - Update @baseUrl variable +# 4. 
Any custom scripts - Update base URL + +### TESTING CONNECTIVITY: +# Test if the API is reachable: +GET {{baseUrl}}/health + +############################################################################### +# SYSTEM ENDPOINTS +############################################################################### + +### Root endpoint - API information +GET {{baseUrl}}/ +# Response: SuccessResponse +# { +# "success": true, +# "message": "USDA Vision Camera System API", +# "data": null, +# "timestamp": "2025-07-28T12:00:00" +# } + +### + +### Health check +GET http://localhost:8000/health +# Response: Simple health status +# { +# "status": "healthy", +# "timestamp": "2025-07-28T12:00:00" +# } + +### + +### Get system status +GET http://localhost:8000/system/status +# Response: SystemStatusResponse +# { +# "system_started": true, +# "mqtt_connected": true, +# "last_mqtt_message": "2025-07-28T12:00:00", +# "machines": { +# "vibratory_conveyor": { +# "name": "vibratory_conveyor", +# "state": "off", +# "last_updated": "2025-07-28T12:00:00" +# } +# }, +# "cameras": { +# "camera1": { +# "name": "camera1", +# "status": "connected", +# "is_recording": false +# } +# }, +# "active_recordings": 0, +# "total_recordings": 5, +# "uptime_seconds": 3600.5 +# } + +############################################################################### +# MACHINE ENDPOINTS +############################################################################### + +### Get all machines status +GET http://localhost:8000/machines +# Response: Dict[str, MachineStatusResponse] +# { +# "vibratory_conveyor": { +# "name": "vibratory_conveyor", +# "state": "off", +# "last_updated": "2025-07-28T12:00:00", +# "last_message": "off", +# "mqtt_topic": "vision/vibratory_conveyor/state" +# }, +# "blower_separator": { +# "name": "blower_separator", +# "state": "on", +# "last_updated": "2025-07-28T12:00:00", +# "last_message": "on", +# "mqtt_topic": "vision/blower_separator/state" +# } +# } + 
+############################################################################### +# MQTT ENDPOINTS +############################################################################### + +### Get MQTT status and statistics +GET http://localhost:8000/mqtt/status +# Response: MQTTStatusResponse +# { +# "connected": true, +# "broker_host": "192.168.1.110", +# "broker_port": 1883, +# "subscribed_topics": [ +# "vision/vibratory_conveyor/state", +# "vision/blower_separator/state" +# ], +# "last_message_time": "2025-07-28T12:00:00", +# "message_count": 42, +# "error_count": 0, +# "uptime_seconds": 3600.5 +# } + +### Get recent MQTT events history +GET http://localhost:8000/mqtt/events +# Optional query parameter: limit (default: 5, max: 50) +# Response: MQTTEventsHistoryResponse +# { +# "events": [ +# { +# "machine_name": "vibratory_conveyor", +# "topic": "vision/vibratory_conveyor/state", +# "payload": "on", +# "normalized_state": "on", +# "timestamp": "2025-07-28T15:30:45.123456", +# "message_number": 15 +# }, +# { +# "machine_name": "blower_separator", +# "topic": "vision/blower_separator/state", +# "payload": "off", +# "normalized_state": "off", +# "timestamp": "2025-07-28T15:29:12.654321", +# "message_number": 14 +# } +# ], +# "total_events": 15, +# "last_updated": "2025-07-28T15:30:45.123456" +# } + +### Get recent MQTT events with custom limit +GET http://localhost:8000/mqtt/events?limit=10 + +############################################################################### +# CAMERA ENDPOINTS +############################################################################### + +### Get all cameras status +GET http://localhost:8000/cameras +# Response: Dict[str, CameraStatusResponse] +# { +# "camera1": { +# "name": "camera1", +# "status": "connected", +# "is_recording": false, +# "last_checked": "2025-07-28T12:00:00", +# "last_error": null, +# "device_info": { +# "friendly_name": "MindVision Camera", +# "serial_number": "ABC123" +# }, +# "current_recording_file": null, +# 
"recording_start_time": null +# } +# } + +### + +### Get specific camera status +GET http://localhost:8000/cameras/camera1/status +### Get specific camera status +GET http://localhost:8000/cameras/camera2/status +# Response: CameraStatusResponse (same as above for single camera) + +############################################################################### +# RECORDING CONTROL ENDPOINTS +############################################################################### + +### Start recording (with all optional parameters) +POST http://localhost:8000/cameras/camera1/start-recording +Content-Type: application/json + +{ + "filename": "test_recording.avi", + "exposure_ms": 1.5, + "gain": 3.0, + "fps": 10.0 +} +# Request Parameters (all optional): +# - filename: string - Custom filename (datetime prefix auto-added) +# - exposure_ms: float - Exposure time in milliseconds +# - gain: float - Camera gain value +# - fps: float - Target frames per second (0 = maximum speed, omit = use config default) +# +# Response: StartRecordingResponse +# { +# "success": true, +# "message": "Recording started for camera1", +# "filename": "20250728_120000_test_recording.avi" +# } + +### + +### Start recording (minimal - only filename) +POST http://localhost:8000/cameras/camera1/start-recording +Content-Type: application/json + +{ + "filename": "simple_test.avi" +} + +### + +### Start recording (only camera settings) +POST http://localhost:8000/cameras/camera1/start-recording +Content-Type: application/json + +{ + "exposure_ms": 2.0, + "gain": 4.0, + "fps": 0 +} + +### + +### Start recording (empty body - all defaults) +POST http://localhost:8000/cameras/camera1/start-recording +Content-Type: application/json + +{} + +### + +### Stop recording +POST http://localhost:8000/cameras/camera1/stop-recording +POST http://localhost:8000/cameras/camera2/stop-recording +# No request body required +# Response: StopRecordingResponse +# { +# "success": true, +# "message": "Recording stopped for 
camera1", +# "duration_seconds": 45.2 +# } + +############################################################################### +# AUTO-RECORDING CONTROL ENDPOINTS +############################################################################### + +### Enable auto-recording for a camera +POST http://localhost:8000/cameras/camera1/auto-recording/enable +POST http://localhost:8000/cameras/camera2/auto-recording/enable +# No request body required +# Response: AutoRecordingConfigResponse +# { +# "success": true, +# "message": "Auto-recording enabled for camera1", +# "camera_name": "camera1", +# "enabled": true +# } + +### + +### Disable auto-recording for a camera +POST http://localhost:8000/cameras/camera1/auto-recording/disable +POST http://localhost:8000/cameras/camera2/auto-recording/disable +# No request body required +# Response: AutoRecordingConfigResponse +# { +# "success": true, +# "message": "Auto-recording disabled for camera1", +# "camera_name": "camera1", +# "enabled": false +# } + +### + +### Get auto-recording manager status +GET http://localhost:8000/auto-recording/status +# Response: AutoRecordingStatusResponse +# { +# "running": true, +# "auto_recording_enabled": true, +# "retry_queue": {}, +# "enabled_cameras": ["camera1", "camera2"] +# } + +############################################################################### +# CAMERA RECOVERY & DIAGNOSTICS ENDPOINTS +############################################################################### + +### Test camera connection +POST http://localhost:8000/cameras/camera1/test-connection +POST http://localhost:8000/cameras/camera2/test-connection +# No request body required +# Response: CameraTestResponse +# { +# "success": true, +# "message": "Camera camera1 connection test passed", +# "camera_name": "camera1", +# "timestamp": "2025-07-28T12:00:00" +# } + +### + +### Reconnect camera (soft recovery) +POST http://localhost:8000/cameras/camera1/reconnect +POST http://localhost:8000/cameras/camera2/reconnect +# 
No request body required +# Response: CameraRecoveryResponse +# { +# "success": true, +# "message": "Camera camera1 reconnected successfully", +# "camera_name": "camera1", +# "operation": "reconnect", +# "timestamp": "2025-07-28T12:00:00" +# } + +### + +### Restart camera grab process +POST http://localhost:8000/cameras/camera1/restart-grab +POST http://localhost:8000/cameras/camera2/restart-grab +# Response: CameraRecoveryResponse (same structure as reconnect) + +### + +### Reset camera timestamp +POST http://localhost:8000/cameras/camera1/reset-timestamp +POST http://localhost:8000/cameras/camera2/reset-timestamp +# Response: CameraRecoveryResponse (same structure as reconnect) + +### + +### Full camera reset (hard recovery) +POST http://localhost:8000/cameras/camera1/full-reset +### Full camera reset (hard recovery) +POST http://localhost:8000/cameras/camera2/full-reset +# Response: CameraRecoveryResponse (same structure as reconnect) + +### + +### Reinitialize failed camera +POST http://localhost:8000/cameras/camera1/reinitialize +POST http://localhost:8000/cameras/camera2/reinitialize +# Response: CameraRecoveryResponse (same structure as reconnect) + +############################################################################### +# RECORDING SESSIONS ENDPOINT +############################################################################### + +### Get all recording sessions +GET http://localhost:8000/recordings +# Response: Dict[str, RecordingInfoResponse] +# { +# "rec_001": { +# "camera_name": "camera1", +# "filename": "20250728_120000_test.avi", +# "start_time": "2025-07-28T12:00:00", +# "state": "completed", +# "end_time": "2025-07-28T12:05:00", +# "file_size_bytes": 1048576, +# "frame_count": 1500, +# "duration_seconds": 300.0, +# "error_message": null +# } +# } + +############################################################################### +# STORAGE ENDPOINTS +############################################################################### + +### Get 
storage statistics +GET http://localhost:8000/storage/stats +# Response: StorageStatsResponse +# { +# "base_path": "/storage", +# "total_files": 25, +# "total_size_bytes": 52428800, +# "cameras": { +# "camera1": { +# "file_count": 15, +# "total_size_bytes": 31457280 +# } +# }, +# "disk_usage": { +# "total": 1000000000, +# "used": 500000000, +# "free": 500000000 +# } +# } + +### + +### Get recording files list (with filters) +POST http://localhost:8000/storage/files +Content-Type: application/json + +{ + "camera_name": "camera1", + "start_date": "2025-07-25T00:00:00", + "end_date": "2025-07-28T23:59:59", + "limit": 50 +} +# Request Parameters (all optional): +# - camera_name: string - Filter by specific camera +# - start_date: string (ISO format) - Filter files from this date +# - end_date: string (ISO format) - Filter files until this date +# - limit: integer (max 1000, default 100) - Maximum number of files to return +# +# Response: FileListResponse +# { +# "files": [ +# { +# "filename": "20250728_120000_test.avi", +# "camera_name": "camera1", +# "file_size_bytes": 1048576, +# "created_date": "2025-07-28T12:00:00", +# "duration_seconds": 300.0 +# } +# ], +# "total_count": 1 +# } + +### + +### Get all files (no camera filter) +POST http://localhost:8000/storage/files +Content-Type: application/json + +{ + "limit": 100 +} + +### + +### Cleanup old storage files +POST http://localhost:8000/storage/cleanup +Content-Type: application/json + +{ + "max_age_days": 7 +} +# Request Parameters: +# - max_age_days: integer (optional) - Remove files older than this many days +# If not provided, uses config default (30 days) +# +# Response: CleanupResponse +# { +# "files_removed": 5, +# "bytes_freed": 10485760, +# "errors": [] +# } + +############################################################################### +# ERROR RESPONSES +############################################################################### +# All endpoints may return ErrorResponse on failure: +# { +# 
"error": "Error description", +# "details": "Additional error details", +# "timestamp": "2025-07-28T12:00:00" +# } +# Common HTTP status codes: +# - 200: Success +# - 400: Bad Request (invalid parameters) +# - 404: Not Found (camera/resource not found) +# - 500: Internal Server Error +# - 503: Service Unavailable (camera manager not available) + +############################################################################### +# NOTES +############################################################################### +# 1. All timestamps are in ISO 8601 format +# 2. File sizes are in bytes +# 3. Camera names: "camera1", "camera2" +# 4. Machine names: "vibratory_conveyor", "blower_separator" +# 5. FPS behavior: +# - fps > 0: Capture at specified frame rate +# - fps = 0: Capture at MAXIMUM possible speed (no delay) +# - fps omitted: Uses camera config default +# 6. Filenames automatically get datetime prefix: YYYYMMDD_HHMMSS_filename.avi +# 7. Recovery endpoints should be used in order: test-connection → reconnect → restart-grab → full-reset → reinitialize + + + +### Start streaming for camera1 +curl -X POST http://localhost:8000/cameras/camera1/start-stream + +# View live stream (open in browser) +# http://localhost:8000/cameras/camera1/stream + +### Stop streaming +curl -X POST http://localhost:8000/cameras/camera1/stop-stream \ No newline at end of file diff --git a/api/ai_agent/references/api-tests.http b/api/ai_agent/references/api-tests.http new file mode 100644 index 0000000..f447e90 --- /dev/null +++ b/api/ai_agent/references/api-tests.http @@ -0,0 +1,308 @@ +### Get system status +GET http://localhost:8000/system/status + +### + +### Get camera1 status +GET http://localhost:8000/cameras/camera1/status + +### + +### Get camera2 status +GET http://localhost:8000/cameras/camera2/status + +### +### RECORDING TESTS +### Note: All filenames will automatically have datetime prefix added +### Format: YYYYMMDD_HHMMSS_filename.avi (or auto-generated if no filename) +### 
+### FPS Behavior: +### - fps > 0: Capture at specified frame rate +### - fps = 0: Capture at MAXIMUM possible speed (no delay between frames) +### - fps omitted: Uses camera config default (usually 3.0 fps) +### - Video files saved with 30 FPS metadata when fps=0 for proper playback +### + +### Start recording camera1 (basic) +POST http://localhost:8000/cameras/camera1/start-recording +Content-Type: application/json + +{ + "filename": "manual22_test_cam1.avi" +} + +### + +### Start recording camera1 (with camera settings) +POST http://localhost:8000/cameras/camera1/start-recording +Content-Type: application/json + +{ + "filename": "test_with_settings.avi", + "exposure_ms": 2.0, + "gain": 4.0, + "fps": 0 +} + +### + +### Start recording camera2 (basic) +POST http://localhost:8000/cameras/camera2/start-recording +Content-Type: application/json + +{ + "filename": "manual_test_cam2.avi" +} + +### + +### Start recording camera2 (with different settings) +POST http://localhost:8000/cameras/camera2/start-recording +Content-Type: application/json + +{ + "filename": "high_fps_test.avi", + "exposure_ms": 0.5, + "gain": 2.5, + "fps": 10.0 +} + +### + +### Start recording camera1 (no filename, only settings) +POST http://localhost:8000/cameras/camera1/start-recording +Content-Type: application/json + +{ + "exposure_ms": 1.5, + "gain": 3.0, + "fps": 7.0 +} + +### + +### Start recording camera1 (only filename, no settings) +POST http://localhost:8000/cameras/camera1/start-recording +Content-Type: application/json + +{ + "filename": "just_filename_test.avi" +} + +### + +### Start recording camera2 (only exposure setting) +POST http://localhost:8000/cameras/camera2/start-recording +Content-Type: application/json + +{ + "exposure_ms": 3.0 +} + +### + +### Start recording camera1 (only gain setting) +POST http://localhost:8000/cameras/camera1/start-recording +Content-Type: application/json + +{ + "gain": 5.5 +} + +### + +### Start recording camera2 (only fps setting) +POST 
http://localhost:8000/cameras/camera2/start-recording +Content-Type: application/json + +{ + "fps": 15.0 +} + +### + +### Start recording camera1 (maximum fps - no delay) +POST http://localhost:8000/cameras/camera1/start-recording +Content-Type: application/json + +{ + "filename": "max_fps_test.avi", + "fps": 0 +} + +### + +### Start recording camera2 (maximum fps with settings) +POST http://localhost:8000/cameras/camera2/start-recording +Content-Type: application/json + +{ + "filename": "max_fps_low_exposure.avi", + "exposure_ms": 0.1, + "gain": 1.0, + "fps": 0 +} + +### + +### Start recording camera1 (empty body - all defaults) +POST http://localhost:8000/cameras/camera1/start-recording +Content-Type: application/json + +{} + +### + +### Stop camera1 recording +POST http://localhost:8000/cameras/camera1/stop-recording + +### + +### Stop camera2 recording +POST http://localhost:8000/cameras/camera2/stop-recording + +### +### SYSTEM STATUS AND STORAGE TESTS +### + +### Get all cameras status +GET http://localhost:8000/cameras + +### + +### Get storage statistics +GET http://localhost:8000/storage/stats + +### + +### Get storage files list +POST http://localhost:8000/storage/files +Content-Type: application/json + +{ + "camera_name": "camera1", + "limit": 10 +} + +### + +### Get storage files list (all cameras) +POST http://localhost:8000/storage/files +Content-Type: application/json + +{ + "limit": 20 +} + +### + +### Health check +GET http://localhost:8000/health + +### +### CAMERA RECOVERY AND DIAGNOSTICS TESTS +### +### These endpoints help recover cameras that have failed to initialize or lost connection. +### +### Recovery Methods (in order of severity): +### 1. test-connection: Test if camera connection is working +### 2. reconnect: Soft reconnection using CameraReConnect() +### 3. restart-grab: Restart grab process using CameraRestartGrab() +### 4. reset-timestamp: Reset camera timestamp using CameraRstTimeStamp() +### 5. 
full-reset: Hard reset - uninitialize and reinitialize camera +### 6. reinitialize: Complete reinitialization for cameras that never initialized +### +### Recommended troubleshooting order: +### 1. Start with test-connection to diagnose the issue +### 2. Try reconnect first (most common fix) +### 3. If reconnect fails, try restart-grab +### 4. If still failing, try full-reset +### 5. Use reinitialize only for cameras that failed initial setup +### + +### Test camera1 connection +POST http://localhost:8000/cameras/camera1/test-connection + +### + +### Test camera2 connection +POST http://localhost:8000/cameras/camera2/test-connection + +### + +### Reconnect camera1 (soft recovery) +POST http://localhost:8000/cameras/camera1/reconnect + +### + +### Reconnect camera2 (soft recovery) +POST http://localhost:8000/cameras/camera2/reconnect + +### + +### Restart camera1 grab process +POST http://localhost:8000/cameras/camera1/restart-grab + +### + +### Restart camera2 grab process +POST http://localhost:8000/cameras/camera2/restart-grab + +### + +### Reset camera1 timestamp +POST http://localhost:8000/cameras/camera1/reset-timestamp + +### + +### Reset camera2 timestamp +POST http://localhost:8000/cameras/camera2/reset-timestamp + +### + +### Full reset camera1 (hard recovery - uninitialize and reinitialize) +POST http://localhost:8000/cameras/camera1/full-reset + +### + +### Full reset camera2 (hard recovery - uninitialize and reinitialize) +POST http://localhost:8000/cameras/camera2/full-reset + +### + +### Reinitialize camera1 (for cameras that failed to initialize) +POST http://localhost:8000/cameras/camera1/reinitialize + +### + +### Reinitialize camera2 (for cameras that failed to initialize) +POST http://localhost:8000/cameras/camera2/reinitialize + +### +### RECOVERY WORKFLOW EXAMPLES +### + +### Example 1: Basic troubleshooting workflow for camera1 +### Step 1: Test connection +POST http://localhost:8000/cameras/camera1/test-connection + +### Step 2: If test 
fails, try reconnect +# POST http://localhost:8000/cameras/camera1/reconnect + +### Step 3: If reconnect fails, try restart grab +# POST http://localhost:8000/cameras/camera1/restart-grab + +### Step 4: If still failing, try full reset +# POST http://localhost:8000/cameras/camera1/full-reset + +### Step 5: If camera never initialized, try reinitialize +# POST http://localhost:8000/cameras/camera1/reinitialize + +### + +### Example 2: Quick recovery sequence for camera2 +### Try reconnect first (most common fix) +POST http://localhost:8000/cameras/camera2/reconnect + +### If that doesn't work, try full reset +# POST http://localhost:8000/cameras/camera2/full-reset \ No newline at end of file diff --git a/api/ai_agent/references/camera-api.types.ts b/api/ai_agent/references/camera-api.types.ts new file mode 100644 index 0000000..3610ac8 --- /dev/null +++ b/api/ai_agent/references/camera-api.types.ts @@ -0,0 +1,367 @@ +/** + * TypeScript definitions for USDA Vision Camera System API + * + * This file provides complete type definitions for AI assistants + * to integrate the camera streaming functionality into React/TypeScript projects. 
+ */ + +// ============================================================================= +// BASE CONFIGURATION +// ============================================================================= + +export interface ApiConfig { + baseUrl: string; + timeout?: number; + refreshInterval?: number; +} + +export const defaultApiConfig: ApiConfig = { + baseUrl: 'http://vision:8000', // Production default, change to 'http://localhost:8000' for development + timeout: 10000, + refreshInterval: 30000, +}; + +// ============================================================================= +// CAMERA TYPES +// ============================================================================= + +export interface CameraDeviceInfo { + friendly_name?: string; + port_type?: string; + serial_number?: string; + device_index?: number; + error?: string; +} + +export interface CameraInfo { + name: string; + status: 'connected' | 'disconnected' | 'error' | 'not_found' | 'available'; + is_recording: boolean; + last_checked: string; // ISO date string + last_error?: string | null; + device_info?: CameraDeviceInfo; + current_recording_file?: string | null; + recording_start_time?: string | null; // ISO date string +} + +export interface CameraListResponse { + [cameraName: string]: CameraInfo; +} + +// ============================================================================= +// STREAMING TYPES +// ============================================================================= + +export interface StreamStartRequest { + // No body required - camera name is in URL path +} + +export interface StreamStartResponse { + success: boolean; + message: string; +} + +export interface StreamStopRequest { + // No body required - camera name is in URL path +} + +export interface StreamStopResponse { + success: boolean; + message: string; +} + +export interface StreamStatus { + isStreaming: boolean; + streamUrl?: string; + error?: string; +} + +// 
============================================================================= +// RECORDING TYPES +// ============================================================================= + +export interface StartRecordingRequest { + filename?: string; + exposure_ms?: number; + gain?: number; + fps?: number; +} + +export interface StartRecordingResponse { + success: boolean; + message: string; + filename?: string; +} + +export interface StopRecordingResponse { + success: boolean; + message: string; +} + +// ============================================================================= +// SYSTEM TYPES +// ============================================================================= + +export interface SystemStatusResponse { + status: string; + uptime: string; + api_server_running: boolean; + camera_manager_running: boolean; + mqtt_client_connected: boolean; + total_cameras: number; + active_recordings: number; + active_streams?: number; +} + +export interface HealthResponse { + status: 'healthy' | 'unhealthy'; + timestamp: string; +} + +// ============================================================================= +// ERROR TYPES +// ============================================================================= + +export interface ApiError { + detail: string; + status_code?: number; +} + +export interface StreamError extends Error { + type: 'network' | 'api' | 'stream' | 'timeout'; + cameraName: string; + originalError?: Error; +} + +// ============================================================================= +// HOOK TYPES +// ============================================================================= + +export interface UseCameraStreamResult { + isStreaming: boolean; + loading: boolean; + error: string | null; + startStream: () => Promise<{ success: boolean; error?: string }>; + stopStream: () => Promise<{ success: boolean; error?: string }>; + getStreamUrl: () => string; + refreshStream: () => void; +} + +export interface UseCameraListResult { + cameras: 
CameraListResponse; + loading: boolean; + error: string | null; + refreshCameras: () => Promise; +} + +export interface UseCameraRecordingResult { + isRecording: boolean; + loading: boolean; + error: string | null; + currentFile: string | null; + startRecording: (options?: StartRecordingRequest) => Promise<{ success: boolean; error?: string }>; + stopRecording: () => Promise<{ success: boolean; error?: string }>; +} + +// ============================================================================= +// COMPONENT PROPS TYPES +// ============================================================================= + +export interface CameraStreamProps { + cameraName: string; + apiConfig?: ApiConfig; + autoStart?: boolean; + onStreamStart?: (cameraName: string) => void; + onStreamStop?: (cameraName: string) => void; + onError?: (error: StreamError) => void; + className?: string; + style?: React.CSSProperties; +} + +export interface CameraDashboardProps { + apiConfig?: ApiConfig; + cameras?: string[]; // If provided, only show these cameras + showRecordingControls?: boolean; + showStreamingControls?: boolean; + refreshInterval?: number; + onCameraSelect?: (cameraName: string) => void; + className?: string; +} + +export interface CameraControlsProps { + cameraName: string; + apiConfig?: ApiConfig; + showRecording?: boolean; + showStreaming?: boolean; + onAction?: (action: 'start-stream' | 'stop-stream' | 'start-recording' | 'stop-recording', cameraName: string) => void; +} + +// ============================================================================= +// API CLIENT TYPES +// ============================================================================= + +export interface CameraApiClient { + // System endpoints + getHealth(): Promise; + getSystemStatus(): Promise; + + // Camera endpoints + getCameras(): Promise; + getCameraStatus(cameraName: string): Promise; + testCameraConnection(cameraName: string): Promise<{ success: boolean; message: string }>; + + // Streaming 
endpoints + startStream(cameraName: string): Promise; + stopStream(cameraName: string): Promise; + getStreamUrl(cameraName: string): string; + + // Recording endpoints + startRecording(cameraName: string, options?: StartRecordingRequest): Promise; + stopRecording(cameraName: string): Promise; +} + +// ============================================================================= +// UTILITY TYPES +// ============================================================================= + +export type CameraAction = 'start-stream' | 'stop-stream' | 'start-recording' | 'stop-recording' | 'test-connection'; + +export interface CameraActionResult { + success: boolean; + message: string; + error?: string; +} + +export interface StreamingState { + [cameraName: string]: { + isStreaming: boolean; + isLoading: boolean; + error: string | null; + lastStarted?: Date; + }; +} + +export interface RecordingState { + [cameraName: string]: { + isRecording: boolean; + isLoading: boolean; + error: string | null; + currentFile: string | null; + startTime?: Date; + }; +} + +// ============================================================================= +// EVENT TYPES +// ============================================================================= + +export interface CameraEvent { + type: 'stream-started' | 'stream-stopped' | 'stream-error' | 'recording-started' | 'recording-stopped' | 'recording-error'; + cameraName: string; + timestamp: Date; + data?: any; +} + +export type CameraEventHandler = (event: CameraEvent) => void; + +// ============================================================================= +// CONFIGURATION TYPES +// ============================================================================= + +export interface StreamConfig { + fps: number; + quality: number; // 1-100 + timeout: number; + retryAttempts: number; + retryDelay: number; +} + +export interface CameraStreamConfig extends StreamConfig { + cameraName: string; + autoReconnect: boolean; + maxReconnectAttempts: 
number; +} + +// ============================================================================= +// CONTEXT TYPES (for React Context) +// ============================================================================= + +export interface CameraContextValue { + cameras: CameraListResponse; + streamingState: StreamingState; + recordingState: RecordingState; + apiClient: CameraApiClient; + + // Actions + startStream: (cameraName: string) => Promise; + stopStream: (cameraName: string) => Promise; + startRecording: (cameraName: string, options?: StartRecordingRequest) => Promise; + stopRecording: (cameraName: string) => Promise; + refreshCameras: () => Promise; + + // State + loading: boolean; + error: string | null; +} + +// ============================================================================= +// EXAMPLE USAGE TYPES +// ============================================================================= + +/** + * Example usage in React component: + * + * ```typescript + * import { CameraStreamProps, UseCameraStreamResult } from './camera-api.types'; + * + * const CameraStream: React.FC = ({ + * cameraName, + * apiConfig = defaultApiConfig, + * autoStart = false, + * onStreamStart, + * onStreamStop, + * onError + * }) => { + * const { + * isStreaming, + * loading, + * error, + * startStream, + * stopStream, + * getStreamUrl + * }: UseCameraStreamResult = useCameraStream(cameraName, apiConfig); + * + * // Component implementation... 
+ * }; + * ``` + */ + +/** + * Example API client usage: + * + * ```typescript + * const apiClient: CameraApiClient = new CameraApiClientImpl(defaultApiConfig); + * + * // Start streaming + * const result = await apiClient.startStream('camera1'); + * if (result.success) { + * const streamUrl = apiClient.getStreamUrl('camera1'); + * // Use streamUrl in img tag + * } + * ``` + */ + +/** + * Example hook usage: + * + * ```typescript + * const MyComponent = () => { + * const { cameras, loading, error, refreshCameras } = useCameraList(); + * const { isStreaming, startStream, stopStream } = useCameraStream('camera1'); + * + * // Component logic... + * }; + * ``` + */ + +export default {}; diff --git a/api/ai_agent/references/streaming-api.http b/api/ai_agent/references/streaming-api.http new file mode 100644 index 0000000..c85a89c --- /dev/null +++ b/api/ai_agent/references/streaming-api.http @@ -0,0 +1,543 @@ +### USDA Vision Camera Streaming API +### +### CONFIGURATION: +### - Production: http://vision:8000 (requires hostname setup) +### - Development: http://localhost:8000 +### - Custom: Update @baseUrl below to match your setup +### +### This file contains streaming-specific API endpoints for live camera preview +### Use with VS Code REST Client extension or similar tools. 
+ +# Base URL - Update to match your configuration +@baseUrl = http://vision:8000 +# Alternative: @baseUrl = http://localhost:8000 + +### ============================================================================= +### STREAMING ENDPOINTS (NEW FUNCTIONALITY) +### ============================================================================= + +### Start camera streaming for live preview +### This creates a separate camera connection that doesn't interfere with recording +POST {{baseUrl}}/cameras/camera1/start-stream +Content-Type: application/json + +### Expected Response: +# { +# "success": true, +# "message": "Started streaming for camera camera1" +# } + +### + +### Stop camera streaming +POST {{baseUrl}}/cameras/camera1/stop-stream +Content-Type: application/json + +### Expected Response: +# { +# "success": true, +# "message": "Stopped streaming for camera camera1" +# } + +### + +### Get live MJPEG stream (open in browser or use as img src) +### This endpoint returns a continuous MJPEG stream +### Content-Type: multipart/x-mixed-replace; boundary=frame +GET {{baseUrl}}/cameras/camera1/stream + +### Usage in HTML: +# Live Stream + +### Usage in React: +# + +### + +### Start streaming for camera2 +POST {{baseUrl}}/cameras/camera2/start-stream +Content-Type: application/json + +### + +### Get live stream for camera2 +GET {{baseUrl}}/cameras/camera2/stream + +### + +### Stop streaming for camera2 +POST {{baseUrl}}/cameras/camera2/stop-stream +Content-Type: application/json + +### ============================================================================= +### CONCURRENT OPERATIONS TESTING +### ============================================================================= + +### Test Scenario: Streaming + Recording Simultaneously +### This demonstrates that streaming doesn't block recording + +### Step 1: Start streaming first +POST {{baseUrl}}/cameras/camera1/start-stream +Content-Type: application/json + +### + +### Step 2: Start recording (while streaming 
continues) +POST {{baseUrl}}/cameras/camera1/start-recording +Content-Type: application/json + +{ + "filename": "concurrent_test.avi" +} + +### + +### Step 3: Check both are running +GET {{baseUrl}}/cameras/camera1 + +### Expected Response shows both recording and streaming active: +# { +# "camera1": { +# "name": "camera1", +# "status": "connected", +# "is_recording": true, +# "current_recording_file": "concurrent_test.avi", +# "recording_start_time": "2025-01-28T10:30:00.000Z" +# } +# } + +### + +### Step 4: Stop recording (streaming continues) +POST {{baseUrl}}/cameras/camera1/stop-recording +Content-Type: application/json + +### + +### Step 5: Verify streaming still works +GET {{baseUrl}}/cameras/camera1/stream + +### + +### Step 6: Stop streaming +POST {{baseUrl}}/cameras/camera1/stop-stream +Content-Type: application/json + +### ============================================================================= +### MULTIPLE CAMERA STREAMING +### ============================================================================= + +### Start streaming on multiple cameras simultaneously +POST {{baseUrl}}/cameras/camera1/start-stream +Content-Type: application/json + +### + +POST {{baseUrl}}/cameras/camera2/start-stream +Content-Type: application/json + +### + +### Check status of all cameras +GET {{baseUrl}}/cameras + +### + +### Access multiple streams (open in separate browser tabs) +GET {{baseUrl}}/cameras/camera1/stream + +### + +GET {{baseUrl}}/cameras/camera2/stream + +### + +### Stop all streaming +POST {{baseUrl}}/cameras/camera1/stop-stream +Content-Type: application/json + +### + +POST {{baseUrl}}/cameras/camera2/stop-stream +Content-Type: application/json + +### ============================================================================= +### ERROR TESTING +### ============================================================================= + +### Test with invalid camera name +POST {{baseUrl}}/cameras/invalid_camera/start-stream +Content-Type: application/json + 
+### Expected Response: +# { +# "detail": "Camera streamer not found: invalid_camera" +# } + +### + +### Test stream endpoint without starting stream first +GET {{baseUrl}}/cameras/camera1/stream + +### Expected: May return error or empty stream depending on camera state + +### + +### Test starting stream when camera is in error state +POST {{baseUrl}}/cameras/camera1/start-stream +Content-Type: application/json + +### If camera has issues, expected response: +# { +# "success": false, +# "message": "Failed to start streaming for camera camera1" +# } + +### ============================================================================= +### INTEGRATION EXAMPLES FOR AI ASSISTANTS +### ============================================================================= + +### React Component Integration: +# const CameraStream = ({ cameraName }) => { +# const [isStreaming, setIsStreaming] = useState(false); +# +# const startStream = async () => { +# const response = await fetch(`${baseUrl}/cameras/${cameraName}/start-stream`, { +# method: 'POST' +# }); +# if (response.ok) { +# setIsStreaming(true); +# } +# }; +# +# return ( +#
+# +# {isStreaming && ( +# +# )} +#
+# ); +# }; + +### JavaScript Fetch Example: +# const streamAPI = { +# async startStream(cameraName) { +# const response = await fetch(`${baseUrl}/cameras/${cameraName}/start-stream`, { +# method: 'POST', +# headers: { 'Content-Type': 'application/json' } +# }); +# return response.json(); +# }, +# +# async stopStream(cameraName) { +# const response = await fetch(`${baseUrl}/cameras/${cameraName}/stop-stream`, { +# method: 'POST', +# headers: { 'Content-Type': 'application/json' } +# }); +# return response.json(); +# }, +# +# getStreamUrl(cameraName) { +# return `${baseUrl}/cameras/${cameraName}/stream?t=${Date.now()}`; +# } +# }; + +### Vue.js Integration: +# +# +# + +### ============================================================================= +### TROUBLESHOOTING +### ============================================================================= + +### If streams don't start: +# 1. Check camera status: GET /cameras +# 2. Verify system health: GET /health +# 3. Test camera connection: POST /cameras/{name}/test-connection +# 4. Check if camera is already recording (shouldn't matter, but good to know) + +### If stream image doesn't load: +# 1. Verify stream was started: POST /cameras/{name}/start-stream +# 2. Check browser console for CORS errors +# 3. Try accessing stream URL directly in browser +# 4. Add timestamp to prevent caching: ?t=${Date.now()} + +### If concurrent operations fail: +# 1. This should work - streaming and recording use separate connections +# 2. Check system logs for resource conflicts +# 3. Verify sufficient system resources (CPU/Memory) +# 4. 
Test with one camera first, then multiple + +### Performance Notes: +# - Streaming uses ~10 FPS by default (configurable) +# - JPEG quality set to 70% (configurable) +# - Each stream uses additional CPU/memory +# - Multiple concurrent streams may impact performance + +### ============================================================================= +### CAMERA CONFIGURATION ENDPOINTS (NEW) +### ============================================================================= + +### Get camera configuration +GET {{baseUrl}}/cameras/camera1/config + +### Expected Response: +# { +# "name": "camera1", +# "machine_topic": "vibratory_conveyor", +# "storage_path": "/storage/camera1", +# "enabled": true, +# "auto_start_recording_enabled": true, +# "auto_recording_max_retries": 3, +# "auto_recording_retry_delay_seconds": 2, +# "exposure_ms": 1.0, +# "gain": 3.5, +# "target_fps": 0, +# "sharpness": 120, +# "contrast": 110, +# "saturation": 100, +# "gamma": 100, +# "noise_filter_enabled": true, +# "denoise_3d_enabled": false, +# "auto_white_balance": true, +# "color_temperature_preset": 0, +# "wb_red_gain": 1.0, +# "wb_green_gain": 1.0, +# "wb_blue_gain": 1.0, +# "anti_flicker_enabled": true, +# "light_frequency": 1, +# "bit_depth": 8, +# "hdr_enabled": false, +# "hdr_gain_mode": 0 +# } + +### + +### Update basic camera settings (real-time, no restart required) +PUT {{baseUrl}}/cameras/camera1/config +Content-Type: application/json + +{ + "exposure_ms": 2.0, + "gain": 4.0, + "target_fps": 10.0 +} + +### + +### Update image quality settings +PUT {{baseUrl}}/cameras/camera1/config +Content-Type: application/json + +{ + "sharpness": 150, + "contrast": 120, + "saturation": 110, + "gamma": 90 +} + +### + +### Update advanced settings +PUT {{baseUrl}}/cameras/camera1/config +Content-Type: application/json + +{ + "anti_flicker_enabled": true, + "light_frequency": 1, + "auto_white_balance": false, + "color_temperature_preset": 2 +} + +### + +### Update white balance RGB gains (manual 
white balance) +PUT {{baseUrl}}/cameras/camera1/config +Content-Type: application/json + +{ + "auto_white_balance": false, + "wb_red_gain": 1.2, + "wb_green_gain": 1.0, + "wb_blue_gain": 0.8 +} + +### + +### Enable HDR mode +PUT {{baseUrl}}/cameras/camera1/config +Content-Type: application/json + +{ + "hdr_enabled": true, + "hdr_gain_mode": 1 +} + +### + +### Update noise reduction settings (requires restart) +PUT {{baseUrl}}/cameras/camera1/config +Content-Type: application/json + +{ + "noise_filter_enabled": false, + "denoise_3d_enabled": true +} + +### + +### Apply configuration (restart camera with new settings) +POST {{baseUrl}}/cameras/camera1/apply-config + +### Expected Response: +# { +# "success": true, +# "message": "Configuration applied to camera camera1" +# } + +### + +### Get camera2 configuration +GET {{baseUrl}}/cameras/camera2/config + +### + +### Update camera2 for outdoor lighting +PUT {{baseUrl}}/cameras/camera2/config +Content-Type: application/json + +{ + "exposure_ms": 0.5, + "gain": 2.0, + "sharpness": 130, + "contrast": 115, + "anti_flicker_enabled": true, + "light_frequency": 1 +} + +### ============================================================================= +### CONFIGURATION TESTING SCENARIOS +### ============================================================================= + +### Scenario 1: Low light optimization +PUT {{baseUrl}}/cameras/camera1/config +Content-Type: application/json + +{ + "exposure_ms": 5.0, + "gain": 8.0, + "noise_filter_enabled": true, + "denoise_3d_enabled": true +} + +### + +### Scenario 2: High speed capture +PUT {{baseUrl}}/cameras/camera1/config +Content-Type: application/json + +{ + "exposure_ms": 0.2, + "gain": 1.0, + "target_fps": 30.0, + "sharpness": 180 +} + +### + +### Scenario 3: Color accuracy for food inspection +PUT {{baseUrl}}/cameras/camera1/config +Content-Type: application/json + +{ + "auto_white_balance": false, + "color_temperature_preset": 1, + "saturation": 120, + "contrast": 105, + 
"gamma": 95 +} + +### + +### Scenario 4: HDR for high contrast scenes +PUT {{baseUrl}}/cameras/camera1/config +Content-Type: application/json + +{ + "hdr_enabled": true, + "hdr_gain_mode": 2, + "exposure_ms": 1.0, + "gain": 3.0 +} + +### ============================================================================= +### ERROR TESTING FOR CONFIGURATION +### ============================================================================= + +### Test invalid camera name +GET {{baseUrl}}/cameras/invalid_camera/config + +### + +### Test invalid exposure range +PUT {{baseUrl}}/cameras/camera1/config +Content-Type: application/json + +{ + "exposure_ms": 2000.0 +} + +### Expected: HTTP 422 validation error + +### + +### Test invalid gain range +PUT {{baseUrl}}/cameras/camera1/config +Content-Type: application/json + +{ + "gain": 50.0 +} + +### Expected: HTTP 422 validation error + +### + +### Test empty configuration update +PUT {{baseUrl}}/cameras/camera1/config +Content-Type: application/json + +{} + +### Expected: HTTP 400 "No configuration updates provided" diff --git a/api/camera_preview.html b/api/camera_preview.html new file mode 100644 index 0000000..99d321e --- /dev/null +++ b/api/camera_preview.html @@ -0,0 +1,336 @@ + + + + + + USDA Vision Camera Live Preview + + + +
+<!-- NOTE: the original markup of camera_preview.html was lost during extraction; -->
+<!-- the recoverable content of the page is summarized below. -->
+<!-- Header: 🎥 USDA Vision Camera Live Preview -->
+<!-- Panel: 📡 System Information — placeholder text: "Loading system status..." -->
+<!-- Panel: 🔗 API Endpoints:
+     Live Stream:   GET  /cameras/{camera_name}/stream
+     Start Stream:  POST /cameras/{camera_name}/start-stream
+     Stop Stream:   POST /cameras/{camera_name}/stop-stream
+     Camera Status: GET  /cameras
+-->
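The endpoint list above can also be exercised from a script. Below is a minimal sketch of URL helpers mirroring what the preview page's JavaScript does; the `base_url` value and the helper names are illustrative, not part of the system:

```python
import time

def stream_url(base_url: str, camera_name: str, cache_bust: bool = True) -> str:
    """Build the MJPEG stream URL; optionally append a millisecond timestamp
    query parameter so browsers do not serve a cached frame (mirrors the
    preview page's `?t=${Date.now()}` trick)."""
    url = f"{base_url}/cameras/{camera_name}/stream"
    if cache_bust:
        url += f"?t={int(time.time() * 1000)}"
    return url

def control_url(base_url: str, camera_name: str, action: str) -> str:
    """Build the URL for the POST start-stream / stop-stream control endpoints."""
    if action not in ("start-stream", "stop-stream"):
        raise ValueError(f"unknown action: {action}")
    return f"{base_url}/cameras/{camera_name}/{action}"
```

A client would POST to `control_url(...)` first, then point an `<img>` tag (or a plain HTTP GET) at `stream_url(...)`.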
+ + + + diff --git a/api/camera_sdk/README.md b/api/camera_sdk/README.md new file mode 100644 index 0000000..c507622 --- /dev/null +++ b/api/camera_sdk/README.md @@ -0,0 +1,66 @@ +# Camera SDK Library + +This directory contains the core GigE camera SDK library required for the USDA Vision Camera System. + +## Contents + +### Core SDK Library +- **`mvsdk.py`** - Python wrapper for the GigE camera SDK + - Provides Python bindings for camera control functions + - Handles camera initialization, configuration, and image capture + - **Critical dependency** - Required for all camera operations + +## Important Notes + +⚠️ **This is NOT demo code** - This directory contains the core SDK library that the entire system depends on for camera functionality. + +### SDK Library Details +- The `mvsdk.py` file is a Python wrapper around the native camera SDK +- It provides ctypes bindings to the underlying C/C++ camera library +- Contains all camera control functions, constants, and data structures +- Used by all camera modules in `usda_vision_system/camera/` + +### Dependencies +- Requires the native camera SDK library (`libMVSDK.so` on Linux) +- The native library should be installed system-wide or available in the library path + +## Usage + +This SDK is automatically imported by the camera modules: +```python +# Imported by camera modules +import sys +import os +sys.path.append(os.path.join(os.path.dirname(__file__), "..", "..", "camera_sdk")) +import mvsdk +``` + +## Demo Code + +For camera usage examples and demo code, see the `../demos/` directory: +- `cv_grab.py` - Basic camera capture example +- `cv_grab2.py` - Multi-camera capture example +- `cv_grab_callback.py` - Callback-based capture example +- `grab.py` - Simple image capture example + +## Troubleshooting + +If you encounter camera SDK issues: + +1. **Check SDK Installation**: + ```bash + ls -la camera_sdk/mvsdk.py + ``` + +2. 
**Test SDK Import**:
+   ```bash
+   python -c "import sys; sys.path.append('./camera_sdk'); import mvsdk; print('SDK imported successfully')"
+   ```
+
+3. **Check Native Library**:
+   ```bash
+   # On Linux
+   ldconfig -p | grep MVSDK
+   ```
+
+For more troubleshooting, see the main [README.md](../README.md#troubleshooting).
diff --git a/api/camera_sdk/mvsdk.py b/api/camera_sdk/mvsdk.py
new file mode 100644
index 0000000..6a3af90
--- /dev/null
+++ b/api/camera_sdk/mvsdk.py
@@ -0,0 +1,2454 @@
+#coding=utf-8
+import platform
+from ctypes import *
+from threading import local
+
+# Callback function type
+CALLBACK_FUNC_TYPE = None
+
+# SDK dynamic library handle
+_sdk = None
+
+def _Init():
+    global _sdk
+    global CALLBACK_FUNC_TYPE
+
+    is_win = (platform.system() == "Windows")
+    is_x86 = (platform.architecture()[0] == '32bit')
+
+    if is_win:
+        _sdk = windll.MVCAMSDK if is_x86 else windll.MVCAMSDK_X64
+        CALLBACK_FUNC_TYPE = WINFUNCTYPE
+    else:
+        _sdk = cdll.LoadLibrary("libMVSDK.so")
+        CALLBACK_FUNC_TYPE = CFUNCTYPE
+
+_Init()
+
+#-------------------------------------------Type definitions--------------------------------------------------
+
+# Status code definitions
+CAMERA_STATUS_SUCCESS = 0    # Operation succeeded
+CAMERA_STATUS_FAILED = -1    # Operation failed
+CAMERA_STATUS_INTERNAL_ERROR = -2    # Internal error
+CAMERA_STATUS_UNKNOW = -3    # Unknown error
+CAMERA_STATUS_NOT_SUPPORTED = -4    # Function not supported
+CAMERA_STATUS_NOT_INITIALIZED = -5    # Initialization not completed
+CAMERA_STATUS_PARAMETER_INVALID = -6    # Invalid parameter
+CAMERA_STATUS_PARAMETER_OUT_OF_BOUND = -7    # Parameter out of bounds
+CAMERA_STATUS_UNENABLED = -8    # Not enabled
+CAMERA_STATUS_USER_CANCEL = -9    # Cancelled manually by the user, e.g. clicking Cancel on the ROI panel
+CAMERA_STATUS_PATH_NOT_FOUND = -10   # Corresponding path not found in the registry
+CAMERA_STATUS_SIZE_DISMATCH = -11   # Received image data length does not match the defined size
+CAMERA_STATUS_TIME_OUT = -12   # Timeout
+CAMERA_STATUS_IO_ERROR = -13   # Hardware I/O error
+CAMERA_STATUS_COMM_ERROR = -14   # Communication error
+CAMERA_STATUS_BUS_ERROR = -15   # Bus error
+CAMERA_STATUS_NO_DEVICE_FOUND = -16   # No device found
+CAMERA_STATUS_NO_LOGIC_DEVICE_FOUND = -17   # Logical device not found
+CAMERA_STATUS_DEVICE_IS_OPENED = -18   # Device is already open
+CAMERA_STATUS_DEVICE_IS_CLOSED = -19   # Device is already closed
+CAMERA_STATUS_DEVICE_VEDIO_CLOSED = -20   # Device video not opened; recording-related calls return this error if the camera video has not been started
+CAMERA_STATUS_NO_MEMORY = -21   # Insufficient system memory
+CAMERA_STATUS_FILE_CREATE_FAILED = -22   # Failed to create file
+CAMERA_STATUS_FILE_INVALID = -23   # Invalid file format
+CAMERA_STATUS_WRITE_PROTECTED = -24   # Write-protected, not writable
+CAMERA_STATUS_GRAB_FAILED = -25   # Image acquisition failed
+CAMERA_STATUS_LOST_DATA = -26   # Data lost, incomplete
+CAMERA_STATUS_EOF_ERROR = -27   # End-of-frame marker not received
+CAMERA_STATUS_BUSY = -28   # Busy (previous operation still in progress); this operation cannot proceed
+CAMERA_STATUS_WAIT = -29   # Need to wait (conditions for the operation not met); may retry
+CAMERA_STATUS_IN_PROCESS = -30   # Operation already in progress
+CAMERA_STATUS_IIC_ERROR = -31   # I2C transfer error
+CAMERA_STATUS_SPI_ERROR = -32   # SPI transfer error
+CAMERA_STATUS_USB_CONTROL_ERROR = -33   # USB control transfer error
+CAMERA_STATUS_USB_BULK_ERROR = -34   # USB bulk transfer error
+CAMERA_STATUS_SOCKET_INIT_ERROR = -35   # Network transport initialization failed
+CAMERA_STATUS_GIGE_FILTER_INIT_ERROR = -36   # GigE camera kernel filter driver initialization failed; check that the driver is installed correctly, or reinstall it
+CAMERA_STATUS_NET_SEND_ERROR = -37   # Network data send error
+CAMERA_STATUS_DEVICE_LOST = -38   # Connection to the GigE camera lost; heartbeat timeout
+CAMERA_STATUS_DATA_RECV_LESS = -39   # Fewer bytes received than requested
+CAMERA_STATUS_FUNCTION_LOAD_FAILED = -40   # Failed to load program from file
+CAMERA_STATUS_CRITICAL_FILE_LOST = -41   # A file required to run the program is missing
+CAMERA_STATUS_SENSOR_ID_DISMATCH = -42   # Firmware and program do not match, caused by downloading the wrong firmware
+CAMERA_STATUS_OUT_OF_RANGE = -43   # Parameter outside the valid range
+CAMERA_STATUS_REGISTRY_ERROR = -44   # Installer registration error; reinstall the program, or run Setup/Installer.exe in the installation directory
+CAMERA_STATUS_ACCESS_DENY = -45   # Access denied; returned when the specified camera is already in use by another program (a camera cannot be accessed by multiple programs at the same time)
+# AIA standard-compatible error codes
+CAMERA_AIA_PACKET_RESEND = 0x0100       # This frame needs to be resent
+CAMERA_AIA_NOT_IMPLEMENTED = 0x8001     # Command not supported by the device
+CAMERA_AIA_INVALID_PARAMETER = 0x8002   # Illegal command parameter
+CAMERA_AIA_INVALID_ADDRESS = 0x8003     # Inaccessible address
+CAMERA_AIA_WRITE_PROTECT = 0x8004       # Target object is not writable
+CAMERA_AIA_BAD_ALIGNMENT = 0x8005       # Accessed address is not aligned as required
+CAMERA_AIA_ACCESS_DENIED = 0x8006       # No access permission
+CAMERA_AIA_BUSY = 0x8007                # Command is being processed
+CAMERA_AIA_DEPRECATED = 0x8008          # 0x8008-0x800B and 0x800F: this command is deprecated
+CAMERA_AIA_PACKET_UNAVAILABLE = 0x800C  # Packet unavailable
+CAMERA_AIA_DATA_OVERRUN = 0x800D        # Data overrun, usually more data received than needed
+CAMERA_AIA_INVALID_HEADER = 0x800E      # Some fields in the packet header do not match the protocol
+CAMERA_AIA_PACKET_NOT_YET_AVAILABLE = 0x8010             # Image packet data not yet ready; mostly seen in trigger mode when the application's access times out
+CAMERA_AIA_PACKET_AND_PREV_REMOVED_FROM_MEMORY = 0x8011  # Requested packet no longer exists; mostly during resend when the data has already left the buffer
+CAMERA_AIA_PACKET_REMOVED_FROM_MEMORY = 0x8012           # See CAMERA_AIA_PACKET_AND_PREV_REMOVED_FROM_MEMORY
+CAMERA_AIA_NO_REF_TIME = 0x0813                          # No reference clock source; mostly while executing time-synchronization commands
+CAMERA_AIA_PACKET_TEMPORARILY_UNAVAILABLE = 0x0814       # Packet temporarily unavailable due to channel bandwidth; retry later
+CAMERA_AIA_OVERFLOW = 0x0815                             # Device-side data overflow, usually the queue is full
+CAMERA_AIA_ACTION_LATE = 0x0816                          # Command executed past the specified valid time
+CAMERA_AIA_ERROR = 0x8FFF                                # Error
+
+# Image format definitions
+CAMERA_MEDIA_TYPE_MONO = 0x01000000
+CAMERA_MEDIA_TYPE_RGB = 0x02000000
+CAMERA_MEDIA_TYPE_COLOR = 0x02000000
+CAMERA_MEDIA_TYPE_OCCUPY1BIT = 0x00010000
+CAMERA_MEDIA_TYPE_OCCUPY2BIT = 0x00020000
+CAMERA_MEDIA_TYPE_OCCUPY4BIT = 0x00040000
+CAMERA_MEDIA_TYPE_OCCUPY8BIT = 0x00080000
+CAMERA_MEDIA_TYPE_OCCUPY10BIT = 0x000A0000
+CAMERA_MEDIA_TYPE_OCCUPY12BIT = 0x000C0000
+CAMERA_MEDIA_TYPE_OCCUPY16BIT = 0x00100000
+CAMERA_MEDIA_TYPE_OCCUPY24BIT = 0x00180000
+CAMERA_MEDIA_TYPE_OCCUPY32BIT = 0x00200000
+CAMERA_MEDIA_TYPE_OCCUPY36BIT = 0x00240000
+CAMERA_MEDIA_TYPE_OCCUPY48BIT = 0x00300000
+CAMERA_MEDIA_TYPE_EFFECTIVE_PIXEL_SIZE_MASK = 0x00FF0000
+CAMERA_MEDIA_TYPE_EFFECTIVE_PIXEL_SIZE_SHIFT = 16
+CAMERA_MEDIA_TYPE_ID_MASK = 0x0000FFFF
+CAMERA_MEDIA_TYPE_COUNT = 0x46
+
+#mono
+CAMERA_MEDIA_TYPE_MONO1P = (CAMERA_MEDIA_TYPE_MONO | CAMERA_MEDIA_TYPE_OCCUPY1BIT | 0x0037)
+CAMERA_MEDIA_TYPE_MONO2P = (CAMERA_MEDIA_TYPE_MONO | CAMERA_MEDIA_TYPE_OCCUPY2BIT | 0x0038)
+CAMERA_MEDIA_TYPE_MONO4P = (CAMERA_MEDIA_TYPE_MONO | CAMERA_MEDIA_TYPE_OCCUPY4BIT | 0x0039)
+CAMERA_MEDIA_TYPE_MONO8 = (CAMERA_MEDIA_TYPE_MONO | CAMERA_MEDIA_TYPE_OCCUPY8BIT | 0x0001)
+CAMERA_MEDIA_TYPE_MONO8S = (CAMERA_MEDIA_TYPE_MONO | CAMERA_MEDIA_TYPE_OCCUPY8BIT | 0x0002)
+CAMERA_MEDIA_TYPE_MONO10 = (CAMERA_MEDIA_TYPE_MONO | CAMERA_MEDIA_TYPE_OCCUPY16BIT | 0x0003)
+CAMERA_MEDIA_TYPE_MONO10_PACKED = (CAMERA_MEDIA_TYPE_MONO | CAMERA_MEDIA_TYPE_OCCUPY12BIT | 0x0004)
+CAMERA_MEDIA_TYPE_MONO12 = (CAMERA_MEDIA_TYPE_MONO | CAMERA_MEDIA_TYPE_OCCUPY16BIT | 0x0005) +CAMERA_MEDIA_TYPE_MONO12_PACKED = (CAMERA_MEDIA_TYPE_MONO | CAMERA_MEDIA_TYPE_OCCUPY12BIT | 0x0006) +CAMERA_MEDIA_TYPE_MONO14 = (CAMERA_MEDIA_TYPE_MONO | CAMERA_MEDIA_TYPE_OCCUPY16BIT | 0x0025) +CAMERA_MEDIA_TYPE_MONO16 = (CAMERA_MEDIA_TYPE_MONO | CAMERA_MEDIA_TYPE_OCCUPY16BIT | 0x0007) + +# Bayer +CAMERA_MEDIA_TYPE_BAYGR8 = (CAMERA_MEDIA_TYPE_MONO | CAMERA_MEDIA_TYPE_OCCUPY8BIT | 0x0008) +CAMERA_MEDIA_TYPE_BAYRG8 = (CAMERA_MEDIA_TYPE_MONO | CAMERA_MEDIA_TYPE_OCCUPY8BIT | 0x0009) +CAMERA_MEDIA_TYPE_BAYGB8 = (CAMERA_MEDIA_TYPE_MONO | CAMERA_MEDIA_TYPE_OCCUPY8BIT | 0x000A) +CAMERA_MEDIA_TYPE_BAYBG8 = (CAMERA_MEDIA_TYPE_MONO | CAMERA_MEDIA_TYPE_OCCUPY8BIT | 0x000B) + +CAMERA_MEDIA_TYPE_BAYGR10_MIPI = (CAMERA_MEDIA_TYPE_MONO | CAMERA_MEDIA_TYPE_OCCUPY10BIT | 0x0026) +CAMERA_MEDIA_TYPE_BAYRG10_MIPI = (CAMERA_MEDIA_TYPE_MONO | CAMERA_MEDIA_TYPE_OCCUPY10BIT | 0x0027) +CAMERA_MEDIA_TYPE_BAYGB10_MIPI = (CAMERA_MEDIA_TYPE_MONO | CAMERA_MEDIA_TYPE_OCCUPY10BIT | 0x0028) +CAMERA_MEDIA_TYPE_BAYBG10_MIPI = (CAMERA_MEDIA_TYPE_MONO | CAMERA_MEDIA_TYPE_OCCUPY10BIT | 0x0029) + +CAMERA_MEDIA_TYPE_BAYGR10 = (CAMERA_MEDIA_TYPE_MONO | CAMERA_MEDIA_TYPE_OCCUPY16BIT | 0x000C) +CAMERA_MEDIA_TYPE_BAYRG10 = (CAMERA_MEDIA_TYPE_MONO | CAMERA_MEDIA_TYPE_OCCUPY16BIT | 0x000D) +CAMERA_MEDIA_TYPE_BAYGB10 = (CAMERA_MEDIA_TYPE_MONO | CAMERA_MEDIA_TYPE_OCCUPY16BIT | 0x000E) +CAMERA_MEDIA_TYPE_BAYBG10 = (CAMERA_MEDIA_TYPE_MONO | CAMERA_MEDIA_TYPE_OCCUPY16BIT | 0x000F) + +CAMERA_MEDIA_TYPE_BAYGR12 = (CAMERA_MEDIA_TYPE_MONO | CAMERA_MEDIA_TYPE_OCCUPY16BIT | 0x0010) +CAMERA_MEDIA_TYPE_BAYRG12 = (CAMERA_MEDIA_TYPE_MONO | CAMERA_MEDIA_TYPE_OCCUPY16BIT | 0x0011) +CAMERA_MEDIA_TYPE_BAYGB12 = (CAMERA_MEDIA_TYPE_MONO | CAMERA_MEDIA_TYPE_OCCUPY16BIT | 0x0012) +CAMERA_MEDIA_TYPE_BAYBG12 = (CAMERA_MEDIA_TYPE_MONO | CAMERA_MEDIA_TYPE_OCCUPY16BIT | 0x0013) + +CAMERA_MEDIA_TYPE_BAYGR10_PACKED = (CAMERA_MEDIA_TYPE_MONO | 
CAMERA_MEDIA_TYPE_OCCUPY12BIT | 0x0026) +CAMERA_MEDIA_TYPE_BAYRG10_PACKED = (CAMERA_MEDIA_TYPE_MONO | CAMERA_MEDIA_TYPE_OCCUPY12BIT | 0x0027) +CAMERA_MEDIA_TYPE_BAYGB10_PACKED = (CAMERA_MEDIA_TYPE_MONO | CAMERA_MEDIA_TYPE_OCCUPY12BIT | 0x0028) +CAMERA_MEDIA_TYPE_BAYBG10_PACKED = (CAMERA_MEDIA_TYPE_MONO | CAMERA_MEDIA_TYPE_OCCUPY12BIT | 0x0029) + +CAMERA_MEDIA_TYPE_BAYGR12_PACKED = (CAMERA_MEDIA_TYPE_MONO | CAMERA_MEDIA_TYPE_OCCUPY12BIT | 0x002A) +CAMERA_MEDIA_TYPE_BAYRG12_PACKED = (CAMERA_MEDIA_TYPE_MONO | CAMERA_MEDIA_TYPE_OCCUPY12BIT | 0x002B) +CAMERA_MEDIA_TYPE_BAYGB12_PACKED = (CAMERA_MEDIA_TYPE_MONO | CAMERA_MEDIA_TYPE_OCCUPY12BIT | 0x002C) +CAMERA_MEDIA_TYPE_BAYBG12_PACKED = (CAMERA_MEDIA_TYPE_MONO | CAMERA_MEDIA_TYPE_OCCUPY12BIT | 0x002D) + +CAMERA_MEDIA_TYPE_BAYGR16 = (CAMERA_MEDIA_TYPE_MONO | CAMERA_MEDIA_TYPE_OCCUPY16BIT | 0x002E) +CAMERA_MEDIA_TYPE_BAYRG16 = (CAMERA_MEDIA_TYPE_MONO | CAMERA_MEDIA_TYPE_OCCUPY16BIT | 0x002F) +CAMERA_MEDIA_TYPE_BAYGB16 = (CAMERA_MEDIA_TYPE_MONO | CAMERA_MEDIA_TYPE_OCCUPY16BIT | 0x0030) +CAMERA_MEDIA_TYPE_BAYBG16 = (CAMERA_MEDIA_TYPE_MONO | CAMERA_MEDIA_TYPE_OCCUPY16BIT | 0x0031) + +# RGB +CAMERA_MEDIA_TYPE_RGB8 = (CAMERA_MEDIA_TYPE_COLOR | CAMERA_MEDIA_TYPE_OCCUPY24BIT | 0x0014) +CAMERA_MEDIA_TYPE_BGR8 = (CAMERA_MEDIA_TYPE_COLOR | CAMERA_MEDIA_TYPE_OCCUPY24BIT | 0x0015) +CAMERA_MEDIA_TYPE_RGBA8 = (CAMERA_MEDIA_TYPE_COLOR | CAMERA_MEDIA_TYPE_OCCUPY32BIT | 0x0016) +CAMERA_MEDIA_TYPE_BGRA8 = (CAMERA_MEDIA_TYPE_COLOR | CAMERA_MEDIA_TYPE_OCCUPY32BIT | 0x0017) +CAMERA_MEDIA_TYPE_RGB10 = (CAMERA_MEDIA_TYPE_COLOR | CAMERA_MEDIA_TYPE_OCCUPY48BIT | 0x0018) +CAMERA_MEDIA_TYPE_BGR10 = (CAMERA_MEDIA_TYPE_COLOR | CAMERA_MEDIA_TYPE_OCCUPY48BIT | 0x0019) +CAMERA_MEDIA_TYPE_RGB12 = (CAMERA_MEDIA_TYPE_COLOR | CAMERA_MEDIA_TYPE_OCCUPY48BIT | 0x001A) +CAMERA_MEDIA_TYPE_BGR12 = (CAMERA_MEDIA_TYPE_COLOR | CAMERA_MEDIA_TYPE_OCCUPY48BIT | 0x001B) +CAMERA_MEDIA_TYPE_RGB16 = (CAMERA_MEDIA_TYPE_COLOR | CAMERA_MEDIA_TYPE_OCCUPY48BIT | 0x0033) 
+CAMERA_MEDIA_TYPE_RGB10V1_PACKED = (CAMERA_MEDIA_TYPE_COLOR | CAMERA_MEDIA_TYPE_OCCUPY32BIT | 0x001C) +CAMERA_MEDIA_TYPE_RGB10P32 = (CAMERA_MEDIA_TYPE_COLOR | CAMERA_MEDIA_TYPE_OCCUPY32BIT | 0x001D) +CAMERA_MEDIA_TYPE_RGB12V1_PACKED = (CAMERA_MEDIA_TYPE_COLOR | CAMERA_MEDIA_TYPE_OCCUPY36BIT | 0X0034) +CAMERA_MEDIA_TYPE_RGB565P = (CAMERA_MEDIA_TYPE_COLOR | CAMERA_MEDIA_TYPE_OCCUPY16BIT | 0x0035) +CAMERA_MEDIA_TYPE_BGR565P = (CAMERA_MEDIA_TYPE_COLOR | CAMERA_MEDIA_TYPE_OCCUPY16BIT | 0X0036) + +# YUV and YCbCr +CAMERA_MEDIA_TYPE_YUV411_8_UYYVYY = (CAMERA_MEDIA_TYPE_COLOR | CAMERA_MEDIA_TYPE_OCCUPY12BIT | 0x001E) +CAMERA_MEDIA_TYPE_YUV422_8_UYVY = (CAMERA_MEDIA_TYPE_COLOR | CAMERA_MEDIA_TYPE_OCCUPY16BIT | 0x001F) +CAMERA_MEDIA_TYPE_YUV422_8 = (CAMERA_MEDIA_TYPE_COLOR | CAMERA_MEDIA_TYPE_OCCUPY16BIT | 0x0032) +CAMERA_MEDIA_TYPE_YUV8_UYV = (CAMERA_MEDIA_TYPE_COLOR | CAMERA_MEDIA_TYPE_OCCUPY24BIT | 0x0020) +CAMERA_MEDIA_TYPE_YCBCR8_CBYCR = (CAMERA_MEDIA_TYPE_COLOR | CAMERA_MEDIA_TYPE_OCCUPY24BIT | 0x003A) +#CAMERA_MEDIA_TYPE_YCBCR422_8 : YYYYCbCrCbCr +CAMERA_MEDIA_TYPE_YCBCR422_8 = (CAMERA_MEDIA_TYPE_COLOR | CAMERA_MEDIA_TYPE_OCCUPY16BIT | 0x003B) +CAMERA_MEDIA_TYPE_YCBCR422_8_CBYCRY = (CAMERA_MEDIA_TYPE_COLOR | CAMERA_MEDIA_TYPE_OCCUPY16BIT | 0x0043) +CAMERA_MEDIA_TYPE_YCBCR411_8_CBYYCRYY = (CAMERA_MEDIA_TYPE_COLOR | CAMERA_MEDIA_TYPE_OCCUPY12BIT | 0x003C) +CAMERA_MEDIA_TYPE_YCBCR601_8_CBYCR = (CAMERA_MEDIA_TYPE_COLOR | CAMERA_MEDIA_TYPE_OCCUPY24BIT | 0x003D) +CAMERA_MEDIA_TYPE_YCBCR601_422_8 = (CAMERA_MEDIA_TYPE_COLOR | CAMERA_MEDIA_TYPE_OCCUPY16BIT | 0x003E) +CAMERA_MEDIA_TYPE_YCBCR601_422_8_CBYCRY = (CAMERA_MEDIA_TYPE_COLOR | CAMERA_MEDIA_TYPE_OCCUPY16BIT | 0x0044) +CAMERA_MEDIA_TYPE_YCBCR601_411_8_CBYYCRYY = (CAMERA_MEDIA_TYPE_COLOR | CAMERA_MEDIA_TYPE_OCCUPY12BIT | 0x003F) +CAMERA_MEDIA_TYPE_YCBCR709_8_CBYCR = (CAMERA_MEDIA_TYPE_COLOR | CAMERA_MEDIA_TYPE_OCCUPY24BIT | 0x0040) +CAMERA_MEDIA_TYPE_YCBCR709_422_8 = (CAMERA_MEDIA_TYPE_COLOR | 
CAMERA_MEDIA_TYPE_OCCUPY16BIT | 0x0041)
+CAMERA_MEDIA_TYPE_YCBCR709_422_8_CBYCRY = (CAMERA_MEDIA_TYPE_COLOR | CAMERA_MEDIA_TYPE_OCCUPY16BIT | 0x0045)
+CAMERA_MEDIA_TYPE_YCBCR709_411_8_CBYYCRYY = (CAMERA_MEDIA_TYPE_COLOR | CAMERA_MEDIA_TYPE_OCCUPY12BIT | 0x0042)
+
+# RGB Planar
+CAMERA_MEDIA_TYPE_RGB8_PLANAR = (CAMERA_MEDIA_TYPE_COLOR | CAMERA_MEDIA_TYPE_OCCUPY24BIT | 0x0021)
+CAMERA_MEDIA_TYPE_RGB10_PLANAR = (CAMERA_MEDIA_TYPE_COLOR | CAMERA_MEDIA_TYPE_OCCUPY48BIT | 0x0022)
+CAMERA_MEDIA_TYPE_RGB12_PLANAR = (CAMERA_MEDIA_TYPE_COLOR | CAMERA_MEDIA_TYPE_OCCUPY48BIT | 0x0023)
+CAMERA_MEDIA_TYPE_RGB16_PLANAR = (CAMERA_MEDIA_TYPE_COLOR | CAMERA_MEDIA_TYPE_OCCUPY48BIT | 0x0024)
+
+# File save formats
+FILE_JPG = 1
+FILE_BMP = 2
+FILE_RAW = 4
+FILE_PNG = 8
+FILE_BMP_8BIT = 16
+FILE_PNG_8BIT = 32
+FILE_RAW_16BIT = 64
+
+# External trigger signal types
+EXT_TRIG_LEADING_EDGE = 0
+EXT_TRIG_TRAILING_EDGE = 1
+EXT_TRIG_HIGH_LEVEL = 2
+EXT_TRIG_LOW_LEVEL = 3
+EXT_TRIG_DOUBLE_EDGE = 4
+
+# I/O modes
+IOMODE_TRIG_INPUT = 0
+IOMODE_STROBE_OUTPUT = 1
+IOMODE_GP_INPUT = 2
+IOMODE_GP_OUTPUT = 3
+IOMODE_PWM_OUTPUT = 4
+
+
+# Camera operation exception
+class CameraException(Exception):
+    """docstring for CameraException"""
+    def __init__(self, error_code):
+        super(CameraException, self).__init__()
+        self.error_code = error_code
+        self.message = CameraGetErrorString(error_code)
+
+    def __str__(self):
+        return 'error_code:{} message:{}'.format(self.error_code, self.message)
+
+class MvStructure(Structure):
+    def __str__(self):
+        strs = []
+        for field in self._fields_:
+            name = field[0]
+            value = getattr(self, name)
+            if isinstance(value, type(b'')):
+                value = _string_buffer_to_str(value)
+            strs.append("{}:{}".format(name, value))
+        return '\n'.join(strs)
+
+    def __repr__(self):
+        return self.__str__()
+
+    def clone(self):
+        obj = type(self)()
+        memmove(byref(obj), byref(self), sizeof(self))
+        return obj
+
+# Camera device information (read-only, do not modify)
+class tSdkCameraDevInfo(MvStructure):
+    _fields_ = [("acProductSeries", c_char * 32),  # Product series
+                ("acProductName", c_char *
32),  # Product name
+                ("acFriendlyName", c_char * 32),  # Friendly name
+                ("acLinkName", c_char * 32),  # Kernel symbolic link name, internal use
+                ("acDriverVersion", c_char * 32),  # Driver version
+                ("acSensorType", c_char * 32),  # Sensor type
+                ("acPortType", c_char * 32),  # Port (interface) type
+                ("acSn", c_char * 32),  # Unique product serial number
+                ("uInstance", c_uint)]  # Instance index of this camera model on this PC, used to distinguish multiple cameras of the same model
+
+    def GetProductSeries(self):
+        return _string_buffer_to_str(self.acProductSeries)
+    def GetProductName(self):
+        return _string_buffer_to_str(self.acProductName)
+    def GetFriendlyName(self):
+        return _string_buffer_to_str(self.acFriendlyName)
+    def GetLinkName(self):
+        return _string_buffer_to_str(self.acLinkName)
+    def GetDriverVersion(self):
+        return _string_buffer_to_str(self.acDriverVersion)
+    def GetSensorType(self):
+        return _string_buffer_to_str(self.acSensorType)
+    def GetPortType(self):
+        return _string_buffer_to_str(self.acPortType)
+    def GetSn(self):
+        return _string_buffer_to_str(self.acSn)
+
+# Camera resolution setting range
+class tSdkResolutionRange(MvStructure):
+    _fields_ = [("iHeightMax", c_int),  # Maximum image height
+                ("iHeightMin", c_int),  # Minimum image height
+                ("iWidthMax", c_int),   # Maximum image width
+                ("iWidthMin", c_int),   # Minimum image width
+                ("uSkipModeMask", c_uint),       # SKIP mode mask; 0 means SKIP is unsupported. bit0 = SKIP 2x2 supported, bit1 = SKIP 3x3 supported, ...
+                ("uBinSumModeMask", c_uint),     # BIN (sum) mode mask; 0 means BIN is unsupported. bit0 = BIN 2x2 supported, bit1 = BIN 3x3 supported, ...
+                ("uBinAverageModeMask", c_uint), # BIN (average) mode mask; 0 means BIN is unsupported. bit0 = BIN 2x2 supported, bit1 = BIN 3x3 supported, ...
+ ("uResampleMask", c_uint)] #硬件重采样的掩码 + +#相机的分辨率描述 +class tSdkImageResolution(MvStructure): + _fields_ = [ + ("iIndex", c_int), # 索引号,[0,N]表示预设的分辨率(N 为预设分辨率的最大个数,一般不超过20),OXFF 表示自定义分辨率(ROI) + ("acDescription", c_char * 32), # 该分辨率的描述信息。仅预设分辨率时该信息有效。自定义分辨率可忽略该信息 + ("uBinSumMode", c_uint), # BIN(求和)的模式,范围不能超过tSdkResolutionRange中uBinSumModeMask + ("uBinAverageMode", c_uint), # BIN(求均值)的模式,范围不能超过tSdkResolutionRange中uBinAverageModeMask + ("uSkipMode", c_uint), # 是否SKIP的尺寸,为0表示禁止SKIP模式,范围不能超过tSdkResolutionRange中uSkipModeMask + ("uResampleMask", c_uint), # 硬件重采样的掩码 + ("iHOffsetFOV", c_int), # 采集视场相对于Sensor最大视场左上角的垂直偏移 + ("iVOffsetFOV", c_int), # 采集视场相对于Sensor最大视场左上角的水平偏移 + ("iWidthFOV", c_int), # 采集视场的宽度 + ("iHeightFOV", c_int), # 采集视场的高度 + ("iWidth", c_int), # 相机最终输出的图像的宽度 + ("iHeight", c_int), # 相机最终输出的图像的高度 + ("iWidthZoomHd", c_int), # 硬件缩放的宽度,不需要进行此操作的分辨率,此变量设置为0. + ("iHeightZoomHd", c_int), # 硬件缩放的高度,不需要进行此操作的分辨率,此变量设置为0. + ("iWidthZoomSw", c_int), # 软件缩放的宽度,不需要进行此操作的分辨率,此变量设置为0. + ("iHeightZoomSw", c_int), # 软件缩放的高度,不需要进行此操作的分辨率,此变量设置为0. 
+    ]
+
+    def GetDescription(self):
+        return _string_buffer_to_str(self.acDescription)
+
+# Camera white-balance (color temperature) mode description
+class tSdkColorTemperatureDes(MvStructure):
+    _fields_ = [
+        ("iIndex", c_int),               # Mode index
+        ("acDescription", c_char * 32),  # Description
+    ]
+
+    def GetDescription(self):
+        return _string_buffer_to_str(self.acDescription)
+
+# Camera frame rate description
+class tSdkFrameSpeed(MvStructure):
+    _fields_ = [
+        ("iIndex", c_int),               # Frame rate index; generally 0 = low speed, 1 = normal, 2 = high speed
+        ("acDescription", c_char * 32),  # Description
+    ]
+
+    def GetDescription(self):
+        return _string_buffer_to_str(self.acDescription)
+
+# Camera exposure capability range
+class tSdkExpose(MvStructure):
+    _fields_ = [
+        ("uiTargetMin", c_uint),      # Minimum auto-exposure brightness target
+        ("uiTargetMax", c_uint),      # Maximum auto-exposure brightness target
+        ("uiAnalogGainMin", c_uint),  # Minimum analog gain, in the units defined by fAnalogGainStep
+        ("uiAnalogGainMax", c_uint),  # Maximum analog gain, in the units defined by fAnalogGainStep
+        ("fAnalogGainStep", c_float), # Amplification added per unit of analog gain. For example, uiAnalogGainMin is typically 16 and fAnalogGainStep is typically 0.125, so the minimum amplification is 16 * 0.125 = 2x
+        ("uiExposeTimeMin", c_uint),  # Minimum exposure time in manual mode, in lines. Use CameraGetExposureLineTime to get the time per line (microseconds) and thus the whole-frame exposure time
+        ("uiExposeTimeMax", c_uint),  # Maximum exposure time in manual mode, in lines
+    ]
+
+# Trigger mode description
+class tSdkTrigger(MvStructure):
+    _fields_ = [
+        ("iIndex", c_int),               # Mode index
+        ("acDescription", c_char * 32),  # Description
+    ]
+
+    def GetDescription(self):
+        return _string_buffer_to_str(self.acDescription)
+
+# Transfer packet size description (mainly relevant for GigE cameras)
+class tSdkPackLength(MvStructure):
+    _fields_ = [
+        ("iIndex", c_int),               # Mode index
+        ("acDescription", c_char * 32),  # Description
+        ("iPackSize", c_uint),
+    ]
+
+    def GetDescription(self):
+        return _string_buffer_to_str(self.acDescription)
+
+# Preset LUT description
+class tSdkPresetLut(MvStructure):
+    _fields_ = [
+        ("iIndex", c_int),               # Index
+        ("acDescription", c_char * 32),  # Description
+    ]
+
+    def GetDescription(self):
+        return _string_buffer_to_str(self.acDescription)
+
+# AE (auto-exposure) algorithm description
+class tSdkAeAlgorithm(MvStructure):
+    _fields_ = [
+        ("iIndex", c_int),               # Index
+        ("acDescription", c_char * 32),  # Description
+    ]
+
+    def GetDescription(self):
+        return
_string_buffer_to_str(self.acDescription)
+
+# RAW-to-RGB (Bayer decode) algorithm description
+class tSdkBayerDecodeAlgorithm(MvStructure):
+    _fields_ = [
+        ("iIndex", c_int),               # Index
+        ("acDescription", c_char * 32),  # Description
+    ]
+
+    def GetDescription(self):
+        return _string_buffer_to_str(self.acDescription)
+
+# Frame rate statistics
+class tSdkFrameStatistic(MvStructure):
+    _fields_ = [
+        ("iTotal", c_int),    # Total number of frames acquired so far (including error frames)
+        ("iCapture", c_int),  # Number of valid frames acquired so far
+        ("iLost", c_int),     # Number of frames lost so far
+    ]
+
+# Image data formats output by the camera
+class tSdkMediaType(MvStructure):
+    _fields_ = [
+        ("iIndex", c_int),               # Format type index
+        ("acDescription", c_char * 32),  # Description
+        ("iMediaType", c_uint),          # Corresponding image format code, e.g. CAMERA_MEDIA_TYPE_BAYGR8
+    ]
+
+    def GetDescription(self):
+        return _string_buffer_to_str(self.acDescription)
+
+# Gamma setting range
+class tGammaRange(MvStructure):
+    _fields_ = [
+        ("iMin", c_int),  # Minimum
+        ("iMax", c_int),  # Maximum
+    ]
+
+# Contrast setting range
+class tContrastRange(MvStructure):
+    _fields_ = [
+        ("iMin", c_int),  # Minimum
+        ("iMax", c_int),  # Maximum
+    ]
+
+# RGB three-channel digital gain setting range
+class tRgbGainRange(MvStructure):
+    _fields_ = [
+        ("iRGainMin", c_int),  # Minimum red gain
+        ("iRGainMax", c_int),  # Maximum red gain
+        ("iGGainMin", c_int),  # Minimum green gain
+        ("iGGainMax", c_int),  # Maximum green gain
+        ("iBGainMin", c_int),  # Minimum blue gain
+        ("iBGainMax", c_int),  # Maximum blue gain
+    ]
+
+# Saturation setting range
+class tSaturationRange(MvStructure):
+    _fields_ = [
+        ("iMin", c_int),  # Minimum
+        ("iMax", c_int),  # Maximum
+    ]
+
+# Sharpness setting range
+class tSharpnessRange(MvStructure):
+    _fields_ = [
+        ("iMin", c_int),  # Minimum
+        ("iMax", c_int),  # Maximum
+    ]
+
+# ISP module capability information
+class tSdkIspCapacity(MvStructure):
+    _fields_ = [
+        ("bMonoSensor", c_int),      # Whether this camera model is monochrome; if so, color-related functions cannot be adjusted
+        ("bWbOnce", c_int),          # Whether this camera model supports one-shot (manual) white balance
+        ("bAutoWb", c_int),          # Whether this camera model supports auto white balance
+        ("bAutoExposure", c_int),    # Whether this camera model supports auto exposure
+        ("bManualExposure", c_int),  # Whether this camera model supports manual exposure
+        ("bAntiFlick", c_int),       # Whether this camera model supports anti-flicker
+        ("bDeviceIsp", c_int),       # Whether this camera model supports hardware ISP
+        ("bForceUseDeviceIsp", c_int), # When both bDeviceIsp and bForceUseDeviceIsp are TRUE, only the hardware ISP is used and this cannot be disabled
+        ("bZoomHD", c_int),
# Whether the camera hardware supports scaled image output (downscaling only)
+    ]
+
+# Aggregated device capability description; these fields can be used to build a UI dynamically
+class tSdkCameraCapbility(MvStructure):
+    _fields_ = [
+        ("pTriggerDesc", POINTER(tSdkTrigger)),
+        ("iTriggerDesc", c_int),     # Number of trigger modes, i.e. size of the pTriggerDesc array
+        ("pImageSizeDesc", POINTER(tSdkImageResolution)),
+        ("iImageSizeDesc", c_int),   # Number of preset resolutions, i.e. size of the pImageSizeDesc array
+        ("pClrTempDesc", POINTER(tSdkColorTemperatureDes)),
+        ("iClrTempDesc", c_int),     # Number of preset color temperatures
+        ("pMediaTypeDesc", POINTER(tSdkMediaType)),
+        ("iMediaTypeDesc", c_int),   # Number of image output formats, i.e. size of the pMediaTypeDesc array
+        ("pFrameSpeedDesc", POINTER(tSdkFrameSpeed)),  # Adjustable frame speed types, corresponding to the normal, high and super speed settings in the UI
+        ("iFrameSpeedDesc", c_int),  # Number of adjustable frame speed types, i.e. size of the pFrameSpeedDesc array
+        ("pPackLenDesc", POINTER(tSdkPackLength)),     # Transfer packet lengths, generally used for network devices
+        ("iPackLenDesc", c_int),     # Number of selectable transfer packet lengths, i.e. size of the pPackLenDesc array
+        ("iOutputIoCounts", c_int),  # Number of programmable output I/Os
+        ("iInputIoCounts", c_int),   # Number of programmable input I/Os
+        ("pPresetLutDesc", POINTER(tSdkPresetLut)),    # Camera preset LUTs
+        ("iPresetLut", c_int),       # Number of camera preset LUTs, i.e. size of the pPresetLutDesc array
+        ("iUserDataMaxLen", c_int),  # Maximum length of the user data area saved in the camera; 0 means none
+        ("bParamInDevice", c_int),   # Whether the device supports reading/writing parameter groups from the device; 1 = supported, 0 = not supported
+        ("pAeAlmSwDesc", POINTER(tSdkAeAlgorithm)),    # Software auto-exposure algorithm descriptions
+        ("iAeAlmSwDesc", c_int),     # Number of software auto-exposure algorithms
+        ("pAeAlmHdDesc", POINTER(tSdkAeAlgorithm)),    # Hardware auto-exposure algorithm descriptions; NULL means hardware auto-exposure is unsupported
+        ("iAeAlmHdDesc", c_int),     # Number of hardware auto-exposure algorithms; 0 means hardware auto-exposure is unsupported
+        ("pBayerDecAlmSwDesc", POINTER(tSdkBayerDecodeAlgorithm)),  # Software Bayer-to-RGB conversion algorithm descriptions
+        ("iBayerDecAlmSwDesc", c_int),  # Number of software Bayer-to-RGB conversion algorithms
+        ("pBayerDecAlmHdDesc", POINTER(tSdkBayerDecodeAlgorithm)),  # Hardware Bayer-to-RGB conversion algorithm descriptions; NULL means unsupported
+        ("iBayerDecAlmHdDesc", c_int),  # Number of hardware Bayer-to-RGB conversion algorithms; 0 means unsupported
+
+        # Adjustment ranges of image parameters, used to build a UI dynamically
+        ("sExposeDesc", tSdkExpose),               # Exposure range
+        ("sResolutionRange", tSdkResolutionRange), # Resolution range description
+        ("sRgbGainRange", tRgbGainRange),          # Image digital gain range description
+        ("sSaturationRange", tSaturationRange),    # Saturation range description
+        ("sGammaRange", tGammaRange),              # Gamma range description
+        ("sContrastRange", tContrastRange),        # Contrast range description
+        ("sSharpnessRange", tSharpnessRange),      # Sharpness range description
+        
("sIspCapacity", tSdkIspCapacity), #ISP能力描述 + ] + +#图像帧头信息 +class tSdkFrameHead(MvStructure): + _fields_ = [ + ("uiMediaType", c_uint), # 图像格式,Image Format + ("uBytes", c_uint), # 图像数据字节数,Total bytes + ("iWidth", c_int), # 宽度 Image height + ("iHeight", c_int), # 高度 Image width + ("iWidthZoomSw", c_int), # 软件缩放的宽度,不需要进行软件裁剪的图像,此变量设置为0. + ("iHeightZoomSw", c_int), # 软件缩放的高度,不需要进行软件裁剪的图像,此变量设置为0. + ("bIsTrigger", c_int), # 指示是否为触发帧 is trigger + ("uiTimeStamp", c_uint), # 该帧的采集时间,单位0.1毫秒 + ("uiExpTime", c_uint), # 当前图像的曝光值,单位为微秒us + ("fAnalogGain", c_float), # 当前图像的模拟增益倍数 + ("iGamma", c_int), # 该帧图像的伽马设定值,仅当LUT模式为动态参数生成时有效,其余模式下为-1 + ("iContrast", c_int), # 该帧图像的对比度设定值,仅当LUT模式为动态参数生成时有效,其余模式下为-1 + ("iSaturation", c_int), # 该帧图像的饱和度设定值,对于黑白相机无意义,为0 + ("fRgain", c_float), # 该帧图像处理的红色数字增益倍数,对于黑白相机无意义,为1 + ("fGgain", c_float), # 该帧图像处理的绿色数字增益倍数,对于黑白相机无意义,为1 + ("fBgain", c_float), # 该帧图像处理的蓝色数字增益倍数,对于黑白相机无意义,为1 + ] + +# Grabber统计信息 +class tSdkGrabberStat(MvStructure): + _fields_ = [ + ("Width", c_int), # 帧图像大小 + ("Height", c_int), # 帧图像大小 + ("Disp", c_int), # 显示帧数量 + ("Capture", c_int), # 采集的有效帧的数量 + ("Lost", c_int), # 丢帧的数量 + ("Error", c_int), # 错帧的数量 + ("DispFps", c_float), # 显示帧率 + ("CapFps", c_float), # 捕获帧率 + ] + +# 方法回调辅助类 +class method(object): + def __init__(self, FuncType): + super(method, self).__init__() + self.FuncType = FuncType + self.cache = {} + + def __call__(self, cb): + self.cb = cb + return self + + def __get__(self, obj, objtype): + try: + return self.cache[obj] + except KeyError as e: + def cl(*args): + return self.cb(obj, *args) + r = self.cache[obj] = self.FuncType(cl) + return r + +# 图像捕获的回调函数定义 +CAMERA_SNAP_PROC = CALLBACK_FUNC_TYPE(None, c_int, c_void_p, POINTER(tSdkFrameHead), c_void_p) + +# 相机连接状态回调 +CAMERA_CONNECTION_STATUS_CALLBACK = CALLBACK_FUNC_TYPE(None, c_int, c_uint, c_uint, c_void_p) + +# 异步抓图完成回调 +pfnCameraGrabberSaveImageComplete = CALLBACK_FUNC_TYPE(None, c_void_p, c_void_p, c_int, c_void_p) + +# 帧监听回调 +pfnCameraGrabberFrameListener 
= CALLBACK_FUNC_TYPE(c_int, c_void_p, c_int, c_void_p, POINTER(tSdkFrameHead), c_void_p)
+
+# Grabber image capture callback
+pfnCameraGrabberFrameCallback = CALLBACK_FUNC_TYPE(None, c_void_p, c_void_p, POINTER(tSdkFrameHead), c_void_p)
+
+#-----------------------------------Function interfaces------------------------------------------
+
+# Thread-local storage
+_tls = local()
+
+# Error code returned by the last SDK call on this thread
+def GetLastError():
+    try:
+        return _tls.last_error
+    except AttributeError as e:
+        _tls.last_error = 0
+        return 0
+
+def SetLastError(err_code):
+    _tls.last_error = err_code
+
+def _string_buffer_to_str(buf):
+    s = buf if isinstance(buf, type(b'')) else buf.value
+
+    for codec in ('gbk', 'utf-8'):
+        try:
+            s = s.decode(codec)
+            break
+        except UnicodeDecodeError as e:
+            continue
+
+    if isinstance(s, str):
+        return s
+    else:
+        return s.encode('utf-8')
+
+def _str_to_string_buffer(str):
+    if type(str) is type(u''):
+        s = str.encode('gbk')
+    else:
+        s = str.decode('utf-8').encode('gbk')
+    return create_string_buffer(s)
+
+def CameraSdkInit(iLanguageSel):
+    err_code = _sdk.CameraSdkInit(iLanguageSel)
+    SetLastError(err_code)
+    return err_code
+
+def CameraSetSysOption(optionName, value):
+    err_code = _sdk.CameraSetSysOption(_str_to_string_buffer(optionName), _str_to_string_buffer(str(value)))
+    SetLastError(err_code)
+    return err_code
+
+def CameraEnumerateDevice(MaxCount = 32):
+    Nums = c_int(MaxCount)
+    pCameraList = (tSdkCameraDevInfo * Nums.value)()
+    err_code = _sdk.CameraEnumerateDevice(pCameraList, byref(Nums))
+    SetLastError(err_code)
+    return pCameraList[0:Nums.value]
+
+def CameraEnumerateDeviceEx():
+    return _sdk.CameraEnumerateDeviceEx()
+
+def CameraIsOpened(pCameraInfo):
+    pOpened = c_int()
+    err_code = _sdk.CameraIsOpened(byref(pCameraInfo), byref(pOpened))
+    SetLastError(err_code)
+    return pOpened.value != 0
+
+def CameraInit(pCameraInfo, emParamLoadMode = -1, emTeam = -1):
+    pCameraHandle = c_int()
+    err_code = _sdk.CameraInit(byref(pCameraInfo), emParamLoadMode, emTeam, byref(pCameraHandle))
+    
SetLastError(err_code) + if err_code != 0: + raise CameraException(err_code) + return pCameraHandle.value + +def CameraInitEx(iDeviceIndex, emParamLoadMode = -1, emTeam = -1): + pCameraHandle = c_int() + err_code = _sdk.CameraInitEx(iDeviceIndex, emParamLoadMode, emTeam, byref(pCameraHandle)) + SetLastError(err_code) + if err_code != 0: + raise CameraException(err_code) + return pCameraHandle.value + +def CameraInitEx2(CameraName): + pCameraHandle = c_int() + err_code = _sdk.CameraInitEx2(_str_to_string_buffer(CameraName), byref(pCameraHandle)) + SetLastError(err_code) + if err_code != 0: + raise CameraException(err_code) + return pCameraHandle.value + +def CameraSetCallbackFunction(hCamera, pCallBack, pContext = 0): + err_code = _sdk.CameraSetCallbackFunction(hCamera, pCallBack, c_void_p(pContext), None) + SetLastError(err_code) + return err_code + +def CameraUnInit(hCamera): + err_code = _sdk.CameraUnInit(hCamera) + SetLastError(err_code) + return err_code + +def CameraGetInformation(hCamera): + pbuffer = c_char_p() + err_code = _sdk.CameraGetInformation(hCamera, byref(pbuffer) ) + SetLastError(err_code) + if err_code == 0 and pbuffer.value is not None: + return _string_buffer_to_str(pbuffer) + return '' + +def CameraImageProcess(hCamera, pbyIn, pbyOut, pFrInfo): + err_code = _sdk.CameraImageProcess(hCamera, c_void_p(pbyIn), c_void_p(pbyOut), byref(pFrInfo)) + SetLastError(err_code) + return err_code + +def CameraImageProcessEx(hCamera, pbyIn, pbyOut, pFrInfo, uOutFormat, uReserved): + err_code = _sdk.CameraImageProcessEx(hCamera, c_void_p(pbyIn), c_void_p(pbyOut), byref(pFrInfo), uOutFormat, uReserved) + SetLastError(err_code) + return err_code + +def CameraDisplayInit(hCamera, hWndDisplay): + err_code = _sdk.CameraDisplayInit(hCamera, hWndDisplay) + SetLastError(err_code) + return err_code + +def CameraDisplayRGB24(hCamera, pFrameBuffer, pFrInfo): + err_code = _sdk.CameraDisplayRGB24(hCamera, c_void_p(pFrameBuffer), byref(pFrInfo) ) + SetLastError(err_code) + 
+    return err_code
+
+def CameraSetDisplayMode(hCamera, iMode):
+    err_code = _sdk.CameraSetDisplayMode(hCamera, iMode)
+    SetLastError(err_code)
+    return err_code
+
+def CameraSetDisplayOffset(hCamera, iOffsetX, iOffsetY):
+    err_code = _sdk.CameraSetDisplayOffset(hCamera, iOffsetX, iOffsetY)
+    SetLastError(err_code)
+    return err_code
+
+def CameraSetDisplaySize(hCamera, iWidth, iHeight):
+    err_code = _sdk.CameraSetDisplaySize(hCamera, iWidth, iHeight)
+    SetLastError(err_code)
+    return err_code
+
+def CameraGetImageBuffer(hCamera, wTimes):
+    pbyBuffer = c_void_p()
+    pFrameInfo = tSdkFrameHead()
+    err_code = _sdk.CameraGetImageBuffer(hCamera, byref(pFrameInfo), byref(pbyBuffer), wTimes)
+    SetLastError(err_code)
+    if err_code != 0:
+        raise CameraException(err_code)
+    return (pbyBuffer.value, pFrameInfo)
+
+def CameraGetImageBufferEx(hCamera, wTimes):
+    _sdk.CameraGetImageBufferEx.restype = c_void_p
+    piWidth = c_int()
+    piHeight = c_int()
+    pFrameBuffer = _sdk.CameraGetImageBufferEx(hCamera, byref(piWidth), byref(piHeight), wTimes)
+    err_code = CAMERA_STATUS_SUCCESS if pFrameBuffer else CAMERA_STATUS_TIME_OUT
+    SetLastError(err_code)
+    if pFrameBuffer:
+        return (pFrameBuffer, piWidth.value, piHeight.value)
+    else:
+        raise CameraException(err_code)
+
+def CameraSnapToBuffer(hCamera, wTimes):
+    pbyBuffer = c_void_p()
+    pFrameInfo = tSdkFrameHead()
+    err_code = _sdk.CameraSnapToBuffer(hCamera, byref(pFrameInfo), byref(pbyBuffer), wTimes)
+    SetLastError(err_code)
+    if err_code != 0:
+        raise CameraException(err_code)
+    return (pbyBuffer.value, pFrameInfo)
+
+def CameraReleaseImageBuffer(hCamera, pbyBuffer):
+    err_code = _sdk.CameraReleaseImageBuffer(hCamera, c_void_p(pbyBuffer))
+    SetLastError(err_code)
+    return err_code
+
+def CameraPlay(hCamera):
+    err_code = _sdk.CameraPlay(hCamera)
+    SetLastError(err_code)
+    return err_code
+
+def CameraPause(hCamera):
+    err_code = _sdk.CameraPause(hCamera)
+    SetLastError(err_code)
+    return err_code
+
+def CameraStop(hCamera):
+    err_code
= _sdk.CameraStop(hCamera)
+    SetLastError(err_code)
+    return err_code
+
+def CameraInitRecord(hCamera, iFormat, pcSavePath, b2GLimit, dwQuality, iFrameRate):
+    err_code = _sdk.CameraInitRecord(hCamera, iFormat, _str_to_string_buffer(pcSavePath), b2GLimit, dwQuality, iFrameRate)
+    SetLastError(err_code)
+    return err_code
+
+def CameraStopRecord(hCamera):
+    err_code = _sdk.CameraStopRecord(hCamera)
+    SetLastError(err_code)
+    return err_code
+
+def CameraPushFrame(hCamera, pbyImageBuffer, pFrInfo):
+    err_code = _sdk.CameraPushFrame(hCamera, c_void_p(pbyImageBuffer), byref(pFrInfo))
+    SetLastError(err_code)
+    return err_code
+
+def CameraSaveImage(hCamera, lpszFileName, pbyImageBuffer, pFrInfo, byFileType, byQuality):
+    err_code = _sdk.CameraSaveImage(hCamera, _str_to_string_buffer(lpszFileName), c_void_p(pbyImageBuffer), byref(pFrInfo), byFileType, byQuality)
+    SetLastError(err_code)
+    return err_code
+
+def CameraSaveImageEx(hCamera, lpszFileName, pbyImageBuffer, uImageFormat, iWidth, iHeight, byFileType, byQuality):
+    err_code = _sdk.CameraSaveImageEx(hCamera, _str_to_string_buffer(lpszFileName), c_void_p(pbyImageBuffer), uImageFormat, iWidth, iHeight, byFileType, byQuality)
+    SetLastError(err_code)
+    return err_code
+
+def CameraGetImageResolution(hCamera):
+    psCurVideoSize = tSdkImageResolution()
+    err_code = _sdk.CameraGetImageResolution(hCamera, byref(psCurVideoSize))
+    SetLastError(err_code)
+    return psCurVideoSize
+
+def CameraSetImageResolution(hCamera, pImageResolution):
+    err_code = _sdk.CameraSetImageResolution(hCamera, byref(pImageResolution))
+    SetLastError(err_code)
+    return err_code
+
+def CameraSetImageResolutionEx(hCamera, iIndex, Mode, ModeSize, x, y, width, height, ZoomWidth, ZoomHeight):
+    err_code = _sdk.CameraSetImageResolutionEx(hCamera, iIndex, Mode, ModeSize, x, y, width, height, ZoomWidth, ZoomHeight)
+    SetLastError(err_code)
+    return err_code
+
+def CameraGetMediaType(hCamera):
+    piMediaType = c_int()
+    err_code =
_sdk.CameraGetMediaType(hCamera, byref(piMediaType))
+    SetLastError(err_code)
+    return piMediaType.value
+
+def CameraSetMediaType(hCamera, iMediaType):
+    err_code = _sdk.CameraSetMediaType(hCamera, iMediaType)
+    SetLastError(err_code)
+    return err_code
+
+def CameraSetAeState(hCamera, bAeState):
+    err_code = _sdk.CameraSetAeState(hCamera, bAeState)
+    SetLastError(err_code)
+    return err_code
+
+def CameraGetAeState(hCamera):
+    pAeState = c_int()
+    err_code = _sdk.CameraGetAeState(hCamera, byref(pAeState))
+    SetLastError(err_code)
+    return pAeState.value
+
+def CameraSetSharpness(hCamera, iSharpness):
+    err_code = _sdk.CameraSetSharpness(hCamera, iSharpness)
+    SetLastError(err_code)
+    return err_code
+
+def CameraGetSharpness(hCamera):
+    piSharpness = c_int()
+    err_code = _sdk.CameraGetSharpness(hCamera, byref(piSharpness))
+    SetLastError(err_code)
+    return piSharpness.value
+
+def CameraSetLutMode(hCamera, emLutMode):
+    err_code = _sdk.CameraSetLutMode(hCamera, emLutMode)
+    SetLastError(err_code)
+    return err_code
+
+def CameraGetLutMode(hCamera):
+    pemLutMode = c_int()
+    err_code = _sdk.CameraGetLutMode(hCamera, byref(pemLutMode))
+    SetLastError(err_code)
+    return pemLutMode.value
+
+def CameraSelectLutPreset(hCamera, iSel):
+    err_code = _sdk.CameraSelectLutPreset(hCamera, iSel)
+    SetLastError(err_code)
+    return err_code
+
+def CameraGetLutPresetSel(hCamera):
+    piSel = c_int()
+    err_code = _sdk.CameraGetLutPresetSel(hCamera, byref(piSel))
+    SetLastError(err_code)
+    return piSel.value
+
+def CameraSetCustomLut(hCamera, iChannel, pLut):
+    pLutNative = (c_ushort * 4096)(*pLut)
+    err_code = _sdk.CameraSetCustomLut(hCamera, iChannel, pLutNative)
+    SetLastError(err_code)
+    return err_code
+
+def CameraGetCustomLut(hCamera, iChannel):
+    pLutNative = (c_ushort * 4096)()
+    err_code = _sdk.CameraGetCustomLut(hCamera, iChannel, pLutNative)
+    SetLastError(err_code)
+    return pLutNative[:]
+
+def CameraGetCurrentLut(hCamera, iChannel):
+    pLutNative = (c_ushort * 4096)()
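The LUT wrappers marshal a 4096-entry Python sequence into a fixed-size ctypes array with `(c_ushort * 4096)(*pLut)` and unmarshal it back with a slice. That round trip can be checked without a camera; a small sketch using only `ctypes`:

```python
from ctypes import c_ushort

LUT_SIZE = 4096  # matches the fixed array size used by the LUT wrappers

# A simple identity LUT (values stay within c_ushort range for i < 4096)
lut = list(range(LUT_SIZE))

# Same marshaling as CameraSetCustomLut: Python sequence -> ctypes array
native = (c_ushort * LUT_SIZE)(*lut)

# Same unmarshaling as CameraGetCustomLut: ctypes array -> Python list
roundtrip = native[:]

print(len(roundtrip), roundtrip[100])  # -> 4096 100
```

Passing a sequence of the wrong length to `(c_ushort * 4096)(*pLut)` raises at construction time, which is why the wrappers fix the size rather than trusting the caller.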
+    err_code = _sdk.CameraGetCurrentLut(hCamera, iChannel, pLutNative)
+    SetLastError(err_code)
+    return pLutNative[:]
+
+def CameraSetWbMode(hCamera, bAuto):
+    err_code = _sdk.CameraSetWbMode(hCamera, bAuto)
+    SetLastError(err_code)
+    return err_code
+
+def CameraGetWbMode(hCamera):
+    pbAuto = c_int()
+    err_code = _sdk.CameraGetWbMode(hCamera, byref(pbAuto))
+    SetLastError(err_code)
+    return pbAuto.value
+
+def CameraSetPresetClrTemp(hCamera, iSel):
+    err_code = _sdk.CameraSetPresetClrTemp(hCamera, iSel)
+    SetLastError(err_code)
+    return err_code
+
+def CameraGetPresetClrTemp(hCamera):
+    piSel = c_int()
+    err_code = _sdk.CameraGetPresetClrTemp(hCamera, byref(piSel))
+    SetLastError(err_code)
+    return piSel.value
+
+def CameraSetUserClrTempGain(hCamera, iRgain, iGgain, iBgain):
+    err_code = _sdk.CameraSetUserClrTempGain(hCamera, iRgain, iGgain, iBgain)
+    SetLastError(err_code)
+    return err_code
+
+def CameraGetUserClrTempGain(hCamera):
+    piRgain = c_int()
+    piGgain = c_int()
+    piBgain = c_int()
+    err_code = _sdk.CameraGetUserClrTempGain(hCamera, byref(piRgain), byref(piGgain), byref(piBgain))
+    SetLastError(err_code)
+    return (piRgain.value, piGgain.value, piBgain.value)
+
+def CameraSetUserClrTempMatrix(hCamera, pMatrix):
+    pMatrixNative = (c_float * 9)(*pMatrix)
+    err_code = _sdk.CameraSetUserClrTempMatrix(hCamera, pMatrixNative)
+    SetLastError(err_code)
+    return err_code
+
+def CameraGetUserClrTempMatrix(hCamera):
+    pMatrixNative = (c_float * 9)()
+    err_code = _sdk.CameraGetUserClrTempMatrix(hCamera, pMatrixNative)
+    SetLastError(err_code)
+    return pMatrixNative[:]
+
+def CameraSetClrTempMode(hCamera, iMode):
+    err_code = _sdk.CameraSetClrTempMode(hCamera, iMode)
+    SetLastError(err_code)
+    return err_code
+
+def CameraGetClrTempMode(hCamera):
+    piMode = c_int()
+    err_code = _sdk.CameraGetClrTempMode(hCamera, byref(piMode))
+    SetLastError(err_code)
+    return piMode.value
+
+def CameraSetOnceWB(hCamera):
+    err_code = _sdk.CameraSetOnceWB(hCamera)
+    SetLastError(err_code)
+    return err_code
+
+def CameraSetOnceBB(hCamera):
+    err_code = _sdk.CameraSetOnceBB(hCamera)
+    SetLastError(err_code)
+    return err_code
+
+def CameraSetAeTarget(hCamera, iAeTarget):
+    err_code = _sdk.CameraSetAeTarget(hCamera, iAeTarget)
+    SetLastError(err_code)
+    return err_code
+
+def CameraGetAeTarget(hCamera):
+    piAeTarget = c_int()
+    err_code = _sdk.CameraGetAeTarget(hCamera, byref(piAeTarget))
+    SetLastError(err_code)
+    return piAeTarget.value
+
+def CameraSetAeExposureRange(hCamera, fMinExposureTime, fMaxExposureTime):
+    err_code = _sdk.CameraSetAeExposureRange(hCamera, c_double(fMinExposureTime), c_double(fMaxExposureTime))
+    SetLastError(err_code)
+    return err_code
+
+def CameraGetAeExposureRange(hCamera):
+    fMinExposureTime = c_double()
+    fMaxExposureTime = c_double()
+    err_code = _sdk.CameraGetAeExposureRange(hCamera, byref(fMinExposureTime), byref(fMaxExposureTime))
+    SetLastError(err_code)
+    return (fMinExposureTime.value, fMaxExposureTime.value)
+
+def CameraSetAeAnalogGainRange(hCamera, iMinAnalogGain, iMaxAnalogGain):
+    err_code = _sdk.CameraSetAeAnalogGainRange(hCamera, iMinAnalogGain, iMaxAnalogGain)
+    SetLastError(err_code)
+    return err_code
+
+def CameraGetAeAnalogGainRange(hCamera):
+    iMinAnalogGain = c_int()
+    iMaxAnalogGain = c_int()
+    err_code = _sdk.CameraGetAeAnalogGainRange(hCamera, byref(iMinAnalogGain), byref(iMaxAnalogGain))
+    SetLastError(err_code)
+    return (iMinAnalogGain.value, iMaxAnalogGain.value)
+
+def CameraSetAeThreshold(hCamera, iThreshold):
+    err_code = _sdk.CameraSetAeThreshold(hCamera, iThreshold)
+    SetLastError(err_code)
+    return err_code
+
+def CameraGetAeThreshold(hCamera):
+    iThreshold = c_int()
+    err_code = _sdk.CameraGetAeThreshold(hCamera, byref(iThreshold))
+    SetLastError(err_code)
+    return iThreshold.value
+
+def CameraSetExposureTime(hCamera, fExposureTime):
+    err_code = _sdk.CameraSetExposureTime(hCamera, c_double(fExposureTime))
+    SetLastError(err_code)
+    return err_code
+
+def
CameraGetExposureLineTime(hCamera):
+    pfLineTime = c_double()
+    err_code = _sdk.CameraGetExposureLineTime(hCamera, byref(pfLineTime))
+    SetLastError(err_code)
+    return pfLineTime.value
+
+def CameraGetExposureTime(hCamera):
+    pfExposureTime = c_double()
+    err_code = _sdk.CameraGetExposureTime(hCamera, byref(pfExposureTime))
+    SetLastError(err_code)
+    return pfExposureTime.value
+
+def CameraGetExposureTimeRange(hCamera):
+    pfMin = c_double()
+    pfMax = c_double()
+    pfStep = c_double()
+    err_code = _sdk.CameraGetExposureTimeRange(hCamera, byref(pfMin), byref(pfMax), byref(pfStep))
+    SetLastError(err_code)
+    return (pfMin.value, pfMax.value, pfStep.value)
+
+def CameraSetAnalogGain(hCamera, iAnalogGain):
+    err_code = _sdk.CameraSetAnalogGain(hCamera, iAnalogGain)
+    SetLastError(err_code)
+    return err_code
+
+def CameraGetAnalogGain(hCamera):
+    piAnalogGain = c_int()
+    err_code = _sdk.CameraGetAnalogGain(hCamera, byref(piAnalogGain))
+    SetLastError(err_code)
+    return piAnalogGain.value
+
+def CameraSetAnalogGainX(hCamera, fGain):
+    err_code = _sdk.CameraSetAnalogGainX(hCamera, c_float(fGain))
+    SetLastError(err_code)
+    return err_code
+
+def CameraGetAnalogGainX(hCamera):
+    fGain = c_float()
+    err_code = _sdk.CameraGetAnalogGainX(hCamera, byref(fGain))
+    SetLastError(err_code)
+    return fGain.value
+
+def CameraGetAnalogGainXRange(hCamera):
+    pfMin = c_float()
+    pfMax = c_float()
+    pfStep = c_float()
+    err_code = _sdk.CameraGetAnalogGainXRange(hCamera, byref(pfMin), byref(pfMax), byref(pfStep))
+    SetLastError(err_code)
+    return (pfMin.value, pfMax.value, pfStep.value)
+
+def CameraSetGain(hCamera, iRGain, iGGain, iBGain):
+    err_code = _sdk.CameraSetGain(hCamera, iRGain, iGGain, iBGain)
+    SetLastError(err_code)
+    return err_code
+
+def CameraGetGain(hCamera):
+    piRGain = c_int()
+    piGGain = c_int()
+    piBGain = c_int()
+    err_code = _sdk.CameraGetGain(hCamera, byref(piRGain), byref(piGGain), byref(piBGain))
+    SetLastError(err_code)
+    return (piRGain.value, piGGain.value,
piBGain.value)
+
+def CameraSetGamma(hCamera, iGamma):
+    err_code = _sdk.CameraSetGamma(hCamera, iGamma)
+    SetLastError(err_code)
+    return err_code
+
+def CameraGetGamma(hCamera):
+    piGamma = c_int()
+    err_code = _sdk.CameraGetGamma(hCamera, byref(piGamma))
+    SetLastError(err_code)
+    return piGamma.value
+
+def CameraSetContrast(hCamera, iContrast):
+    err_code = _sdk.CameraSetContrast(hCamera, iContrast)
+    SetLastError(err_code)
+    return err_code
+
+def CameraGetContrast(hCamera):
+    piContrast = c_int()
+    err_code = _sdk.CameraGetContrast(hCamera, byref(piContrast))
+    SetLastError(err_code)
+    return piContrast.value
+
+def CameraSetSaturation(hCamera, iSaturation):
+    err_code = _sdk.CameraSetSaturation(hCamera, iSaturation)
+    SetLastError(err_code)
+    return err_code
+
+def CameraGetSaturation(hCamera):
+    piSaturation = c_int()
+    err_code = _sdk.CameraGetSaturation(hCamera, byref(piSaturation))
+    SetLastError(err_code)
+    return piSaturation.value
+
+def CameraSetMonochrome(hCamera, bEnable):
+    err_code = _sdk.CameraSetMonochrome(hCamera, bEnable)
+    SetLastError(err_code)
+    return err_code
+
+def CameraGetMonochrome(hCamera):
+    pbEnable = c_int()
+    err_code = _sdk.CameraGetMonochrome(hCamera, byref(pbEnable))
+    SetLastError(err_code)
+    return pbEnable.value
+
+def CameraSetInverse(hCamera, bEnable):
+    err_code = _sdk.CameraSetInverse(hCamera, bEnable)
+    SetLastError(err_code)
+    return err_code
+
+def CameraGetInverse(hCamera):
+    pbEnable = c_int()
+    err_code = _sdk.CameraGetInverse(hCamera, byref(pbEnable))
+    SetLastError(err_code)
+    return pbEnable.value
+
+def CameraSetAntiFlick(hCamera, bEnable):
+    err_code = _sdk.CameraSetAntiFlick(hCamera, bEnable)
+    SetLastError(err_code)
+    return err_code
+
+def CameraGetAntiFlick(hCamera):
+    pbEnable = c_int()
+    err_code = _sdk.CameraGetAntiFlick(hCamera, byref(pbEnable))
+    SetLastError(err_code)
+    return pbEnable.value
+
+def CameraGetLightFrequency(hCamera):
+    piFrequencySel = c_int()
+    err_code =
_sdk.CameraGetLightFrequency(hCamera, byref(piFrequencySel))
+    SetLastError(err_code)
+    return piFrequencySel.value
+
+def CameraSetLightFrequency(hCamera, iFrequencySel):
+    err_code = _sdk.CameraSetLightFrequency(hCamera, iFrequencySel)
+    SetLastError(err_code)
+    return err_code
+
+def CameraSetFrameSpeed(hCamera, iFrameSpeed):
+    err_code = _sdk.CameraSetFrameSpeed(hCamera, iFrameSpeed)
+    SetLastError(err_code)
+    return err_code
+
+def CameraGetFrameSpeed(hCamera):
+    piFrameSpeed = c_int()
+    err_code = _sdk.CameraGetFrameSpeed(hCamera, byref(piFrameSpeed))
+    SetLastError(err_code)
+    return piFrameSpeed.value
+
+def CameraSetParameterMode(hCamera, iMode):
+    err_code = _sdk.CameraSetParameterMode(hCamera, iMode)
+    SetLastError(err_code)
+    return err_code
+
+def CameraGetParameterMode(hCamera):
+    piTarget = c_int()
+    err_code = _sdk.CameraGetParameterMode(hCamera, byref(piTarget))
+    SetLastError(err_code)
+    return piTarget.value
+
+def CameraSetParameterMask(hCamera, uMask):
+    err_code = _sdk.CameraSetParameterMask(hCamera, uMask)
+    SetLastError(err_code)
+    return err_code
+
+def CameraSaveParameter(hCamera, iTeam):
+    err_code = _sdk.CameraSaveParameter(hCamera, iTeam)
+    SetLastError(err_code)
+    return err_code
+
+def CameraSaveParameterToFile(hCamera, sFileName):
+    err_code = _sdk.CameraSaveParameterToFile(hCamera, _str_to_string_buffer(sFileName))
+    SetLastError(err_code)
+    return err_code
+
+def CameraReadParameterFromFile(hCamera, sFileName):
+    err_code = _sdk.CameraReadParameterFromFile(hCamera, _str_to_string_buffer(sFileName))
+    SetLastError(err_code)
+    return err_code
+
+def CameraLoadParameter(hCamera, iTeam):
+    err_code = _sdk.CameraLoadParameter(hCamera, iTeam)
+    SetLastError(err_code)
+    return err_code
+
+def CameraGetCurrentParameterGroup(hCamera):
+    piTeam = c_int()
+    err_code = _sdk.CameraGetCurrentParameterGroup(hCamera, byref(piTeam))
+    SetLastError(err_code)
+    return piTeam.value
+
+def CameraSetTransPackLen(hCamera, iPackSel):
+    err_code =
_sdk.CameraSetTransPackLen(hCamera, iPackSel)
+    SetLastError(err_code)
+    return err_code
+
+def CameraGetTransPackLen(hCamera):
+    piPackSel = c_int()
+    err_code = _sdk.CameraGetTransPackLen(hCamera, byref(piPackSel))
+    SetLastError(err_code)
+    return piPackSel.value
+
+def CameraIsAeWinVisible(hCamera):
+    pbIsVisible = c_int()
+    err_code = _sdk.CameraIsAeWinVisible(hCamera, byref(pbIsVisible))
+    SetLastError(err_code)
+    return pbIsVisible.value
+
+def CameraSetAeWinVisible(hCamera, bIsVisible):
+    err_code = _sdk.CameraSetAeWinVisible(hCamera, bIsVisible)
+    SetLastError(err_code)
+    return err_code
+
+def CameraGetAeWindow(hCamera):
+    piHOff = c_int()
+    piVOff = c_int()
+    piWidth = c_int()
+    piHeight = c_int()
+    err_code = _sdk.CameraGetAeWindow(hCamera, byref(piHOff), byref(piVOff), byref(piWidth), byref(piHeight))
+    SetLastError(err_code)
+    return (piHOff.value, piVOff.value, piWidth.value, piHeight.value)
+
+def CameraSetAeWindow(hCamera, iHOff, iVOff, iWidth, iHeight):
+    err_code = _sdk.CameraSetAeWindow(hCamera, iHOff, iVOff, iWidth, iHeight)
+    SetLastError(err_code)
+    return err_code
+
+def CameraSetMirror(hCamera, iDir, bEnable):
+    err_code = _sdk.CameraSetMirror(hCamera, iDir, bEnable)
+    SetLastError(err_code)
+    return err_code
+
+def CameraGetMirror(hCamera, iDir):
+    pbEnable = c_int()
+    err_code = _sdk.CameraGetMirror(hCamera, iDir, byref(pbEnable))
+    SetLastError(err_code)
+    return pbEnable.value
+
+def CameraSetRotate(hCamera, iRot):
+    err_code = _sdk.CameraSetRotate(hCamera, iRot)
+    SetLastError(err_code)
+    return err_code
+
+def CameraGetRotate(hCamera):
+    iRot = c_int()
+    err_code = _sdk.CameraGetRotate(hCamera, byref(iRot))
+    SetLastError(err_code)
+    return iRot.value
+
+def CameraGetWbWindow(hCamera):
+    PiHOff = c_int()
+    PiVOff = c_int()
+    PiWidth = c_int()
+    PiHeight = c_int()
+    err_code = _sdk.CameraGetWbWindow(hCamera, byref(PiHOff), byref(PiVOff), byref(PiWidth), byref(PiHeight))
+    SetLastError(err_code)
+    return (PiHOff.value, PiVOff.value,
PiWidth.value, PiHeight.value)
+
+def CameraSetWbWindow(hCamera, iHOff, iVOff, iWidth, iHeight):
+    err_code = _sdk.CameraSetWbWindow(hCamera, iHOff, iVOff, iWidth, iHeight)
+    SetLastError(err_code)
+    return err_code
+
+def CameraIsWbWinVisible(hCamera):
+    pbShow = c_int()
+    err_code = _sdk.CameraIsWbWinVisible(hCamera, byref(pbShow))
+    SetLastError(err_code)
+    return pbShow.value
+
+def CameraSetWbWinVisible(hCamera, bShow):
+    err_code = _sdk.CameraSetWbWinVisible(hCamera, bShow)
+    SetLastError(err_code)
+    return err_code
+
+def CameraImageOverlay(hCamera, pRgbBuffer, pFrInfo):
+    err_code = _sdk.CameraImageOverlay(hCamera, c_void_p(pRgbBuffer), byref(pFrInfo))
+    SetLastError(err_code)
+    return err_code
+
+def CameraSetCrossLine(hCamera, iLine, x, y, uColor, bVisible):
+    err_code = _sdk.CameraSetCrossLine(hCamera, iLine, x, y, uColor, bVisible)
+    SetLastError(err_code)
+    return err_code
+
+def CameraGetCrossLine(hCamera, iLine):
+    px = c_int()
+    py = c_int()
+    pcolor = c_uint()
+    pbVisible = c_int()
+    err_code = _sdk.CameraGetCrossLine(hCamera, iLine, byref(px), byref(py), byref(pcolor), byref(pbVisible))
+    SetLastError(err_code)
+    return (px.value, py.value, pcolor.value, pbVisible.value)
+
+def CameraGetCapability(hCamera):
+    pCameraInfo = tSdkCameraCapbility()
+    err_code = _sdk.CameraGetCapability(hCamera, byref(pCameraInfo))
+    SetLastError(err_code)
+    return pCameraInfo
+
+def CameraWriteSN(hCamera, pbySN, iLevel):
+    err_code = _sdk.CameraWriteSN(hCamera, _str_to_string_buffer(pbySN), iLevel)
+    SetLastError(err_code)
+    return err_code
+
+def CameraReadSN(hCamera, iLevel):
+    pbySN = create_string_buffer(64)
+    err_code = _sdk.CameraReadSN(hCamera, pbySN, iLevel)
+    SetLastError(err_code)
+    return _string_buffer_to_str(pbySN)
+
+def CameraSetTriggerDelayTime(hCamera, uDelayTimeUs):
+    err_code = _sdk.CameraSetTriggerDelayTime(hCamera, uDelayTimeUs)
+    SetLastError(err_code)
+    return err_code
+
+def CameraGetTriggerDelayTime(hCamera):
+    puDelayTimeUs = c_uint()
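Nearly every getter in this module follows the same ctypes out-parameter idiom: allocate a `c_int`/`c_uint`/structure, pass it to the native call via `byref`, then return the filled-in `.value` (or the structure itself, as in `CameraGetCapability`). The idiom can be demonstrated without a camera; a minimal sketch where `FrameSize` and `fake_native_get` are hypothetical stand-ins, not SDK names:

```python
from ctypes import Structure, c_int

class FrameSize(Structure):
    # Stand-in for an SDK struct such as tSdkImageResolution
    _fields_ = [("iWidth", c_int), ("iHeight", c_int)]

def fake_native_get(res):
    # Pretend native call: fills the out-parameter, returns 0 (success).
    # A real DLL call would receive byref(res) instead of the object itself.
    res.iWidth = 1280
    res.iHeight = 1024
    return 0

def get_resolution():
    res = FrameSize()               # caller allocates the out-parameter
    err_code = fake_native_get(res) # native side writes into it
    if err_code != 0:
        raise RuntimeError(err_code)
    return (res.iWidth, res.iHeight)

print(get_resolution())  # -> (1280, 1024)
```

Allocating the out-parameter on the Python side keeps ownership simple: the object lives for the duration of the call, so the native code never hands back memory that Python would have to free.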
+    err_code = _sdk.CameraGetTriggerDelayTime(hCamera, byref(puDelayTimeUs))
+    SetLastError(err_code)
+    return puDelayTimeUs.value
+
+def CameraSetTriggerCount(hCamera, iCount):
+    err_code = _sdk.CameraSetTriggerCount(hCamera, iCount)
+    SetLastError(err_code)
+    return err_code
+
+def CameraGetTriggerCount(hCamera):
+    piCount = c_int()
+    err_code = _sdk.CameraGetTriggerCount(hCamera, byref(piCount))
+    SetLastError(err_code)
+    return piCount.value
+
+def CameraSoftTrigger(hCamera):
+    err_code = _sdk.CameraSoftTrigger(hCamera)
+    SetLastError(err_code)
+    return err_code
+
+def CameraSetTriggerMode(hCamera, iModeSel):
+    err_code = _sdk.CameraSetTriggerMode(hCamera, iModeSel)
+    SetLastError(err_code)
+    return err_code
+
+def CameraGetTriggerMode(hCamera):
+    piModeSel = c_int()
+    err_code = _sdk.CameraGetTriggerMode(hCamera, byref(piModeSel))
+    SetLastError(err_code)
+    return piModeSel.value
+
+def CameraSetStrobeMode(hCamera, iMode):
+    err_code = _sdk.CameraSetStrobeMode(hCamera, iMode)
+    SetLastError(err_code)
+    return err_code
+
+def CameraGetStrobeMode(hCamera):
+    piMode = c_int()
+    err_code = _sdk.CameraGetStrobeMode(hCamera, byref(piMode))
+    SetLastError(err_code)
+    return piMode.value
+
+def CameraSetStrobeDelayTime(hCamera, uDelayTimeUs):
+    err_code = _sdk.CameraSetStrobeDelayTime(hCamera, uDelayTimeUs)
+    SetLastError(err_code)
+    return err_code
+
+def CameraGetStrobeDelayTime(hCamera):
+    upDelayTimeUs = c_uint()
+    err_code = _sdk.CameraGetStrobeDelayTime(hCamera, byref(upDelayTimeUs))
+    SetLastError(err_code)
+    return upDelayTimeUs.value
+
+def CameraSetStrobePulseWidth(hCamera, uTimeUs):
+    err_code = _sdk.CameraSetStrobePulseWidth(hCamera, uTimeUs)
+    SetLastError(err_code)
+    return err_code
+
+def CameraGetStrobePulseWidth(hCamera):
+    upTimeUs = c_uint()
+    err_code = _sdk.CameraGetStrobePulseWidth(hCamera, byref(upTimeUs))
+    SetLastError(err_code)
+    return upTimeUs.value
+
+def CameraSetStrobePolarity(hCamera, uPolarity):
+    err_code =
_sdk.CameraSetStrobePolarity(hCamera, uPolarity)
+    SetLastError(err_code)
+    return err_code
+
+def CameraGetStrobePolarity(hCamera):
+    upPolarity = c_uint()
+    err_code = _sdk.CameraGetStrobePolarity(hCamera, byref(upPolarity))
+    SetLastError(err_code)
+    return upPolarity.value
+
+def CameraSetExtTrigSignalType(hCamera, iType):
+    err_code = _sdk.CameraSetExtTrigSignalType(hCamera, iType)
+    SetLastError(err_code)
+    return err_code
+
+def CameraGetExtTrigSignalType(hCamera):
+    ipType = c_int()
+    err_code = _sdk.CameraGetExtTrigSignalType(hCamera, byref(ipType))
+    SetLastError(err_code)
+    return ipType.value
+
+def CameraSetExtTrigShutterType(hCamera, iType):
+    err_code = _sdk.CameraSetExtTrigShutterType(hCamera, iType)
+    SetLastError(err_code)
+    return err_code
+
+def CameraGetExtTrigShutterType(hCamera):
+    ipType = c_int()
+    err_code = _sdk.CameraGetExtTrigShutterType(hCamera, byref(ipType))
+    SetLastError(err_code)
+    return ipType.value
+
+def CameraSetExtTrigDelayTime(hCamera, uDelayTimeUs):
+    err_code = _sdk.CameraSetExtTrigDelayTime(hCamera, uDelayTimeUs)
+    SetLastError(err_code)
+    return err_code
+
+def CameraGetExtTrigDelayTime(hCamera):
+    upDelayTimeUs = c_uint()
+    err_code = _sdk.CameraGetExtTrigDelayTime(hCamera, byref(upDelayTimeUs))
+    SetLastError(err_code)
+    return upDelayTimeUs.value
+
+def CameraSetExtTrigJitterTime(hCamera, uTimeUs):
+    err_code = _sdk.CameraSetExtTrigJitterTime(hCamera, uTimeUs)
+    SetLastError(err_code)
+    return err_code
+
+def CameraGetExtTrigJitterTime(hCamera):
+    upTimeUs = c_uint()
+    err_code = _sdk.CameraGetExtTrigJitterTime(hCamera, byref(upTimeUs))
+    SetLastError(err_code)
+    return upTimeUs.value
+
+def CameraGetExtTrigCapability(hCamera):
+    puCapabilityMask = c_uint()
+    err_code = _sdk.CameraGetExtTrigCapability(hCamera, byref(puCapabilityMask))
+    SetLastError(err_code)
+    return puCapabilityMask.value
+
+def CameraPauseLevelTrigger(hCamera):
+    err_code = _sdk.CameraPauseLevelTrigger(hCamera)
+    SetLastError(err_code)
+    return
err_code
+
+def CameraGetResolutionForSnap(hCamera):
+    pImageResolution = tSdkImageResolution()
+    err_code = _sdk.CameraGetResolutionForSnap(hCamera, byref(pImageResolution))
+    SetLastError(err_code)
+    return pImageResolution
+
+def CameraSetResolutionForSnap(hCamera, pImageResolution):
+    err_code = _sdk.CameraSetResolutionForSnap(hCamera, byref(pImageResolution))
+    SetLastError(err_code)
+    return err_code
+
+def CameraCustomizeResolution(hCamera):
+    pImageCustom = tSdkImageResolution()
+    err_code = _sdk.CameraCustomizeResolution(hCamera, byref(pImageCustom))
+    SetLastError(err_code)
+    return pImageCustom
+
+def CameraCustomizeReferWin(hCamera, iWinType, hParent):
+    piHOff = c_int()
+    piVOff = c_int()
+    piWidth = c_int()
+    piHeight = c_int()
+    err_code = _sdk.CameraCustomizeReferWin(hCamera, iWinType, hParent, byref(piHOff), byref(piVOff), byref(piWidth), byref(piHeight))
+    SetLastError(err_code)
+    return (piHOff.value, piVOff.value, piWidth.value, piHeight.value)
+
+def CameraShowSettingPage(hCamera, bShow):
+    err_code = _sdk.CameraShowSettingPage(hCamera, bShow)
+    SetLastError(err_code)
+    return err_code
+
+def CameraCreateSettingPage(hCamera, hParent, pWinText, pCallbackFunc=None, pCallbackCtx=0, uReserved=0):
+    err_code = _sdk.CameraCreateSettingPage(hCamera, hParent, _str_to_string_buffer(pWinText), pCallbackFunc, c_void_p(pCallbackCtx), uReserved)
+    SetLastError(err_code)
+    return err_code
+
+def CameraCreateSettingPageEx(hCamera):
+    err_code = _sdk.CameraCreateSettingPageEx(hCamera)
+    SetLastError(err_code)
+    return err_code
+
+def CameraSetActiveSettingSubPage(hCamera, index):
+    err_code = _sdk.CameraSetActiveSettingSubPage(hCamera, index)
+    SetLastError(err_code)
+    return err_code
+
+def CameraSetSettingPageParent(hCamera, hParentWnd, Flags):
+    err_code = _sdk.CameraSetSettingPageParent(hCamera, hParentWnd, Flags)
+    SetLastError(err_code)
+    return err_code
+
+def CameraGetSettingPageHWnd(hCamera):
+    hWnd = c_void_p()
+    err_code =
_sdk.CameraGetSettingPageHWnd(hCamera, byref(hWnd))
+    SetLastError(err_code)
+    return hWnd.value
+
+def CameraSpecialControl(hCamera, dwCtrlCode, dwParam, lpData):
+    err_code = _sdk.CameraSpecialControl(hCamera, dwCtrlCode, dwParam, c_void_p(lpData))
+    SetLastError(err_code)
+    return err_code
+
+def CameraGetFrameStatistic(hCamera):
+    psFrameStatistic = tSdkFrameStatistic()
+    err_code = _sdk.CameraGetFrameStatistic(hCamera, byref(psFrameStatistic))
+    SetLastError(err_code)
+    return psFrameStatistic
+
+def CameraSetNoiseFilter(hCamera, bEnable):
+    err_code = _sdk.CameraSetNoiseFilter(hCamera, bEnable)
+    SetLastError(err_code)
+    return err_code
+
+def CameraGetNoiseFilterState(hCamera):
+    pEnable = c_int()
+    err_code = _sdk.CameraGetNoiseFilterState(hCamera, byref(pEnable))
+    SetLastError(err_code)
+    return pEnable.value
+
+def CameraRstTimeStamp(hCamera):
+    err_code = _sdk.CameraRstTimeStamp(hCamera)
+    SetLastError(err_code)
+    return err_code
+
+def CameraSaveUserData(hCamera, uStartAddr, pbData):
+    err_code = _sdk.CameraSaveUserData(hCamera, uStartAddr, pbData, len(pbData))
+    SetLastError(err_code)
+    return err_code
+
+def CameraLoadUserData(hCamera, uStartAddr, ilen):
+    pbData = create_string_buffer(ilen)
+    err_code = _sdk.CameraLoadUserData(hCamera, uStartAddr, pbData, ilen)
+    SetLastError(err_code)
+    return pbData[:]
+
+def CameraGetFriendlyName(hCamera):
+    pName = create_string_buffer(64)
+    err_code = _sdk.CameraGetFriendlyName(hCamera, pName)
+    SetLastError(err_code)
+    return _string_buffer_to_str(pName)
+
+def CameraSetFriendlyName(hCamera, pName):
+    pNameBuf = _str_to_string_buffer(pName)
+    resize(pNameBuf, 64)
+    err_code = _sdk.CameraSetFriendlyName(hCamera, pNameBuf)
+    SetLastError(err_code)
+    return err_code
+
+def CameraSdkGetVersionString():
+    pVersionString = create_string_buffer(64)
+    err_code = _sdk.CameraSdkGetVersionString(pVersionString)
+    SetLastError(err_code)
+    return _string_buffer_to_str(pVersionString)
+
+def CameraCheckFwUpdate(hCamera):
+    pNeedUpdate = c_int()
+    err_code = _sdk.CameraCheckFwUpdate(hCamera, byref(pNeedUpdate))
+    SetLastError(err_code)
+    return pNeedUpdate.value
+
+def CameraGetFirmwareVersion(hCamera):
+    pVersion = create_string_buffer(64)
+    err_code = _sdk.CameraGetFirmwareVersion(hCamera, pVersion)
+    SetLastError(err_code)
+    return _string_buffer_to_str(pVersion)
+
+def CameraGetEnumInfo(hCamera):
+    pCameraInfo = tSdkCameraDevInfo()
+    err_code = _sdk.CameraGetEnumInfo(hCamera, byref(pCameraInfo))
+    SetLastError(err_code)
+    return pCameraInfo
+
+def CameraGetInerfaceVersion(hCamera):
+    pVersion = create_string_buffer(64)
+    err_code = _sdk.CameraGetInerfaceVersion(hCamera, pVersion)
+    SetLastError(err_code)
+    return _string_buffer_to_str(pVersion)
+
+def CameraSetIOState(hCamera, iOutputIOIndex, uState):
+    err_code = _sdk.CameraSetIOState(hCamera, iOutputIOIndex, uState)
+    SetLastError(err_code)
+    return err_code
+
+def CameraSetIOStateEx(hCamera, iOutputIOIndex, uState):
+    err_code = _sdk.CameraSetIOStateEx(hCamera, iOutputIOIndex, uState)
+    SetLastError(err_code)
+    return err_code
+
+def CameraGetOutPutIOState(hCamera, iOutputIOIndex):
+    puState = c_int()
+    err_code = _sdk.CameraGetOutPutIOState(hCamera, iOutputIOIndex, byref(puState))
+    SetLastError(err_code)
+    return puState.value
+
+def CameraGetOutPutIOStateEx(hCamera, iOutputIOIndex):
+    puState = c_int()
+    err_code = _sdk.CameraGetOutPutIOStateEx(hCamera, iOutputIOIndex, byref(puState))
+    SetLastError(err_code)
+    return puState.value
+
+def CameraGetIOState(hCamera, iInputIOIndex):
+    puState = c_int()
+    err_code = _sdk.CameraGetIOState(hCamera, iInputIOIndex, byref(puState))
+    SetLastError(err_code)
+    return puState.value
+
+def CameraGetIOStateEx(hCamera, iInputIOIndex):
+    puState = c_int()
+    err_code = _sdk.CameraGetIOStateEx(hCamera, iInputIOIndex, byref(puState))
+    SetLastError(err_code)
+    return puState.value
+
+def CameraSetInPutIOMode(hCamera, iInputIOIndex, iMode):
+    err_code = _sdk.CameraSetInPutIOMode(hCamera,
iInputIOIndex, iMode)
+    SetLastError(err_code)
+    return err_code
+
+def CameraSetOutPutIOMode(hCamera, iOutputIOIndex, iMode):
+    err_code = _sdk.CameraSetOutPutIOMode(hCamera, iOutputIOIndex, iMode)
+    SetLastError(err_code)
+    return err_code
+
+def CameraSetOutPutPWM(hCamera, iOutputIOIndex, iCycle, uDuty):
+    err_code = _sdk.CameraSetOutPutPWM(hCamera, iOutputIOIndex, iCycle, uDuty)
+    SetLastError(err_code)
+    return err_code
+
+def CameraSetAeAlgorithm(hCamera, iIspProcessor, iAeAlgorithmSel):
+    err_code = _sdk.CameraSetAeAlgorithm(hCamera, iIspProcessor, iAeAlgorithmSel)
+    SetLastError(err_code)
+    return err_code
+
+def CameraGetAeAlgorithm(hCamera, iIspProcessor):
+    piAlgorithmSel = c_int()
+    err_code = _sdk.CameraGetAeAlgorithm(hCamera, iIspProcessor, byref(piAlgorithmSel))
+    SetLastError(err_code)
+    return piAlgorithmSel.value
+
+def CameraSetBayerDecAlgorithm(hCamera, iIspProcessor, iAlgorithmSel):
+    err_code = _sdk.CameraSetBayerDecAlgorithm(hCamera, iIspProcessor, iAlgorithmSel)
+    SetLastError(err_code)
+    return err_code
+
+def CameraGetBayerDecAlgorithm(hCamera, iIspProcessor):
+    piAlgorithmSel = c_int()
+    err_code = _sdk.CameraGetBayerDecAlgorithm(hCamera, iIspProcessor, byref(piAlgorithmSel))
+    SetLastError(err_code)
+    return piAlgorithmSel.value
+
+def CameraSetIspProcessor(hCamera, iIspProcessor):
+    err_code = _sdk.CameraSetIspProcessor(hCamera, iIspProcessor)
+    SetLastError(err_code)
+    return err_code
+
+def CameraGetIspProcessor(hCamera):
+    piIspProcessor = c_int()
+    err_code = _sdk.CameraGetIspProcessor(hCamera, byref(piIspProcessor))
+    SetLastError(err_code)
+    return piIspProcessor.value
+
+def CameraSetBlackLevel(hCamera, iBlackLevel):
+    err_code = _sdk.CameraSetBlackLevel(hCamera, iBlackLevel)
+    SetLastError(err_code)
+    return err_code
+
+def CameraGetBlackLevel(hCamera):
+    piBlackLevel = c_int()
+    err_code = _sdk.CameraGetBlackLevel(hCamera, byref(piBlackLevel))
+    SetLastError(err_code)
+    return piBlackLevel.value
+
+def
CameraSetWhiteLevel(hCamera, iWhiteLevel):
+    err_code = _sdk.CameraSetWhiteLevel(hCamera, iWhiteLevel)
+    SetLastError(err_code)
+    return err_code
+
+def CameraGetWhiteLevel(hCamera):
+    piWhiteLevel = c_int()
+    err_code = _sdk.CameraGetWhiteLevel(hCamera, byref(piWhiteLevel))
+    SetLastError(err_code)
+    return piWhiteLevel.value
+
+def CameraSetIspOutFormat(hCamera, uFormat):
+    err_code = _sdk.CameraSetIspOutFormat(hCamera, uFormat)
+    SetLastError(err_code)
+    return err_code
+
+def CameraGetIspOutFormat(hCamera):
+    puFormat = c_int()
+    err_code = _sdk.CameraGetIspOutFormat(hCamera, byref(puFormat))
+    SetLastError(err_code)
+    return puFormat.value
+
+def CameraGetErrorString(iStatusCode):
+    _sdk.CameraGetErrorString.restype = c_char_p
+    msg = _sdk.CameraGetErrorString(iStatusCode)
+    if msg:
+        return _string_buffer_to_str(msg)
+    else:
+        return ''
+
+def CameraGetImageBufferEx2(hCamera, pImageData, uOutFormat, wTimes):
+    piWidth = c_int()
+    piHeight = c_int()
+    err_code = _sdk.CameraGetImageBufferEx2(hCamera, c_void_p(pImageData), uOutFormat, byref(piWidth), byref(piHeight), wTimes)
+    SetLastError(err_code)
+    if err_code != 0:
+        raise CameraException(err_code)
+    return (piWidth.value, piHeight.value)
+
+def CameraGetImageBufferEx3(hCamera, pImageData, uOutFormat, wTimes):
+    piWidth = c_int()
+    piHeight = c_int()
+    puTimeStamp = c_int()
+    err_code = _sdk.CameraGetImageBufferEx3(hCamera, c_void_p(pImageData), uOutFormat, byref(piWidth), byref(piHeight), byref(puTimeStamp), wTimes)
+    SetLastError(err_code)
+    if err_code != 0:
+        raise CameraException(err_code)
+    return (piWidth.value, piHeight.value, puTimeStamp.value)
+
+def CameraGetCapabilityEx2(hCamera):
+    pMaxWidth = c_int()
+    pMaxHeight = c_int()
+    pbColorCamera = c_int()
+    err_code = _sdk.CameraGetCapabilityEx2(hCamera, byref(pMaxWidth), byref(pMaxHeight), byref(pbColorCamera))
+    SetLastError(err_code)
+    return (pMaxWidth.value, pMaxHeight.value, pbColorCamera.value)
+
+def CameraReConnect(hCamera):
+    err_code =
_sdk.CameraReConnect(hCamera) + SetLastError(err_code) + return err_code + +def CameraConnectTest(hCamera): + err_code = _sdk.CameraConnectTest(hCamera) + SetLastError(err_code) + return err_code + +def CameraSetLedEnable(hCamera, index, enable): + err_code = _sdk.CameraSetLedEnable(hCamera, index, enable) + SetLastError(err_code) + return err_code + +def CameraGetLedEnable(hCamera, index): + enable = c_int() + err_code = _sdk.CameraGetLedEnable(hCamera, index, byref(enable)) + SetLastError(err_code) + return enable.value + +def CameraSetLedOnOff(hCamera, index, onoff): + err_code = _sdk.CameraSetLedOnOff(hCamera, index, onoff) + SetLastError(err_code) + return err_code + +def CameraGetLedOnOff(hCamera, index): + onoff = c_int() + err_code = _sdk.CameraGetLedOnOff(hCamera, index, byref(onoff)) + SetLastError(err_code) + return onoff.value + +def CameraSetLedDuration(hCamera, index, duration): + err_code = _sdk.CameraSetLedDuration(hCamera, index, duration) + SetLastError(err_code) + return err_code + +def CameraGetLedDuration(hCamera, index): + duration = c_uint() + err_code = _sdk.CameraGetLedDuration(hCamera, index, byref(duration)) + SetLastError(err_code) + return duration.value + +def CameraSetLedBrightness(hCamera, index, uBrightness): + err_code = _sdk.CameraSetLedBrightness(hCamera, index, uBrightness) + SetLastError(err_code) + return err_code + +def CameraGetLedBrightness(hCamera, index): + uBrightness = c_uint() + err_code = _sdk.CameraGetLedBrightness(hCamera, index, byref(uBrightness)) + SetLastError(err_code) + return uBrightness.value + +def CameraEnableTransferRoi(hCamera, uEnableMask): + err_code = _sdk.CameraEnableTransferRoi(hCamera, uEnableMask) + SetLastError(err_code) + return err_code + +def CameraSetTransferRoi(hCamera, index, X1, Y1, X2, Y2): + err_code = _sdk.CameraSetTransferRoi(hCamera, index, X1, Y1, X2, Y2) + SetLastError(err_code) + return err_code + +def CameraGetTransferRoi(hCamera, index): + pX1 = c_uint() + pY1 = c_uint() + pX2 = 
c_uint() + pY2 = c_uint() + err_code = _sdk.CameraGetTransferRoi(hCamera, index, byref(pX1), byref(pY1), byref(pX2), byref(pY2)) + SetLastError(err_code) + return (pX1.value, pY1.value, pX2.value, pY2.value) + +def CameraAlignMalloc(size, align = 16): + _sdk.CameraAlignMalloc.restype = c_void_p + r = _sdk.CameraAlignMalloc(size, align) + return r + +def CameraAlignFree(membuffer): + _sdk.CameraAlignFree(c_void_p(membuffer)) + +def CameraSetAutoConnect(hCamera, bEnable): + err_code = _sdk.CameraSetAutoConnect(hCamera, bEnable) + SetLastError(err_code) + return err_code + +def CameraGetAutoConnect(hCamera): + pbEnable = c_int() + err_code = _sdk.CameraGetAutoConnect(hCamera, byref(pbEnable)) + SetLastError(err_code) + return pbEnable.value + +def CameraGetReConnectCounts(hCamera): + puCounts = c_int() + err_code = _sdk.CameraGetReConnectCounts(hCamera, byref(puCounts)) + SetLastError(err_code) + return puCounts.value + +def CameraSetSingleGrabMode(hCamera, bEnable): + err_code = _sdk.CameraSetSingleGrabMode(hCamera, bEnable) + SetLastError(err_code) + return err_code + +def CameraGetSingleGrabMode(hCamera): + pbEnable = c_int() + err_code = _sdk.CameraGetSingleGrabMode(hCamera, byref(pbEnable)) + SetLastError(err_code) + return pbEnable.value + +def CameraRestartGrab(hCamera): + err_code = _sdk.CameraRestartGrab(hCamera) + SetLastError(err_code) + return err_code + +def CameraEvaluateImageDefinition(hCamera, iAlgorithSel, pbyIn, pFrInfo): + DefinitionValue = c_double() + err_code = _sdk.CameraEvaluateImageDefinition(hCamera, iAlgorithSel, c_void_p(pbyIn), byref(pFrInfo), byref(DefinitionValue)) + SetLastError(err_code) + return DefinitionValue.value + +def CameraDrawText(pRgbBuffer, pFrInfo, pFontFileName, FontWidth, FontHeight, pText, Left, Top, Width, Height, TextColor, uFlags): + err_code = _sdk.CameraDrawText(c_void_p(pRgbBuffer), byref(pFrInfo), _str_to_string_buffer(pFontFileName), FontWidth, FontHeight, _str_to_string_buffer(pText), Left, Top, Width, Height, 
TextColor, uFlags) + SetLastError(err_code) + return err_code + +def CameraGigeEnumerateDevice(ipList, MaxCount = 32): + if type(ipList) in (list, tuple): + ipList = list(map(lambda x: _str_to_string_buffer(x), ipList)) # list(), not a map iterator: len() below fails on Python 3 otherwise + else: + ipList = (_str_to_string_buffer(ipList),) + numIP = len(ipList) + ppIpList = (c_void_p * numIP)(*map(lambda x: addressof(x), ipList)) + Nums = c_int(MaxCount) + pCameraList = (tSdkCameraDevInfo * Nums.value)() + err_code = _sdk.CameraGigeEnumerateDevice(ppIpList, numIP, pCameraList, byref(Nums)) + SetLastError(err_code) + return pCameraList[0:Nums.value] + +def CameraGigeGetIp(pCameraInfo): + CamIp = create_string_buffer(32) + CamMask = create_string_buffer(32) + CamGateWay = create_string_buffer(32) + EtIp = create_string_buffer(32) + EtMask = create_string_buffer(32) + EtGateWay = create_string_buffer(32) + err_code = _sdk.CameraGigeGetIp(byref(pCameraInfo), CamIp, CamMask, CamGateWay, EtIp, EtMask, EtGateWay) + SetLastError(err_code) + return (_string_buffer_to_str(CamIp), _string_buffer_to_str(CamMask), _string_buffer_to_str(CamGateWay), + _string_buffer_to_str(EtIp), _string_buffer_to_str(EtMask), _string_buffer_to_str(EtGateWay) ) + +def CameraGigeSetIp(pCameraInfo, Ip, SubMask, GateWay, bPersistent): + err_code = _sdk.CameraGigeSetIp(byref(pCameraInfo), + _str_to_string_buffer(Ip), _str_to_string_buffer(SubMask), _str_to_string_buffer(GateWay), bPersistent) + SetLastError(err_code) + return err_code + +def CameraGigeGetMac(pCameraInfo): + CamMac = create_string_buffer(32) + EtMac = create_string_buffer(32) + err_code = _sdk.CameraGigeGetMac(byref(pCameraInfo), CamMac, EtMac) + SetLastError(err_code) + return (_string_buffer_to_str(CamMac), _string_buffer_to_str(EtMac) ) + +def CameraEnableFastResponse(hCamera): + err_code = _sdk.CameraEnableFastResponse(hCamera) + SetLastError(err_code) + return err_code + +def CameraSetCorrectDeadPixel(hCamera, bEnable): + err_code = _sdk.CameraSetCorrectDeadPixel(hCamera, bEnable) + 
SetLastError(err_code) + return err_code + +def CameraGetCorrectDeadPixel(hCamera): + pbEnable = c_int() + err_code = _sdk.CameraGetCorrectDeadPixel(hCamera, byref(pbEnable)) + SetLastError(err_code) + return pbEnable.value + +def CameraFlatFieldingCorrectSetEnable(hCamera, bEnable): + err_code = _sdk.CameraFlatFieldingCorrectSetEnable(hCamera, bEnable) + SetLastError(err_code) + return err_code + +def CameraFlatFieldingCorrectGetEnable(hCamera): + pbEnable = c_int() + err_code = _sdk.CameraFlatFieldingCorrectGetEnable(hCamera, byref(pbEnable)) + SetLastError(err_code) + return pbEnable.value + +def CameraFlatFieldingCorrectSetParameter(hCamera, pDarkFieldingImage, pDarkFieldingFrInfo, pLightFieldingImage, pLightFieldingFrInfo): + err_code = _sdk.CameraFlatFieldingCorrectSetParameter(hCamera, c_void_p(pDarkFieldingImage), byref(pDarkFieldingFrInfo), c_void_p(pLightFieldingImage), byref(pLightFieldingFrInfo)) + SetLastError(err_code) + return err_code + +def CameraFlatFieldingCorrectGetParameterState(hCamera): + pbValid = c_int() + pFilePath = create_string_buffer(1024) + err_code = _sdk.CameraFlatFieldingCorrectGetParameterState(hCamera, byref(pbValid), pFilePath) + SetLastError(err_code) + return (pbValid.value, _string_buffer_to_str(pFilePath) ) + +def CameraFlatFieldingCorrectSaveParameterToFile(hCamera, pszFileName): + err_code = _sdk.CameraFlatFieldingCorrectSaveParameterToFile(hCamera, _str_to_string_buffer(pszFileName)) + SetLastError(err_code) + return err_code + +def CameraFlatFieldingCorrectLoadParameterFromFile(hCamera, pszFileName): + err_code = _sdk.CameraFlatFieldingCorrectLoadParameterFromFile(hCamera, _str_to_string_buffer(pszFileName)) + SetLastError(err_code) + return err_code + +def CameraCommonCall(hCamera, pszCall, uResultBufSize): + pszResult = create_string_buffer(uResultBufSize) if uResultBufSize > 0 else None + err_code = _sdk.CameraCommonCall(hCamera, _str_to_string_buffer(pszCall), pszResult, uResultBufSize) + SetLastError(err_code) + 
return _string_buffer_to_str(pszResult) if pszResult else '' + +def CameraSetDenoise3DParams(hCamera, bEnable, nCount, Weights): + assert(nCount >= 2 and nCount <= 8) + if Weights: + assert(len(Weights) == nCount) + WeightsNative = (c_float * nCount)(*Weights) + else: + WeightsNative = None + err_code = _sdk.CameraSetDenoise3DParams(hCamera, bEnable, nCount, WeightsNative) + SetLastError(err_code) + return err_code + +def CameraGetDenoise3DParams(hCamera): + bEnable = c_int() + nCount = c_int() + bUseWeight = c_int() + Weights = (c_float * 8)() + err_code = _sdk.CameraGetDenoise3DParams(hCamera, byref(bEnable), byref(nCount), byref(bUseWeight), Weights) + SetLastError(err_code) + bEnable, nCount, bUseWeight = bEnable.value, nCount.value, bUseWeight.value + if bUseWeight: + Weights = Weights[:nCount] + else: + Weights = None + return (bEnable, nCount, bUseWeight, Weights) + +def CameraManualDenoise3D(InFramesHead, InFramesData, nCount, Weights, OutFrameHead, OutFrameData): + assert(nCount > 0) + assert(len(InFramesData) == nCount) + assert(Weights is None or len(Weights) == nCount) + InFramesDataNative = (c_void_p * nCount)(*InFramesData) + WeightsNative = (c_float * nCount)(*Weights) if Weights else None + err_code = _sdk.CameraManualDenoise3D(byref(InFramesHead), InFramesDataNative, nCount, WeightsNative, byref(OutFrameHead), c_void_p(OutFrameData)) + SetLastError(err_code) + return err_code + +def CameraCustomizeDeadPixels(hCamera, hParent): + err_code = _sdk.CameraCustomizeDeadPixels(hCamera, hParent) + SetLastError(err_code) + return err_code + +def CameraReadDeadPixels(hCamera): + pNumPixel = c_int() + err_code = _sdk.CameraReadDeadPixels(hCamera, None, None, byref(pNumPixel)) + SetLastError(err_code) + if pNumPixel.value < 1: + return None + UShortArray = c_ushort * pNumPixel.value + pRows = UShortArray() + pCols = UShortArray() + err_code = _sdk.CameraReadDeadPixels(hCamera, pRows, pCols, byref(pNumPixel)) + SetLastError(err_code) + if err_code == 0: + 
pNumPixel = pNumPixel.value + else: + pNumPixel = 0 + return (pRows[:pNumPixel], pCols[:pNumPixel]) + +def CameraAddDeadPixels(hCamera, pRows, pCols, NumPixel): + UShortArray = c_ushort * NumPixel + pRowsNative = UShortArray(*pRows) + pColsNative = UShortArray(*pCols) + err_code = _sdk.CameraAddDeadPixels(hCamera, pRowsNative, pColsNative, NumPixel) + SetLastError(err_code) + return err_code + +def CameraRemoveDeadPixels(hCamera, pRows, pCols, NumPixel): + UShortArray = c_ushort * NumPixel + pRowsNative = UShortArray(*pRows) + pColsNative = UShortArray(*pCols) + err_code = _sdk.CameraRemoveDeadPixels(hCamera, pRowsNative, pColsNative, NumPixel) + SetLastError(err_code) + return err_code + +def CameraRemoveAllDeadPixels(hCamera): + err_code = _sdk.CameraRemoveAllDeadPixels(hCamera) + SetLastError(err_code) + return err_code + +def CameraSaveDeadPixels(hCamera): + err_code = _sdk.CameraSaveDeadPixels(hCamera) + SetLastError(err_code) + return err_code + +def CameraSaveDeadPixelsToFile(hCamera, sFileName): + err_code = _sdk.CameraSaveDeadPixelsToFile(hCamera, _str_to_string_buffer(sFileName)) + SetLastError(err_code) + return err_code + +def CameraLoadDeadPixelsFromFile(hCamera, sFileName): + err_code = _sdk.CameraLoadDeadPixelsFromFile(hCamera, _str_to_string_buffer(sFileName)) + SetLastError(err_code) + return err_code + +def CameraGetImageBufferPriority(hCamera, wTimes, Priority): + pFrameInfo = tSdkFrameHead() + pbyBuffer = c_void_p() + err_code = _sdk.CameraGetImageBufferPriority(hCamera, byref(pFrameInfo), byref(pbyBuffer), wTimes, Priority) + SetLastError(err_code) + if err_code != 0: + raise CameraException(err_code) + return (pbyBuffer.value, pFrameInfo) + +def CameraGetImageBufferPriorityEx(hCamera, wTimes, Priority): + _sdk.CameraGetImageBufferPriorityEx.restype = c_void_p + piWidth = c_int() + piHeight = c_int() + pFrameBuffer = _sdk.CameraGetImageBufferPriorityEx(hCamera, byref(piWidth), byref(piHeight), wTimes, Priority) + err_code = 
CAMERA_STATUS_SUCCESS if pFrameBuffer else CAMERA_STATUS_TIME_OUT + SetLastError(err_code) + if pFrameBuffer: + return (pFrameBuffer, piWidth.value, piHeight.value) + else: + raise CameraException(err_code) + +def CameraGetImageBufferPriorityEx2(hCamera, pImageData, uOutFormat, wTimes, Priority): + piWidth = c_int() + piHeight = c_int() + err_code = _sdk.CameraGetImageBufferPriorityEx2(hCamera, c_void_p(pImageData), uOutFormat, byref(piWidth), byref(piHeight), wTimes, Priority) + SetLastError(err_code) + if err_code != 0: + raise CameraException(err_code) + return (piWidth.value, piHeight.value) + +def CameraGetImageBufferPriorityEx3(hCamera, pImageData, uOutFormat, wTimes, Priority): + piWidth = c_int() + piHeight = c_int() + puTimeStamp = c_uint() + err_code = _sdk.CameraGetImageBufferPriorityEx3(hCamera, c_void_p(pImageData), uOutFormat, byref(piWidth), byref(piHeight), byref(puTimeStamp), wTimes, Priority) + SetLastError(err_code) + if err_code != 0: + raise CameraException(err_code) + return (piWidth.value, piHeight.value, puTimeStamp.value) + +def CameraClearBuffer(hCamera): + err_code = _sdk.CameraClearBuffer(hCamera) + SetLastError(err_code) + return err_code + +def CameraSoftTriggerEx(hCamera, uFlags): + err_code = _sdk.CameraSoftTriggerEx(hCamera, uFlags) + SetLastError(err_code) + return err_code + +def CameraSetHDR(hCamera, value): + err_code = _sdk.CameraSetHDR(hCamera, value) + SetLastError(err_code) + return err_code + +def CameraGetHDR(hCamera): + value = c_int() + err_code = _sdk.CameraGetHDR(hCamera, byref(value)) + SetLastError(err_code) + return value.value + +def CameraGetFrameID(hCamera): + FrameID = c_uint() + err_code = _sdk.CameraGetFrameID(hCamera, byref(FrameID)) + SetLastError(err_code) + return FrameID.value + +def CameraGetFrameTimeStamp(hCamera): + TimeStamp = c_uint64() + TimeStampL = c_uint32.from_buffer(TimeStamp) + TimeStampH = c_uint32.from_buffer(TimeStamp, 4) + err_code = _sdk.CameraGetFrameTimeStamp(hCamera, byref(TimeStampL), 
byref(TimeStampH)) + SetLastError(err_code) + return TimeStamp.value + +def CameraSetHDRGainMode(hCamera, value): + err_code = _sdk.CameraSetHDRGainMode(hCamera, value) + SetLastError(err_code) + return err_code + +def CameraGetHDRGainMode(hCamera): + value = c_int() + err_code = _sdk.CameraGetHDRGainMode(hCamera, byref(value)) + SetLastError(err_code) + return value.value + +def CameraCreateDIBitmap(hDC, pFrameBuffer, pFrameHead): + outBitmap = c_void_p() + err_code = _sdk.CameraCreateDIBitmap(hDC, c_void_p(pFrameBuffer), byref(pFrameHead), byref(outBitmap)) + SetLastError(err_code) + return outBitmap.value + +def CameraDrawFrameBuffer(pFrameBuffer, pFrameHead, hWnd, Algorithm, Mode): + err_code = _sdk.CameraDrawFrameBuffer(c_void_p(pFrameBuffer), byref(pFrameHead), c_void_p(hWnd), Algorithm, Mode) + SetLastError(err_code) + return err_code + +def CameraFlipFrameBuffer(pFrameBuffer, pFrameHead, Flags): + err_code = _sdk.CameraFlipFrameBuffer(c_void_p(pFrameBuffer), byref(pFrameHead), Flags) + SetLastError(err_code) + return err_code + +def CameraConvertFrameBufferFormat(hCamera, pInFrameBuffer, pOutFrameBuffer, outWidth, outHeight, outMediaType, pFrameHead): + err_code = _sdk.CameraConvertFrameBufferFormat(hCamera, c_void_p(pInFrameBuffer), c_void_p(pOutFrameBuffer), outWidth, outHeight, outMediaType, byref(pFrameHead)) + SetLastError(err_code) + return err_code + +def CameraSetConnectionStatusCallback(hCamera, pCallBack, pContext = 0): + err_code = _sdk.CameraSetConnectionStatusCallback(hCamera, pCallBack, c_void_p(pContext) ) + SetLastError(err_code) + return err_code + +def CameraSetLightingControllerMode(hCamera, index, mode): + err_code = _sdk.CameraSetLightingControllerMode(hCamera, index, mode) + SetLastError(err_code) + return err_code + +def CameraSetLightingControllerState(hCamera, index, state): + err_code = _sdk.CameraSetLightingControllerState(hCamera, index, state) + SetLastError(err_code) + return err_code + +def CameraSetFrameResendCount(hCamera, 
count): + err_code = _sdk.CameraSetFrameResendCount(hCamera, count) + SetLastError(err_code) + return err_code + +def CameraSetUndistortParams(hCamera, width, height, cameraMatrix, distCoeffs): + assert(len(cameraMatrix) == 4) + assert(len(distCoeffs) == 5) + cameraMatrixNative = (c_double * len(cameraMatrix))(*cameraMatrix) + distCoeffsNative = (c_double * len(distCoeffs))(*distCoeffs) + err_code = _sdk.CameraSetUndistortParams(hCamera, width, height, cameraMatrixNative, distCoeffsNative) + SetLastError(err_code) + return err_code + +def CameraGetUndistortParams(hCamera): + width = c_int() + height = c_int() + cameraMatrix = (c_double * 4)() + distCoeffs = (c_double * 5)() + err_code = _sdk.CameraGetUndistortParams(hCamera, byref(width), byref(height), cameraMatrix, distCoeffs) + SetLastError(err_code) + width, height = width.value, height.value + cameraMatrix = cameraMatrix[:] + distCoeffs = distCoeffs[:] + return (width, height, cameraMatrix, distCoeffs) + +def CameraSetUndistortEnable(hCamera, bEnable): + err_code = _sdk.CameraSetUndistortEnable(hCamera, bEnable) + SetLastError(err_code) + return err_code + +def CameraGetUndistortEnable(hCamera): + value = c_int() + err_code = _sdk.CameraGetUndistortEnable(hCamera, byref(value)) + SetLastError(err_code) + return value.value + +def CameraCustomizeUndistort(hCamera, hParent): + err_code = _sdk.CameraCustomizeUndistort(hCamera, hParent) + SetLastError(err_code) + return err_code + +def CameraGetEyeCount(hCamera): + EyeCount = c_int() + err_code = _sdk.CameraGetEyeCount(hCamera, byref(EyeCount)) + SetLastError(err_code) + return EyeCount.value + +def CameraMultiEyeImageProcess(hCamera, iEyeIndex, pbyIn, pInFrInfo, pbyOut, pOutFrInfo, uOutFormat, uReserved): + err_code = _sdk.CameraMultiEyeImageProcess(hCamera, iEyeIndex, c_void_p(pbyIn), byref(pInFrInfo), c_void_p(pbyOut), byref(pOutFrInfo), uOutFormat, uReserved) + SetLastError(err_code) + return err_code + +# CameraGrabber + +def 
CameraGrabber_CreateFromDevicePage(): + Grabber = c_void_p() + err_code = _sdk.CameraGrabber_CreateFromDevicePage(byref(Grabber)) + SetLastError(err_code) + if err_code != 0: + raise CameraException(err_code) + return Grabber.value + +def CameraGrabber_CreateByIndex(Index): + Grabber = c_void_p() + err_code = _sdk.CameraGrabber_CreateByIndex(byref(Grabber), Index) + SetLastError(err_code) + if err_code != 0: + raise CameraException(err_code) + return Grabber.value + +def CameraGrabber_CreateByName(Name): + Grabber = c_void_p() + err_code = _sdk.CameraGrabber_CreateByName(byref(Grabber), _str_to_string_buffer(Name)) + SetLastError(err_code) + if err_code != 0: + raise CameraException(err_code) + return Grabber.value + +def CameraGrabber_Create(pDevInfo): + Grabber = c_void_p() + err_code = _sdk.CameraGrabber_Create(byref(Grabber), byref(pDevInfo)) + SetLastError(err_code) + if err_code != 0: + raise CameraException(err_code) + return Grabber.value + +def CameraGrabber_Destroy(Grabber): + err_code = _sdk.CameraGrabber_Destroy(c_void_p(Grabber)) + SetLastError(err_code) + return err_code + +def CameraGrabber_SetHWnd(Grabber, hWnd): + err_code = _sdk.CameraGrabber_SetHWnd(c_void_p(Grabber), c_void_p(hWnd) ) + SetLastError(err_code) + return err_code + +def CameraGrabber_SetPriority(Grabber, Priority): + err_code = _sdk.CameraGrabber_SetPriority(c_void_p(Grabber), Priority) + SetLastError(err_code) + return err_code + +def CameraGrabber_StartLive(Grabber): + err_code = _sdk.CameraGrabber_StartLive(c_void_p(Grabber)) + SetLastError(err_code) + return err_code + +def CameraGrabber_StopLive(Grabber): + err_code = _sdk.CameraGrabber_StopLive(c_void_p(Grabber)) + SetLastError(err_code) + return err_code + +def CameraGrabber_SaveImage(Grabber, TimeOut): + Image = c_void_p() + err_code = _sdk.CameraGrabber_SaveImage(c_void_p(Grabber), byref(Image), TimeOut) + SetLastError(err_code) + if err_code != 0: + raise CameraException(err_code) + return Image.value + +def 
CameraGrabber_SaveImageAsync(Grabber): + err_code = _sdk.CameraGrabber_SaveImageAsync(c_void_p(Grabber)) + SetLastError(err_code) + return err_code + +def CameraGrabber_SaveImageAsyncEx(Grabber, UserData): + err_code = _sdk.CameraGrabber_SaveImageAsyncEx(c_void_p(Grabber), c_void_p(UserData)) + SetLastError(err_code) + return err_code + +def CameraGrabber_SetSaveImageCompleteCallback(Grabber, Callback, Context = 0): + err_code = _sdk.CameraGrabber_SetSaveImageCompleteCallback(c_void_p(Grabber), Callback, c_void_p(Context)) + SetLastError(err_code) + return err_code + +def CameraGrabber_SetFrameListener(Grabber, Listener, Context = 0): + err_code = _sdk.CameraGrabber_SetFrameListener(c_void_p(Grabber), Listener, c_void_p(Context)) + SetLastError(err_code) + return err_code + +def CameraGrabber_SetRawCallback(Grabber, Callback, Context = 0): + err_code = _sdk.CameraGrabber_SetRawCallback(c_void_p(Grabber), Callback, c_void_p(Context)) + SetLastError(err_code) + return err_code + +def CameraGrabber_SetRGBCallback(Grabber, Callback, Context = 0): + err_code = _sdk.CameraGrabber_SetRGBCallback(c_void_p(Grabber), Callback, c_void_p(Context)) + SetLastError(err_code) + return err_code + +def CameraGrabber_GetCameraHandle(Grabber): + hCamera = c_int() + err_code = _sdk.CameraGrabber_GetCameraHandle(c_void_p(Grabber), byref(hCamera)) + SetLastError(err_code) + return hCamera.value + +def CameraGrabber_GetStat(Grabber): + stat = tSdkGrabberStat() + err_code = _sdk.CameraGrabber_GetStat(c_void_p(Grabber), byref(stat)) + SetLastError(err_code) + return stat + +def CameraGrabber_GetCameraDevInfo(Grabber): + DevInfo = tSdkCameraDevInfo() + err_code = _sdk.CameraGrabber_GetCameraDevInfo(c_void_p(Grabber), byref(DevInfo)) + SetLastError(err_code) + return DevInfo + +# CameraImage + +def CameraImage_Create(pFrameBuffer, pFrameHead, bCopy): + Image = c_void_p() + err_code = _sdk.CameraImage_Create(byref(Image), c_void_p(pFrameBuffer), byref(pFrameHead), bCopy) + 
SetLastError(err_code) + return Image.value + +def CameraImage_CreateEmpty(): + Image = c_void_p() + err_code = _sdk.CameraImage_CreateEmpty(byref(Image)) + SetLastError(err_code) + return Image.value + +def CameraImage_Destroy(Image): + err_code = _sdk.CameraImage_Destroy(c_void_p(Image)) + SetLastError(err_code) + return err_code + +def CameraImage_GetData(Image): + DataBuffer = c_void_p() + HeadPtr = c_void_p() + err_code = _sdk.CameraImage_GetData(c_void_p(Image), byref(DataBuffer), byref(HeadPtr)) + SetLastError(err_code) + if err_code == 0: + return (DataBuffer.value, tSdkFrameHead.from_address(HeadPtr.value) ) + else: + return (0, None) + +def CameraImage_GetUserData(Image): + UserData = c_void_p() + err_code = _sdk.CameraImage_GetUserData(c_void_p(Image), byref(UserData)) + SetLastError(err_code) + return UserData.value + +def CameraImage_SetUserData(Image, UserData): + err_code = _sdk.CameraImage_SetUserData(c_void_p(Image), c_void_p(UserData)) + SetLastError(err_code) + return err_code + +def CameraImage_IsEmpty(Image): + IsEmpty = c_int() + err_code = _sdk.CameraImage_IsEmpty(c_void_p(Image), byref(IsEmpty)) + SetLastError(err_code) + return IsEmpty.value + +def CameraImage_Draw(Image, hWnd, Algorithm): + err_code = _sdk.CameraImage_Draw(c_void_p(Image), c_void_p(hWnd), Algorithm) + SetLastError(err_code) + return err_code + +def CameraImage_DrawFit(Image, hWnd, Algorithm): + err_code = _sdk.CameraImage_DrawFit(c_void_p(Image), c_void_p(hWnd), Algorithm) + SetLastError(err_code) + return err_code + +def CameraImage_DrawToDC(Image, hDC, Algorithm, xDst, yDst, cxDst, cyDst): + err_code = _sdk.CameraImage_DrawToDC(c_void_p(Image), c_void_p(hDC), Algorithm, xDst, yDst, cxDst, cyDst) + SetLastError(err_code) + return err_code + +def CameraImage_DrawToDCFit(Image, hDC, Algorithm, xDst, yDst, cxDst, cyDst): + err_code = _sdk.CameraImage_DrawToDCFit(c_void_p(Image), c_void_p(hDC), Algorithm, xDst, yDst, cxDst, cyDst) + SetLastError(err_code) + return err_code + 
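+# Note: the CameraGetFrameTimeStamp wrapper above relies on a non-obvious ctypes trick: the two 32-bit out-parameters it passes to the SDK are views over the low and high halves of a single c_uint64, so the 64-bit timestamp needs no manual shifting afterwards. A standalone sketch of that aliasing (assumes a little-endian host, as on the x86/ARM Linux targets this SDK is used on):

```python
from ctypes import c_uint32, c_uint64

# One 64-bit value, with two 32-bit views aliasing its storage.
ts = c_uint64()
low = c_uint32.from_buffer(ts)       # bytes 0-3: low word on little-endian
high = c_uint32.from_buffer(ts, 4)   # bytes 4-7: high word on little-endian

# Writing through the views (as the SDK does via byref) updates ts directly.
low.value = 0x89ABCDEF
high.value = 0x01234567
print(hex(ts.value))  # 0x123456789abcdef
```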
+def CameraImage_BitBlt(Image, hWnd, xDst, yDst, cxDst, cyDst, xSrc, ySrc): + err_code = _sdk.CameraImage_BitBlt(c_void_p(Image), c_void_p(hWnd), xDst, yDst, cxDst, cyDst, xSrc, ySrc) + SetLastError(err_code) + return err_code + +def CameraImage_BitBltToDC(Image, hDC, xDst, yDst, cxDst, cyDst, xSrc, ySrc): + err_code = _sdk.CameraImage_BitBltToDC(c_void_p(Image), c_void_p(hDC), xDst, yDst, cxDst, cyDst, xSrc, ySrc) + SetLastError(err_code) + return err_code + +def CameraImage_SaveAsBmp(Image, FileName): + err_code = _sdk.CameraImage_SaveAsBmp(c_void_p(Image), _str_to_string_buffer(FileName)) + SetLastError(err_code) + return err_code + +def CameraImage_SaveAsJpeg(Image, FileName, Quality): + err_code = _sdk.CameraImage_SaveAsJpeg(c_void_p(Image), _str_to_string_buffer(FileName), Quality) + SetLastError(err_code) + return err_code + +def CameraImage_SaveAsPng(Image, FileName): + err_code = _sdk.CameraImage_SaveAsPng(c_void_p(Image), _str_to_string_buffer(FileName)) + SetLastError(err_code) + return err_code + +def CameraImage_SaveAsRaw(Image, FileName, Format): + err_code = _sdk.CameraImage_SaveAsRaw(c_void_p(Image), _str_to_string_buffer(FileName), Format) + SetLastError(err_code) + return err_code + +def CameraImage_IPicture(Image): + NewPic = c_void_p() + err_code = _sdk.CameraImage_IPicture(c_void_p(Image), byref(NewPic)) + SetLastError(err_code) + return NewPic.value diff --git a/api/config.json b/api/config.json new file mode 100644 index 0000000..4f258b4 --- /dev/null +++ b/api/config.json @@ -0,0 +1,92 @@ +{ + "mqtt": { + "broker_host": "192.168.1.110", + "broker_port": 1883, + "username": null, + "password": null, + "topics": { + "vibratory_conveyor": "vision/vibratory_conveyor/state", + "blower_separator": "vision/blower_separator/state" + } + }, + "storage": { + "base_path": "/storage", + "max_file_size_mb": 1000, + "max_recording_duration_minutes": 60, + "cleanup_older_than_days": 30 + }, + "system": { + "camera_check_interval_seconds": 2, + "log_level": 
"DEBUG", + "log_file": "usda_vision_system.log", + "api_host": "0.0.0.0", + "api_port": 8000, + "enable_api": true, + "timezone": "America/New_York", + "auto_recording_enabled": true + }, + "cameras": [ + { + "name": "camera1", + "machine_topic": "blower_separator", + "storage_path": "/storage/camera1", + "exposure_ms": 0.3, + "gain": 4.0, + "target_fps": 0, + "enabled": true, + "video_format": "mp4", + "video_codec": "mp4v", + "video_quality": 95, + "auto_start_recording_enabled": true, + "auto_recording_max_retries": 3, + "auto_recording_retry_delay_seconds": 2, + "sharpness": 0, + "contrast": 100, + "saturation": 100, + "gamma": 100, + "noise_filter_enabled": false, + "denoise_3d_enabled": false, + "auto_white_balance": false, + "color_temperature_preset": 0, + "wb_red_gain": 0.94, + "wb_green_gain": 1.0, + "wb_blue_gain": 0.87, + "anti_flicker_enabled": false, + "light_frequency": 0, + "bit_depth": 8, + "hdr_enabled": false, + "hdr_gain_mode": 2 + }, + { + "name": "camera2", + "machine_topic": "vibratory_conveyor", + "storage_path": "/storage/camera2", + "exposure_ms": 0.2, + "gain": 2.0, + "target_fps": 0, + "enabled": true, + "video_format": "mp4", + "video_codec": "mp4v", + "video_quality": 95, + "auto_start_recording_enabled": true, + "auto_recording_max_retries": 3, + "auto_recording_retry_delay_seconds": 2, + "sharpness": 0, + "contrast": 100, + "saturation": 100, + "gamma": 100, + "noise_filter_enabled": false, + "denoise_3d_enabled": false, + "auto_white_balance": false, + "color_temperature_preset": 0, + "wb_red_gain": 1.01, + "wb_green_gain": 1.0, + "wb_blue_gain": 0.87, + "anti_flicker_enabled": false, + "light_frequency": 0, + "bit_depth": 8, + "hdr_enabled": false, + "hdr_gain_mode": 0 + } + ] +} \ No newline at end of file diff --git a/api/container_init.sh b/api/container_init.sh new file mode 100755 index 0000000..f7c792d --- /dev/null +++ b/api/container_init.sh @@ -0,0 +1,29 @@ +#!/bin/bash + +# Container initialization script for USDA Vision 
Camera System +# This script sets up and starts the systemd service in a container environment + +echo "🐳 Container Init - USDA Vision Camera System" +echo "=============================================" + +# Start systemd if not already running (for containers) +if ! pgrep systemd > /dev/null; then + echo "🔧 Starting systemd..." + exec /sbin/init & + sleep 5 +fi + +# Setup the service if not already installed +if [ ! -f "/etc/systemd/system/usda-vision-camera.service" ]; then + echo "📦 Setting up USDA Vision Camera service..." + cd /home/alireza/USDA-vision-cameras + sudo ./setup_service.sh +fi + +# Start the service +echo "🚀 Starting USDA Vision Camera service..." +sudo systemctl start usda-vision-camera + +# Follow the logs +echo "📋 Following service logs (Ctrl+C to exit)..." +sudo journalctl -u usda-vision-camera -f diff --git a/api/convert_avi_to_mp4.sh b/api/convert_avi_to_mp4.sh new file mode 100755 index 0000000..4be2d0c --- /dev/null +++ b/api/convert_avi_to_mp4.sh @@ -0,0 +1,182 @@ +#!/bin/bash + +# Script to convert AVI files to MP4 using H.264 codec +# Converts files in /storage directory and saves them in the same location + +# Colors for output +RED='\033[0;31m' +GREEN='\033[0;32m' +YELLOW='\033[1;33m' +BLUE='\033[0;34m' +NC='\033[0m' # No Color + +# Function to print colored output +print_status() { + echo -e "${BLUE}[INFO]${NC} $1" +} + +print_success() { + echo -e "${GREEN}[SUCCESS]${NC} $1" +} + +print_warning() { + echo -e "${YELLOW}[WARNING]${NC} $1" +} + +print_error() { + echo -e "${RED}[ERROR]${NC} $1" +} + +# Function to get video duration in seconds +get_duration() { + local file="$1" + ffprobe -v quiet -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 "$file" 2>/dev/null | cut -d. 
-f1 +} + +# Function to show progress bar +show_progress() { + local current=$1 + local total=$2 + local width=50 + local percentage=$((current * 100 / total)) + local filled=$((current * width / total)) + local empty=$((width - filled)) + + printf "\r[" + printf "%*s" $filled | tr ' ' '=' + printf "%*s" $empty | tr ' ' '-' + printf "] %d%% (%ds/%ds)" $percentage $current $total +} + +# Check if ffmpeg is installed +if ! command -v ffmpeg &> /dev/null; then + print_error "ffmpeg is not installed. Please install ffmpeg first." + exit 1 +fi + +# Check if /storage directory exists +if [ ! -d "/storage" ]; then + print_error "/storage directory does not exist." + exit 1 +fi + +# Check if we have read/write permissions to /storage +if [ ! -r "/storage" ] || [ ! -w "/storage" ]; then + print_error "No read/write permissions for /storage directory." + exit 1 +fi + +print_status "Starting AVI to MP4 conversion in /storage directory..." + +# Counter variables +total_files=0 +converted_files=0 +skipped_files=0 +failed_files=0 + +# Find all AVI files in /storage directory (including subdirectories) +while IFS= read -r -d '' avi_file; do + total_files=$((total_files + 1)) + + # Get the directory and filename without extension + dir_path=$(dirname "$avi_file") + filename=$(basename "$avi_file" .avi) + mp4_file="$dir_path/$filename.mp4" + + print_status "Processing: $avi_file" + + # Check if MP4 file already exists + if [ -f "$mp4_file" ]; then + print_warning "MP4 file already exists: $mp4_file (skipping)" + skipped_files=$((skipped_files + 1)) + continue + fi + + # Get video duration for progress calculation + duration=$(get_duration "$avi_file") + if [ -z "$duration" ] || [ "$duration" -eq 0 ]; then + print_warning "Could not determine video duration, converting without progress bar..." 
+ # Fallback to simple conversion without progress + if ffmpeg -i "$avi_file" -c:v libx264 -c:a aac -preset medium -crf 18 -nostdin "$mp4_file" -y 2>/dev/null; then + echo + print_success "Converted: $avi_file -> $mp4_file" + converted_files=$((converted_files + 1)) + else + echo + print_error "Failed to convert: $avi_file" + failed_files=$((failed_files + 1)) + fi + continue + fi + + # Convert AVI to MP4 using H.264 codec with 95% quality (CRF 18) and show progress + echo "Converting... (Duration: ${duration}s)" + + # Create a temporary file for ffmpeg progress + progress_file=$(mktemp) + + # Start ffmpeg conversion in background with progress output + ffmpeg -i "$avi_file" -c:v libx264 -c:a aac -preset medium -crf 18 \ + -progress "$progress_file" -nostats -loglevel 0 -nostdin "$mp4_file" -y & + + ffmpeg_pid=$! + + # Monitor progress + while kill -0 $ffmpeg_pid 2>/dev/null; do + if [ -f "$progress_file" ]; then + # Extract current time from progress file + current_time=$(tail -n 10 "$progress_file" 2>/dev/null | grep "out_time_ms=" | tail -n 1 | cut -d= -f2) + if [ -n "$current_time" ] && [ "$current_time" != "N/A" ]; then + # Convert microseconds to seconds + current_seconds=$((current_time / 1000000)) + if [ "$current_seconds" -gt 0 ] && [ "$current_seconds" -le "$duration" ]; then + show_progress $current_seconds $duration + fi + fi + fi + sleep 0.5 + done + + # Wait for ffmpeg to complete and get exit status + wait $ffmpeg_pid + ffmpeg_exit_code=$? 
+ + # Clean up progress file + rm -f "$progress_file" + + # Check if conversion was successful + if [ $ffmpeg_exit_code -eq 0 ] && [ -f "$mp4_file" ]; then + show_progress $duration $duration # Show 100% completion + echo + print_success "Converted: $avi_file -> $mp4_file" + converted_files=$((converted_files + 1)) + + # Optional: Remove original AVI file (uncomment the next line if you want this) + # rm "$avi_file" + else + echo + print_error "Failed to convert: $avi_file" + failed_files=$((failed_files + 1)) + # Clean up incomplete file + [ -f "$mp4_file" ] && rm "$mp4_file" + fi + + echo # Add blank line between files + +done < <(find /storage -name "*.avi" -type f -print0) + +# Print summary +echo +print_status "=== CONVERSION SUMMARY ===" +echo "Total AVI files found: $total_files" +echo "Successfully converted: $converted_files" +echo "Skipped (MP4 exists): $skipped_files" +echo "Failed conversions: $failed_files" + +if [ $total_files -eq 0 ]; then + print_warning "No AVI files found in /storage directory." +elif [ $failed_files -eq 0 ] && [ $converted_files -gt 0 ]; then + print_success "All conversions completed successfully!" +elif [ $failed_files -gt 0 ]; then + print_warning "Some conversions failed. Check the output above for details." +fi diff --git a/api/docs/AI_AGENT_VIDEO_INTEGRATION_GUIDE.md b/api/docs/AI_AGENT_VIDEO_INTEGRATION_GUIDE.md new file mode 100644 index 0000000..8901049 --- /dev/null +++ b/api/docs/AI_AGENT_VIDEO_INTEGRATION_GUIDE.md @@ -0,0 +1,415 @@ +# 🤖 AI Agent Video Integration Guide + +This guide provides comprehensive step-by-step instructions for AI agents and external systems to successfully integrate with the USDA Vision Camera System's video streaming functionality. 
+ +## 🎯 Overview + +The USDA Vision Camera System provides a complete video streaming API that allows AI agents to: +- Browse and select videos from multiple cameras +- Stream videos with seeking capabilities +- Generate thumbnails for preview +- Access video metadata and technical information + +## 🔗 API Base Configuration + +### Connection Details +```bash +# Default API Base URL +API_BASE_URL="http://localhost:8000" + +# For remote access, replace with actual server IP/hostname +API_BASE_URL="http://192.168.1.100:8000" +``` + +### Authentication +**⚠️ IMPORTANT: No authentication is currently required.** +- All endpoints are publicly accessible +- No API keys or tokens needed +- CORS is enabled for web browser integration + +## 📋 Step-by-Step Integration Workflow + +### Step 1: Verify System Connectivity +```bash +# Test basic connectivity +curl -f "${API_BASE_URL}/health" || echo "❌ System not accessible" + +# Check system status +curl "${API_BASE_URL}/system/status" +``` + +**Expected Response:** +```json +{ + "status": "healthy", + "timestamp": "2025-08-05T10:30:00Z" +} +``` + +### Step 2: List Available Videos +```bash +# Get all videos with metadata +curl "${API_BASE_URL}/videos/?include_metadata=true&limit=50" + +# Filter by specific camera +curl "${API_BASE_URL}/videos/?camera_name=camera1&include_metadata=true" + +# Filter by date range +curl "${API_BASE_URL}/videos/?start_date=2025-08-04T00:00:00&end_date=2025-08-05T23:59:59" +``` + +**Response Structure:** +```json +{ + "videos": [ + { + "file_id": "camera1_auto_blower_separator_20250804_143022.mp4", + "camera_name": "camera1", + "filename": "camera1_auto_blower_separator_20250804_143022.mp4", + "file_size_bytes": 31457280, + "format": "mp4", + "status": "completed", + "created_at": "2025-08-04T14:30:22", + "start_time": "2025-08-04T14:30:22", + "end_time": "2025-08-04T14:32:22", + "machine_trigger": "blower_separator", + "is_streamable": true, + "needs_conversion": false, + "metadata": { + 
"duration_seconds": 120.5, + "width": 1920, + "height": 1080, + "fps": 30.0, + "codec": "mp4v", + "bitrate": 5000000, + "aspect_ratio": 1.777 + } + } + ], + "total_count": 1 +} +``` + +### Step 3: Select and Validate Video +```bash +# Get detailed video information +FILE_ID="camera1_auto_blower_separator_20250804_143022.mp4" +curl "${API_BASE_URL}/videos/${FILE_ID}" + +# Validate video is playable +curl -X POST "${API_BASE_URL}/videos/${FILE_ID}/validate" + +# Get streaming technical details +curl "${API_BASE_URL}/videos/${FILE_ID}/info" +``` + +### Step 4: Generate Video Thumbnail +```bash +# Generate thumbnail at 5 seconds, 320x240 resolution +curl "${API_BASE_URL}/videos/${FILE_ID}/thumbnail?timestamp=5.0&width=320&height=240" \ + --output "thumbnail_${FILE_ID}.jpg" + +# Generate multiple thumbnails for preview +for timestamp in 1 30 60 90; do + curl "${API_BASE_URL}/videos/${FILE_ID}/thumbnail?timestamp=${timestamp}&width=160&height=120" \ + --output "preview_${timestamp}s.jpg" +done +``` + +### Step 5: Stream Video Content +```bash +# Stream entire video +curl "${API_BASE_URL}/videos/${FILE_ID}/stream" --output "video.mp4" + +# Stream specific byte range (for seeking) +curl -H "Range: bytes=0-1048575" \ + "${API_BASE_URL}/videos/${FILE_ID}/stream" \ + --output "video_chunk.mp4" + +# Test range request support +curl -I -H "Range: bytes=0-1023" \ + "${API_BASE_URL}/videos/${FILE_ID}/stream" +``` + +## 🔧 Programming Language Examples + +### Python Integration +```python +import requests +import json +from typing import List, Dict, Optional + +class USDAVideoClient: + def __init__(self, base_url: str = "http://localhost:8000"): + self.base_url = base_url.rstrip('/') + self.session = requests.Session() + + def list_videos(self, camera_name: Optional[str] = None, + include_metadata: bool = True, limit: int = 50) -> Dict: + """List available videos with optional filtering.""" + params = { + 'include_metadata': include_metadata, + 'limit': limit + } + if camera_name: 
+ params['camera_name'] = camera_name + + response = self.session.get(f"{self.base_url}/videos/", params=params) + response.raise_for_status() + return response.json() + + def get_video_info(self, file_id: str) -> Dict: + """Get detailed video information.""" + response = self.session.get(f"{self.base_url}/videos/{file_id}") + response.raise_for_status() + return response.json() + + def get_thumbnail(self, file_id: str, timestamp: float = 1.0, + width: int = 320, height: int = 240) -> bytes: + """Generate and download video thumbnail.""" + params = { + 'timestamp': timestamp, + 'width': width, + 'height': height + } + response = self.session.get( + f"{self.base_url}/videos/{file_id}/thumbnail", + params=params + ) + response.raise_for_status() + return response.content + + def stream_video_range(self, file_id: str, start_byte: int, + end_byte: int) -> bytes: + """Stream specific byte range of video.""" + headers = {'Range': f'bytes={start_byte}-{end_byte}'} + response = self.session.get( + f"{self.base_url}/videos/{file_id}/stream", + headers=headers + ) + response.raise_for_status() + return response.content + + def validate_video(self, file_id: str) -> bool: + """Validate that video is accessible and playable.""" + response = self.session.post(f"{self.base_url}/videos/{file_id}/validate") + response.raise_for_status() + return response.json().get('is_valid', False) + +# Usage example +client = USDAVideoClient("http://192.168.1.100:8000") + +# List videos from camera1 +videos = client.list_videos(camera_name="camera1") +print(f"Found {videos['total_count']} videos") + +# Select first video +if videos['videos']: + video = videos['videos'][0] + file_id = video['file_id'] + + # Validate video + if client.validate_video(file_id): + print(f"✅ Video {file_id} is valid") + + # Get thumbnail + thumbnail = client.get_thumbnail(file_id, timestamp=5.0) + with open(f"thumbnail_{file_id}.jpg", "wb") as f: + f.write(thumbnail) + + # Stream first 1MB + chunk = 
client.stream_video_range(file_id, 0, 1048575)
+        print(f"Downloaded {len(chunk)} bytes")
+```
+
+### JavaScript/Node.js Integration
+```javascript
+class USDAVideoClient {
+  constructor(baseUrl = 'http://localhost:8000') {
+    this.baseUrl = baseUrl.replace(/\/$/, '');
+  }
+
+  async listVideos(options = {}) {
+    const params = new URLSearchParams({
+      // ?? (not ||) so an explicit `includeMetadata: false` is honored
+      include_metadata: options.includeMetadata ?? true,
+      limit: options.limit || 50
+    });
+
+    if (options.cameraName) {
+      params.append('camera_name', options.cameraName);
+    }
+
+    const response = await fetch(`${this.baseUrl}/videos/?${params}`);
+    if (!response.ok) throw new Error(`HTTP ${response.status}`);
+    return response.json();
+  }
+
+  async getVideoInfo(fileId) {
+    const response = await fetch(`${this.baseUrl}/videos/${fileId}`);
+    if (!response.ok) throw new Error(`HTTP ${response.status}`);
+    return response.json();
+  }
+
+  async getThumbnail(fileId, options = {}) {
+    const params = new URLSearchParams({
+      // ?? so a thumbnail at timestamp 0 is not silently replaced by 1.0
+      timestamp: options.timestamp ?? 1.0,
+      width: options.width || 320,
+      height: options.height || 240
+    });
+
+    const response = await fetch(
+      `${this.baseUrl}/videos/${fileId}/thumbnail?${params}`
+    );
+    if (!response.ok) throw new Error(`HTTP ${response.status}`);
+    return response.blob();
+  }
+
+  async validateVideo(fileId) {
+    const response = await fetch(
+      `${this.baseUrl}/videos/${fileId}/validate`,
+      { method: 'POST' }
+    );
+    if (!response.ok) throw new Error(`HTTP ${response.status}`);
+    const result = await response.json();
+    return result.is_valid;
+  }
+
+  getStreamUrl(fileId) {
+    return `${this.baseUrl}/videos/${fileId}/stream`;
+  }
+}
+
+// Usage example
+const client = new USDAVideoClient('http://192.168.1.100:8000');
+
+async function integrateWithVideos() {
+  try {
+    // List videos
+    const videos = await client.listVideos({ cameraName: 'camera1' });
+    console.log(`Found ${videos.total_count} videos`);
+
+    if (videos.videos.length > 0) {
+      const video = videos.videos[0];
+      const fileId = 
video.file_id; + + // Validate video + const isValid = await client.validateVideo(fileId); + if (isValid) { + console.log(`✅ Video ${fileId} is valid`); + + // Get thumbnail + const thumbnail = await client.getThumbnail(fileId, { + timestamp: 5.0, + width: 320, + height: 240 + }); + + // Create video element for playback + const videoElement = document.createElement('video'); + videoElement.controls = true; + videoElement.src = client.getStreamUrl(fileId); + document.body.appendChild(videoElement); + } + } + } catch (error) { + console.error('Integration error:', error); + } +} +``` + +## 🚨 Error Handling + +### Common HTTP Status Codes +```bash +# Success responses +200 # OK - Request successful +206 # Partial Content - Range request successful + +# Client error responses +400 # Bad Request - Invalid parameters +404 # Not Found - Video file doesn't exist +416 # Range Not Satisfiable - Invalid range request + +# Server error responses +500 # Internal Server Error - Failed to process video +503 # Service Unavailable - Video module not available +``` + +### Error Response Format +```json +{ + "detail": "Video camera1_recording_20250804_143022.avi not found" +} +``` + +### Robust Error Handling Example +```python +def safe_video_operation(client, file_id): + try: + # Validate video first + if not client.validate_video(file_id): + return {"error": "Video is not valid or accessible"} + + # Get video info + video_info = client.get_video_info(file_id) + + # Check if streamable + if not video_info.get('is_streamable', False): + return {"error": "Video is not streamable"} + + return {"success": True, "video_info": video_info} + + except requests.exceptions.HTTPError as e: + if e.response.status_code == 404: + return {"error": "Video not found"} + elif e.response.status_code == 416: + return {"error": "Invalid range request"} + else: + return {"error": f"HTTP error: {e.response.status_code}"} + except requests.exceptions.ConnectionError: + return {"error": "Cannot connect to 
video server"} + except Exception as e: + return {"error": f"Unexpected error: {str(e)}"} +``` + +## ✅ Integration Checklist + +### Pre-Integration +- [ ] Verify network connectivity to USDA Vision Camera System +- [ ] Test basic API endpoints (`/health`, `/system/status`) +- [ ] Understand video file naming conventions +- [ ] Plan error handling strategy + +### Video Selection +- [ ] Implement video listing with appropriate filters +- [ ] Add video validation before processing +- [ ] Handle pagination for large video collections +- [ ] Implement caching for video metadata + +### Video Playback +- [ ] Test video streaming with range requests +- [ ] Implement thumbnail generation for previews +- [ ] Add progress tracking for video playback +- [ ] Handle different video formats (MP4, AVI) + +### Error Handling +- [ ] Handle network connectivity issues +- [ ] Manage video not found scenarios +- [ ] Deal with invalid range requests +- [ ] Implement retry logic for transient failures + +### Performance +- [ ] Use range requests for efficient seeking +- [ ] Implement client-side caching where appropriate +- [ ] Monitor bandwidth usage for video streaming +- [ ] Consider thumbnail caching for better UX + +## 🎯 Next Steps + +1. **Test Integration**: Use the provided examples to test basic connectivity +2. **Implement Error Handling**: Add robust error handling for production use +3. **Optimize Performance**: Implement caching and efficient streaming +4. **Monitor Usage**: Track API usage and performance metrics +5. **Security Review**: Consider authentication if exposing externally + +This guide provides everything needed for successful integration with the USDA Vision Camera System's video streaming functionality. The system is designed to be simple and reliable for AI agents and external systems to consume video content efficiently. 
diff --git a/api/docs/API_CHANGES_SUMMARY.md b/api/docs/API_CHANGES_SUMMARY.md new file mode 100644 index 0000000..d7af414 --- /dev/null +++ b/api/docs/API_CHANGES_SUMMARY.md @@ -0,0 +1,207 @@ +# API Changes Summary: Camera Settings and Video Format Updates + +## Overview +This document tracks major API changes including camera settings enhancements and the MP4 video format update. + +## 🎥 Latest Update: MP4 Video Format (v2.1) +**Date**: August 2025 + +**Major Changes**: +- **Video Format**: Changed from AVI/XVID to MP4/MPEG-4 format +- **File Extensions**: New recordings use `.mp4` instead of `.avi` +- **File Size**: ~40% reduction in file sizes +- **Streaming**: Better web browser compatibility + +**New Configuration Fields**: +```json +{ + "video_format": "mp4", // File format: "mp4" or "avi" + "video_codec": "mp4v", // Video codec: "mp4v", "XVID", "MJPG" + "video_quality": 95 // Quality: 0-100 (higher = better) +} +``` + +**Frontend Impact**: +- ✅ Better streaming performance and browser support +- ✅ Smaller file sizes for faster transfers +- ✅ Universal HTML5 video player compatibility +- ✅ Backward compatible with existing AVI files + +**Documentation**: See [MP4 Format Update Guide](MP4_FORMAT_UPDATE.md) + +--- + +## Previous Changes: Camera Settings and Filename Handling + +Enhanced the `POST /cameras/{camera_name}/start-recording` API endpoint to accept optional camera settings (shutter speed/exposure, gain, and fps) and ensure all filenames have datetime prefixes. + +## Changes Made + +### 1. API Models (`usda_vision_system/api/models.py`) +- **Enhanced `StartRecordingRequest`** to include optional parameters: + - `exposure_ms: Optional[float]` - Exposure time in milliseconds + - `gain: Optional[float]` - Camera gain value + - `fps: Optional[float]` - Target frames per second + +### 2. 
Camera Recorder (`usda_vision_system/camera/recorder.py`) +- **Added `update_camera_settings()` method** to dynamically update camera settings: + - Updates exposure time using `mvsdk.CameraSetExposureTime()` + - Updates gain using `mvsdk.CameraSetAnalogGain()` + - Updates target FPS in camera configuration + - Logs all setting changes + - Returns boolean indicating success/failure + +### 3. Camera Manager (`usda_vision_system/camera/manager.py`) +- **Enhanced `manual_start_recording()` method** to accept new parameters: + - Added optional `exposure_ms`, `gain`, and `fps` parameters + - Calls `update_camera_settings()` if any settings are provided + - **Automatic datetime prefix**: Always prepends timestamp to filename + - If custom filename provided: `{timestamp}_{custom_filename}` + - If no filename provided: `{camera_name}_manual_{timestamp}.avi` + +### 4. API Server (`usda_vision_system/api/server.py`) +- **Updated start-recording endpoint** to: + - Pass new camera settings to camera manager + - Handle filename response with datetime prefix + - Maintain backward compatibility with existing requests + +### 5. 
API Tests (`api-tests.http`) +- **Added comprehensive test examples**: + - Basic recording (existing functionality) + - Recording with camera settings + - Recording with settings only (no filename) + - Different parameter combinations + +## Usage Examples + +### Basic Recording (unchanged) +```http +POST http://localhost:8000/cameras/camera1/start-recording +Content-Type: application/json + +{ + "camera_name": "camera1", + "filename": "test.avi" +} +``` +**Result**: File saved as `20241223_143022_test.avi` + +### Recording with Camera Settings +```http +POST http://localhost:8000/cameras/camera1/start-recording +Content-Type: application/json + +{ + "camera_name": "camera1", + "filename": "high_quality.avi", + "exposure_ms": 2.0, + "gain": 4.0, + "fps": 5.0 +} +``` +**Result**: +- Camera settings updated before recording +- File saved as `20241223_143022_high_quality.avi` + +### Maximum FPS Recording +```http +POST http://localhost:8000/cameras/camera1/start-recording +Content-Type: application/json + +{ + "camera_name": "camera1", + "filename": "max_speed.avi", + "exposure_ms": 0.1, + "gain": 1.0, + "fps": 0 +} +``` +**Result**: +- Camera captures at maximum possible speed (no delay between frames) +- Video file saved with 30 FPS metadata for proper playback +- Actual capture rate depends on camera hardware and exposure settings + +### Settings Only (no filename) +```http +POST http://localhost:8000/cameras/camera1/start-recording +Content-Type: application/json + +{ + "camera_name": "camera1", + "exposure_ms": 1.5, + "gain": 3.0, + "fps": 7.0 +} +``` +**Result**: +- Camera settings updated +- File saved as `camera1_manual_20241223_143022.avi` + +## Key Features + +### 1. **Backward Compatibility** +- All existing API calls continue to work unchanged +- New parameters are optional +- Default behavior preserved when no settings provided + +### 2. 
**Automatic Datetime Prefix** +- **ALL filenames now have datetime prefix** regardless of what's sent +- Format: `YYYYMMDD_HHMMSS_` (Atlanta timezone) +- Ensures unique filenames and chronological ordering + +### 3. **Dynamic Camera Settings** +- Settings can be changed per recording without restarting system +- Based on proven implementation from `old tests/camera_video_recorder.py` +- Proper error handling and logging + +### 4. **Maximum FPS Capture** +- **`fps: 0`** = Capture at maximum possible speed (no delay between frames) +- **`fps > 0`** = Capture at specified frame rate with controlled timing +- **`fps` omitted** = Uses camera config default (usually 3.0 fps) +- Video files saved with 30 FPS metadata when fps=0 for proper playback + +### 5. **Parameter Validation** +- Uses Pydantic models for automatic validation +- Optional parameters with proper type checking +- Descriptive field documentation + +## Testing + +Run the test script to verify functionality: +```bash +# Start the system first +python main.py + +# In another terminal, run tests +python test_api_changes.py +``` + +The test script verifies: +- Basic recording functionality +- Camera settings application +- Filename datetime prefix handling +- API response accuracy + +## Implementation Notes + +### Camera Settings Mapping +- **Exposure**: Converted from milliseconds to microseconds for SDK +- **Gain**: Converted to camera units (multiplied by 100) +- **FPS**: Stored in camera config, used by recording loop + +### Error Handling +- Settings update failures are logged but don't prevent recording +- Invalid camera names return appropriate HTTP errors +- Camera initialization failures are handled gracefully + +### Filename Generation +- Uses `format_filename_timestamp()` from timezone utilities +- Ensures Atlanta timezone consistency +- Handles both custom and auto-generated filenames + +## Similar to Old Implementation +The camera settings functionality mirrors the proven approach in `old 
tests/camera_video_recorder.py`: +- Same parameter names and ranges +- Same SDK function calls +- Same conversion factors +- Proven to work with the camera hardware diff --git a/api/docs/API_DOCUMENTATION.md b/api/docs/API_DOCUMENTATION.md new file mode 100644 index 0000000..81ac03f --- /dev/null +++ b/api/docs/API_DOCUMENTATION.md @@ -0,0 +1,824 @@ +# 🚀 USDA Vision Camera System - Complete API Documentation + +This document provides comprehensive documentation for all API endpoints in the USDA Vision Camera System, including recent enhancements and new features. + +## 📋 Table of Contents + +- [🔧 System Status & Health](#-system-status--health) +- [📷 Camera Management](#-camera-management) +- [🎥 Recording Control](#-recording-control) +- [🤖 Auto-Recording Management](#-auto-recording-management) +- [🎛️ Camera Configuration](#️-camera-configuration) +- [📡 MQTT & Machine Status](#-mqtt--machine-status) +- [💾 Storage & File Management](#-storage--file-management) +- [🔄 Camera Recovery & Diagnostics](#-camera-recovery--diagnostics) +- [📺 Live Streaming](#-live-streaming) +- [🎬 Video Streaming & Playback](#-video-streaming--playback) +- [🌐 WebSocket Real-time Updates](#-websocket-real-time-updates) + +## 🔧 System Status & Health + +### Get System Status +```http +GET /system/status +``` +**Response**: `SystemStatusResponse` +```json +{ + "system_started": true, + "mqtt_connected": true, + "last_mqtt_message": "2024-01-15T10:30:00Z", + "machines": { + "vibratory_conveyor": { + "name": "vibratory_conveyor", + "state": "ON", + "last_updated": "2024-01-15T10:30:00Z" + } + }, + "cameras": { + "camera1": { + "name": "camera1", + "status": "ACTIVE", + "is_recording": false, + "auto_recording_enabled": true + } + }, + "active_recordings": 0, + "total_recordings": 15, + "uptime_seconds": 3600.5 +} +``` + +### Health Check +```http +GET /health +``` +**Response**: Simple health status +```json +{ + "status": "healthy", + "timestamp": "2024-01-15T10:30:00Z" +} +``` + +## 📷 Camera 
Management + +### Get All Cameras +```http +GET /cameras +``` +**Response**: `Dict[str, CameraStatusResponse]` + +### Get Specific Camera Status +```http +GET /cameras/{camera_name}/status +``` +**Response**: `CameraStatusResponse` +```json +{ + "name": "camera1", + "status": "ACTIVE", + "is_recording": false, + "last_checked": "2024-01-15T10:30:00Z", + "last_error": null, + "device_info": { + "model": "GigE Camera", + "serial": "12345" + }, + "current_recording_file": null, + "recording_start_time": null, + "auto_recording_enabled": true, + "auto_recording_active": false, + "auto_recording_failure_count": 0, + "auto_recording_last_attempt": null, + "auto_recording_last_error": null +} +``` + +## 🎥 Recording Control + +### Start Recording +```http +POST /cameras/{camera_name}/start-recording +Content-Type: application/json + +{ + "filename": "test_recording.avi", + "exposure_ms": 2.0, + "gain": 4.0, + "fps": 5.0 +} +``` + +**Request Model**: `StartRecordingRequest` +- `filename` (optional): Custom filename (datetime prefix will be added automatically) +- `exposure_ms` (optional): Exposure time in milliseconds +- `gain` (optional): Camera gain value +- `fps` (optional): Target frames per second + +**Response**: `StartRecordingResponse` +```json +{ + "success": true, + "message": "Recording started for camera1", + "filename": "20240115_103000_test_recording.avi" +} +``` + +**Key Features**: +- ✅ **Automatic datetime prefix**: All filenames get `YYYYMMDD_HHMMSS_` prefix +- ✅ **Dynamic camera settings**: Adjust exposure, gain, and FPS per recording +- ✅ **Backward compatibility**: All existing API calls work unchanged + +### Stop Recording +```http +POST /cameras/{camera_name}/stop-recording +``` +**Response**: `StopRecordingResponse` +```json +{ + "success": true, + "message": "Recording stopped for camera1", + "duration_seconds": 45.2 +} +``` + +## 🤖 Auto-Recording Management + +### Enable Auto-Recording for Camera +```http +POST 
/cameras/{camera_name}/auto-recording/enable +``` +**Response**: `AutoRecordingConfigResponse` +```json +{ + "success": true, + "message": "Auto-recording enabled for camera1", + "camera_name": "camera1", + "enabled": true +} +``` + +### Disable Auto-Recording for Camera +```http +POST /cameras/{camera_name}/auto-recording/disable +``` +**Response**: `AutoRecordingConfigResponse` + +### Get Auto-Recording Status +```http +GET /auto-recording/status +``` +**Response**: `AutoRecordingStatusResponse` +```json +{ + "running": true, + "auto_recording_enabled": true, + "retry_queue": {}, + "enabled_cameras": ["camera1", "camera2"] +} +``` + +**Auto-Recording Features**: +- 🤖 **MQTT-triggered recording**: Automatically starts/stops based on machine state +- 🔄 **Retry logic**: Failed recordings are retried with configurable delays +- 📊 **Per-camera control**: Enable/disable auto-recording individually +- 📈 **Status tracking**: Monitor failure counts and last attempts + +## 🎛️ Camera Configuration + +### Get Camera Configuration +```http +GET /cameras/{camera_name}/config +``` +**Response**: `CameraConfigResponse` +```json +{ + "name": "camera1", + "machine_topic": "blower_separator", + "storage_path": "/storage/camera1", + "exposure_ms": 0.3, + "gain": 4.0, + "target_fps": 0, + "enabled": true, + "video_format": "mp4", + "video_codec": "mp4v", + "video_quality": 95, + "auto_start_recording_enabled": true, + "auto_recording_max_retries": 3, + "auto_recording_retry_delay_seconds": 2, + "contrast": 100, + "saturation": 100, + "gamma": 100, + "noise_filter_enabled": false, + "denoise_3d_enabled": false, + "auto_white_balance": false, + "color_temperature_preset": 0, + "wb_red_gain": 0.94, + "wb_green_gain": 1.0, + "wb_blue_gain": 0.87, + "anti_flicker_enabled": false, + "light_frequency": 0, + "bit_depth": 8, + "hdr_enabled": false, + "hdr_gain_mode": 2 +} +``` + +### Update Camera Configuration +```http +PUT /cameras/{camera_name}/config +Content-Type: application/json + +{ + 
"exposure_ms": 2.0, + "gain": 4.0, + "target_fps": 5.0, + "sharpness": 130 +} +``` + +### Apply Configuration (Restart Required) +```http +POST /cameras/{camera_name}/apply-config +``` + +**Configuration Categories**: +- ✅ **Real-time**: `exposure_ms`, `gain`, `target_fps`, `sharpness`, `contrast`, etc. +- ⚠️ **Restart required**: `noise_filter_enabled`, `denoise_3d_enabled`, `bit_depth`, `video_format`, `video_codec`, `video_quality` + +For detailed configuration options, see [Camera Configuration API Guide](api/CAMERA_CONFIG_API.md). + +## 📡 MQTT & Machine Status + +### Get All Machines +```http +GET /machines +``` +**Response**: `Dict[str, MachineStatusResponse]` + +### Get MQTT Status +```http +GET /mqtt/status +``` +**Response**: `MQTTStatusResponse` +```json +{ + "connected": true, + "broker_host": "192.168.1.110", + "broker_port": 1883, + "subscribed_topics": ["vibratory_conveyor", "blower_separator"], + "last_message_time": "2024-01-15T10:30:00Z", + "message_count": 1250, + "error_count": 2, + "uptime_seconds": 3600.5 +} +``` + +### Get MQTT Events History +```http +GET /mqtt/events?limit=10 +``` +**Response**: `MQTTEventsHistoryResponse` +```json +{ + "events": [ + { + "machine_name": "vibratory_conveyor", + "topic": "vibratory_conveyor", + "payload": "ON", + "normalized_state": "ON", + "timestamp": "2024-01-15T10:30:00Z", + "message_number": 1250 + } + ], + "total_events": 1250, + "last_updated": "2024-01-15T10:30:00Z" +} +``` + +## 💾 Storage & File Management + +### Get Storage Statistics +```http +GET /storage/stats +``` +**Response**: `StorageStatsResponse` +```json +{ + "base_path": "/storage", + "total_files": 150, + "total_size_bytes": 5368709120, + "cameras": { + "camera1": { + "file_count": 75, + "total_size_bytes": 2684354560 + }, + "camera2": { + "file_count": 75, + "total_size_bytes": 2684354560 + } + }, + "disk_usage": { + "total_bytes": 107374182400, + "used_bytes": 53687091200, + "free_bytes": 53687091200, + "usage_percent": 50.0 + } +} +``` 
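Client dashboards often need the byte counts above in readable form. The helper below is an illustrative sketch, not part of the API; the field names match the example `StorageStatsResponse` shown above:

```python
def human_bytes(n: int) -> str:
    """Render a byte count with binary-prefix units."""
    units = ["B", "KiB", "MiB", "GiB", "TiB"]
    value = float(n)
    for unit in units:
        if value < 1024 or unit == units[-1]:
            return f"{value:.1f} {unit}"
        value /= 1024

def summarize_storage(stats: dict) -> str:
    """One-line-per-camera summary of a /storage/stats payload."""
    disk = stats["disk_usage"]
    lines = [f"{stats['base_path']}: {stats['total_files']} files, "
             f"{human_bytes(stats['total_size_bytes'])} "
             f"({disk['usage_percent']:.0f}% of disk used)"]
    for name, cam in sorted(stats["cameras"].items()):
        lines.append(f"  {name}: {cam['file_count']} files, "
                     f"{human_bytes(cam['total_size_bytes'])}")
    return "\n".join(lines)

print(human_bytes(31457280))  # 30.0 MiB
```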
+ +### Get File List +```http +POST /storage/files +Content-Type: application/json + +{ + "camera_name": "camera1", + "start_date": "2024-01-15", + "end_date": "2024-01-16", + "limit": 50 +} +``` +**Response**: `FileListResponse` +```json +{ + "files": [ + { + "filename": "20240115_103000_test_recording.avi", + "camera_name": "camera1", + "size_bytes": 52428800, + "created_time": "2024-01-15T10:30:00Z", + "duration_seconds": 30.5 + } + ], + "total_count": 1 +} +``` + +### Cleanup Old Files +```http +POST /storage/cleanup +Content-Type: application/json + +{ + "max_age_days": 30 +} +``` +**Response**: `CleanupResponse` +```json +{ + "files_removed": 25, + "bytes_freed": 1073741824, + "errors": [] +} +``` + +## 🔄 Camera Recovery & Diagnostics + +### Test Camera Connection +```http +POST /cameras/{camera_name}/test-connection +``` +**Response**: `CameraTestResponse` + +### Reconnect Camera +```http +POST /cameras/{camera_name}/reconnect +``` +**Response**: `CameraRecoveryResponse` + +### Restart Camera Grab Process +```http +POST /cameras/{camera_name}/restart-grab +``` +**Response**: `CameraRecoveryResponse` + +### Reset Camera Timestamp +```http +POST /cameras/{camera_name}/reset-timestamp +``` +**Response**: `CameraRecoveryResponse` + +### Full Camera Reset +```http +POST /cameras/{camera_name}/full-reset +``` +**Response**: `CameraRecoveryResponse` + +### Reinitialize Camera +```http +POST /cameras/{camera_name}/reinitialize +``` +**Response**: `CameraRecoveryResponse` + +**Recovery Response Example**: +```json +{ + "success": true, + "message": "Camera camera1 reconnected successfully", + "camera_name": "camera1", + "operation": "reconnect", + "timestamp": "2024-01-15T10:30:00Z" +} +``` + +## 📺 Live Streaming + +### Get Live MJPEG Stream +```http +GET /cameras/{camera_name}/stream +``` +**Response**: MJPEG video stream (multipart/x-mixed-replace) + +### Start Camera Stream +```http +POST /cameras/{camera_name}/start-stream +``` + +### Stop Camera Stream +```http 
+POST /cameras/{camera_name}/stop-stream +``` + +**Streaming Features**: +- 📺 **MJPEG format**: Compatible with web browsers and React apps +- 🔄 **Concurrent operation**: Stream while recording simultaneously +- ⚡ **Low latency**: Real-time preview for monitoring + +For detailed streaming integration, see [Streaming Guide](guides/STREAMING_GUIDE.md). + +## 🎬 Video Streaming & Playback + +The system includes a comprehensive video streaming module that provides YouTube-like video playback capabilities with HTTP range request support, thumbnail generation, and intelligent caching. + +### List Videos +```http +GET /videos/ +``` +**Query Parameters:** +- `camera_name` (optional): Filter by camera name +- `start_date` (optional): Filter videos created after this date (ISO format) +- `end_date` (optional): Filter videos created before this date (ISO format) +- `limit` (optional): Maximum number of results (default: 50, max: 1000) +- `include_metadata` (optional): Include video metadata (default: false) + +**Response**: `VideoListResponse` +```json +{ + "videos": [ + { + "file_id": "camera1_auto_blower_separator_20250804_143022.mp4", + "camera_name": "camera1", + "filename": "camera1_auto_blower_separator_20250804_143022.mp4", + "file_size_bytes": 31457280, + "format": "mp4", + "status": "completed", + "created_at": "2025-08-04T14:30:22", + "start_time": "2025-08-04T14:30:22", + "end_time": "2025-08-04T14:32:22", + "machine_trigger": "blower_separator", + "is_streamable": true, + "needs_conversion": false, + "metadata": { + "duration_seconds": 120.5, + "width": 1920, + "height": 1080, + "fps": 30.0, + "codec": "mp4v", + "bitrate": 5000000, + "aspect_ratio": 1.777 + } + } + ], + "total_count": 1 +} +``` + +### Get Video Information +```http +GET /videos/{file_id} +``` +**Response**: `VideoInfoResponse` with detailed video information including metadata. 
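The video endpoints support standard HTTP range requests (status 206, inclusive byte ranges, `Content-Range` headers). A client that wants to fetch a video in fixed-size chunks only needs a little byte arithmetic, sketched here with the 256 KiB chunk size from this document's example responses — an assumption, not a server guarantee:

```python
def plan_ranges(file_size: int, chunk_size: int = 262144) -> list[str]:
    """Build Range header values that cover a file in fixed-size chunks.

    chunk_size defaults to 256 KiB, matching the chunk_size_bytes value
    in the example responses (an assumption, not a server guarantee).
    """
    headers = []
    for start in range(0, file_size, chunk_size):
        end = min(start + chunk_size, file_size) - 1  # Range ends are inclusive
        headers.append(f"bytes={start}-{end}")
    return headers

def content_range(start: int, end: int, total: int) -> str:
    """The Content-Range value a 206 response is expected to carry."""
    return f"bytes {start}-{end}/{total}"

print(plan_ranges(1_000_000)[0])  # bytes=0-262143
```

Requesting a range whose start is at or beyond `file_size_bytes` should yield `416 Range Not Satisfiable`.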
+ +### Stream Video +```http +GET /videos/{file_id}/stream +``` +**Headers:** +- `Range: bytes=0-1023` (optional): Request specific byte range for seeking + +**Features:** +- ✅ **HTTP Range Requests**: Enables video seeking and progressive download +- ✅ **Partial Content**: Returns 206 status for range requests +- ✅ **Format Conversion**: Automatic AVI to MP4 conversion for web compatibility +- ✅ **Intelligent Caching**: Optimized performance with byte-range caching +- ✅ **CORS Enabled**: Ready for web browser integration + +**Response Headers:** +- `Accept-Ranges: bytes` +- `Content-Length: {size}` +- `Content-Range: bytes {start}-{end}/{total}` (for range requests) +- `Cache-Control: public, max-age=3600` + +### Get Video Thumbnail +```http +GET /videos/{file_id}/thumbnail?timestamp=5.0&width=320&height=240 +``` +**Query Parameters:** +- `timestamp` (optional): Time position in seconds (default: 1.0) +- `width` (optional): Thumbnail width in pixels (default: 320) +- `height` (optional): Thumbnail height in pixels (default: 240) + +**Response**: JPEG image data with caching headers + +### Get Streaming Information +```http +GET /videos/{file_id}/info +``` +**Response**: `StreamingInfoResponse` +```json +{ + "file_id": "camera1_recording_20250804_143022.avi", + "file_size_bytes": 52428800, + "content_type": "video/mp4", + "supports_range_requests": true, + "chunk_size_bytes": 262144 +} +``` + +### Video Validation +```http +POST /videos/{file_id}/validate +``` +**Response**: Validation status and accessibility check +```json +{ + "file_id": "camera1_recording_20250804_143022.avi", + "is_valid": true +} +``` + +### Cache Management +```http +POST /videos/{file_id}/cache/invalidate +``` +**Response**: Cache invalidation status +```json +{ + "file_id": "camera1_recording_20250804_143022.avi", + "cache_invalidated": true +} +``` + +### Admin: Cache Cleanup +```http +POST /admin/videos/cache/cleanup?max_size_mb=100 +``` +**Response**: Cache cleanup results +```json +{ + 
"cache_cleaned": true, + "entries_removed": 15, + "max_size_mb": 100 +} +``` + +**Video Streaming Features**: +- 🎥 **Multiple Formats**: Native MP4 support with AVI conversion +- 📱 **Web Compatible**: Direct integration with HTML5 video elements +- ⚡ **High Performance**: Intelligent caching and adaptive chunking +- 🖼️ **Thumbnail Generation**: Extract preview images at any timestamp +- 🔄 **Range Requests**: Efficient seeking and progressive download + +## 🌐 WebSocket Real-time Updates + +### Connect to WebSocket +```javascript +const ws = new WebSocket('ws://localhost:8000/ws'); + +ws.onmessage = (event) => { + const update = JSON.parse(event.data); + console.log('Real-time update:', update); +}; +``` + +**WebSocket Message Types**: +- `system_status`: System status changes +- `camera_status`: Camera status updates +- `recording_started`: Recording start events +- `recording_stopped`: Recording stop events +- `mqtt_message`: MQTT message received +- `auto_recording_event`: Auto-recording status changes + +**Example WebSocket Message**: +```json +{ + "type": "recording_started", + "data": { + "camera_name": "camera1", + "filename": "20240115_103000_auto_recording.avi", + "timestamp": "2024-01-15T10:30:00Z" + }, + "timestamp": "2024-01-15T10:30:00Z" +} +``` + +## 🚀 Quick Start Examples + +### Basic System Monitoring +```bash +# Check system health +curl http://localhost:8000/health + +# Get overall system status +curl http://localhost:8000/system/status + +# Get all camera statuses +curl http://localhost:8000/cameras +``` + +### Manual Recording Control +```bash +# Start recording with default settings +curl -X POST http://localhost:8000/cameras/camera1/start-recording \ + -H "Content-Type: application/json" \ + -d '{"filename": "manual_test.avi"}' + +# Start recording with custom camera settings +curl -X POST http://localhost:8000/cameras/camera1/start-recording \ + -H "Content-Type: application/json" \ + -d '{ + "filename": "high_quality.avi", + "exposure_ms": 
2.0, + "gain": 4.0, + "fps": 5.0 + }' + +# Stop recording +curl -X POST http://localhost:8000/cameras/camera1/stop-recording +``` + +### Auto-Recording Management +```bash +# Enable auto-recording for camera1 +curl -X POST http://localhost:8000/cameras/camera1/auto-recording/enable + +# Check auto-recording status +curl http://localhost:8000/auto-recording/status + +# Disable auto-recording for camera1 +curl -X POST http://localhost:8000/cameras/camera1/auto-recording/disable +``` + +### Video Streaming Operations +```bash +# List all videos +curl http://localhost:8000/videos/ + +# List videos from specific camera with metadata +curl "http://localhost:8000/videos/?camera_name=camera1&include_metadata=true" + +# Get video information +curl http://localhost:8000/videos/camera1_recording_20250804_143022.avi + +# Get video thumbnail +curl "http://localhost:8000/videos/camera1_recording_20250804_143022.avi/thumbnail?timestamp=5.0&width=320&height=240" \ + --output thumbnail.jpg + +# Get streaming info +curl http://localhost:8000/videos/camera1_recording_20250804_143022.avi/info + +# Stream video with range request +curl -H "Range: bytes=0-1023" \ + http://localhost:8000/videos/camera1_recording_20250804_143022.avi/stream + +# Validate video file +curl -X POST http://localhost:8000/videos/camera1_recording_20250804_143022.avi/validate + +# Clean up video cache (admin) +curl -X POST "http://localhost:8000/admin/videos/cache/cleanup?max_size_mb=100" +``` + +### Camera Configuration +```bash +# Get current camera configuration +curl http://localhost:8000/cameras/camera1/config + +# Update camera settings (real-time) +curl -X PUT http://localhost:8000/cameras/camera1/config \ + -H "Content-Type: application/json" \ + -d '{ + "exposure_ms": 1.5, + "gain": 3.0, + "sharpness": 130, + "contrast": 120 + }' +``` + +## 📈 Recent API Changes & Enhancements + +### ✨ New in Latest Version + +#### 1. 
Enhanced Recording API +- **Dynamic camera settings**: Set exposure, gain, and FPS per recording +- **Automatic datetime prefixes**: All filenames get timestamp prefixes +- **Backward compatibility**: Existing API calls work unchanged + +#### 2. Auto-Recording Feature +- **Per-camera control**: Enable/disable auto-recording individually +- **MQTT integration**: Automatic recording based on machine states +- **Retry logic**: Failed recordings are automatically retried +- **Status tracking**: Monitor auto-recording attempts and failures + +#### 3. Advanced Camera Configuration +- **Real-time settings**: Update exposure, gain, image quality without restart +- **Image enhancement**: Sharpness, contrast, saturation, gamma controls +- **Noise reduction**: Configurable noise filtering and 3D denoising +- **HDR support**: High Dynamic Range imaging capabilities + +#### 4. Live Streaming +- **MJPEG streaming**: Real-time camera preview +- **Concurrent operation**: Stream while recording simultaneously +- **Web-compatible**: Direct integration with React/HTML video elements + +#### 5. Enhanced Monitoring +- **MQTT event history**: Track machine state changes over time +- **Storage statistics**: Monitor disk usage and file counts +- **WebSocket updates**: Real-time system status notifications + +#### 6. Video Streaming Module +- **HTTP Range Requests**: Efficient video seeking and progressive download +- **Thumbnail Generation**: Extract preview images from videos at any timestamp +- **Format Conversion**: Automatic AVI to MP4 conversion for web compatibility +- **Intelligent Caching**: Byte-range caching for optimal streaming performance +- **Admin Tools**: Cache management and video validation endpoints + +### 🔄 Migration Notes + +#### From Previous Versions +1. **Recording API**: All existing calls work, but now return filenames with datetime prefixes +2. **Configuration**: New camera settings are optional and backward compatible +3. 
**Auto-recording**: New feature, requires enabling in `config.json` and per camera + +#### Configuration Updates +```json +{ + "cameras": [ + { + "name": "camera1", + "auto_start_recording_enabled": true, // NEW: Enable auto-recording + "sharpness": 120, // NEW: Image quality settings + "contrast": 110, + "saturation": 100, + "gamma": 100, + "noise_filter_enabled": true, + "hdr_enabled": false + } + ], + "system": { + "auto_recording_enabled": true // NEW: Global auto-recording toggle + } +} +``` + +## 🔗 Related Documentation + +- [📷 Camera Configuration API Guide](api/CAMERA_CONFIG_API.md) - Detailed camera settings +- [🤖 Auto-Recording Feature Guide](features/AUTO_RECORDING_FEATURE_GUIDE.md) - React integration +- [📺 Streaming Guide](guides/STREAMING_GUIDE.md) - Live video streaming +- [🎬 Video Streaming Guide](VIDEO_STREAMING.md) - Video playback and streaming +- [🤖 AI Agent Video Integration Guide](AI_AGENT_VIDEO_INTEGRATION_GUIDE.md) - Complete integration guide for AI agents +- [🔧 Camera Recovery Guide](guides/CAMERA_RECOVERY_GUIDE.md) - Troubleshooting +- [📡 MQTT Logging Guide](guides/MQTT_LOGGING_GUIDE.md) - MQTT configuration + +## 📞 Support & Integration + +### API Base URL +- **Development**: `http://localhost:8000` +- **Production**: Configure in `config.json` under `system.api_host` and `system.api_port` + +### Error Handling +All endpoints return standard HTTP status codes: +- `200`: Success +- `206`: Partial Content (for video range requests) +- `400`: Bad Request (invalid parameters) +- `404`: Resource not found (camera, file, video, etc.) +- `416`: Range Not Satisfiable (invalid video range request) +- `500`: Internal server error +- `503`: Service unavailable (camera manager, MQTT, etc.) 
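The `206`/`416` pair above follows standard HTTP range semantics. As a sketch of how a client or test harness might reason about them (the helper below is illustrative, not the server's actual implementation):

```javascript
// Illustrative sketch (not the server's implementation): resolve a Range
// header against a known file size, mirroring the 206/416 codes above.
function resolveRange(rangeHeader, fileSize) {
  const match = /^bytes=(\d*)-(\d*)$/.exec(rangeHeader || '');
  if (!match) return { status: 200 }; // no (or unsupported) Range -> full body
  const [, startStr, endStr] = match;
  if (startStr === '' && endStr === '') return { status: 416 };
  let start, end;
  if (startStr === '') {
    // Suffix range: last N bytes of the file
    start = Math.max(fileSize - Number(endStr), 0);
    end = fileSize - 1;
  } else {
    start = Number(startStr);
    end = endStr === '' ? fileSize - 1 : Math.min(Number(endStr), fileSize - 1);
  }
  if (start > end || start >= fileSize) return { status: 416 };
  return {
    status: 206,
    contentRange: `bytes ${start}-${end}/${fileSize}`, // Content-Range header value
    contentLength: end - start + 1,
  };
}
```

For example, `Range: bytes=0-1023` against a 50 MB file resolves to a 206 with `Content-Range: bytes 0-1023/52428800`, while a start offset past the end of the file resolves to 416.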
**Video Streaming Specific Errors:**
- `404`: Video file not found or not streamable
- `416`: Invalid range request (malformed Range header)
- `500`: Failed to read video data or generate thumbnail

### Rate Limiting
- No rate limiting currently implemented
- WebSocket connections are limited to reasonable concurrent connections

### CORS Support
- CORS is enabled for web dashboard integration
- Configure allowed origins in the API server settings

diff --git a/api/docs/API_QUICK_REFERENCE.md b/api/docs/API_QUICK_REFERENCE.md new file mode 100644 index 0000000..1ec7a54 --- /dev/null +++ b/api/docs/API_QUICK_REFERENCE.md @@ -0,0 +1,195 @@
# 🚀 USDA Vision Camera System - API Quick Reference

Quick reference for the most commonly used API endpoints. For complete documentation, see [API_DOCUMENTATION.md](API_DOCUMENTATION.md).

## 🔧 System Status

```bash
# Health check
curl http://localhost:8000/health

# System overview
curl http://localhost:8000/system/status

# All cameras
curl http://localhost:8000/cameras

# All machines
curl http://localhost:8000/machines
```

## 🎥 Recording Control

### Start Recording (Basic)
```bash
curl -X POST http://localhost:8000/cameras/camera1/start-recording \
  -H "Content-Type: application/json" \
  -d '{"filename": "test.avi"}'
```

### Start Recording (With Settings)
```bash
curl -X POST http://localhost:8000/cameras/camera1/start-recording \
  -H "Content-Type: application/json" \
  -d '{
    "filename": "high_quality.avi",
    "exposure_ms": 2.0,
    "gain": 4.0,
    "fps": 5.0
  }'
```

### Stop Recording
```bash
curl -X POST http://localhost:8000/cameras/camera1/stop-recording
```

## 🤖 Auto-Recording

```bash
# Enable auto-recording
curl -X POST http://localhost:8000/cameras/camera1/auto-recording/enable

# Disable auto-recording
curl -X POST http://localhost:8000/cameras/camera1/auto-recording/disable

# Check auto-recording status
curl
http://localhost:8000/auto-recording/status +``` + +## 🎛️ Camera Configuration + +```bash +# Get camera config +curl http://localhost:8000/cameras/camera1/config + +# Update camera settings +curl -X PUT http://localhost:8000/cameras/camera1/config \ + -H "Content-Type: application/json" \ + -d '{ + "exposure_ms": 1.5, + "gain": 3.0, + "sharpness": 130 + }' +``` + +## 📺 Live Streaming + +```bash +# Start streaming +curl -X POST http://localhost:8000/cameras/camera1/start-stream + +# Get MJPEG stream (use in browser/video element) +# http://localhost:8000/cameras/camera1/stream + +# Stop streaming +curl -X POST http://localhost:8000/cameras/camera1/stop-stream +``` + +## 🔄 Camera Recovery + +```bash +# Test connection +curl -X POST http://localhost:8000/cameras/camera1/test-connection + +# Reconnect camera +curl -X POST http://localhost:8000/cameras/camera1/reconnect + +# Full reset +curl -X POST http://localhost:8000/cameras/camera1/full-reset +``` + +## 💾 Storage Management + +```bash +# Storage statistics +curl http://localhost:8000/storage/stats + +# List files +curl -X POST http://localhost:8000/storage/files \ + -H "Content-Type: application/json" \ + -d '{"camera_name": "camera1", "limit": 10}' + +# Cleanup old files +curl -X POST http://localhost:8000/storage/cleanup \ + -H "Content-Type: application/json" \ + -d '{"max_age_days": 30}' +``` + +## 📡 MQTT Monitoring + +```bash +# MQTT status +curl http://localhost:8000/mqtt/status + +# Recent MQTT events +curl http://localhost:8000/mqtt/events?limit=10 +``` + +## 🌐 WebSocket Connection + +```javascript +// Connect to real-time updates +const ws = new WebSocket('ws://localhost:8000/ws'); + +ws.onmessage = (event) => { + const update = JSON.parse(event.data); + console.log('Update:', update); +}; +``` + +## 📊 Response Examples + +### System Status Response +```json +{ + "system_started": true, + "mqtt_connected": true, + "cameras": { + "camera1": { + "name": "camera1", + "status": "ACTIVE", + "is_recording": 
false, + "auto_recording_enabled": true + } + }, + "active_recordings": 0, + "total_recordings": 15 +} +``` + +### Recording Start Response +```json +{ + "success": true, + "message": "Recording started for camera1", + "filename": "20240115_103000_test.avi" +} +``` + +### Camera Status Response +```json +{ + "name": "camera1", + "status": "ACTIVE", + "is_recording": false, + "auto_recording_enabled": true, + "auto_recording_active": false, + "auto_recording_failure_count": 0 +} +``` + +## 🔗 Related Documentation + +- [📚 Complete API Documentation](API_DOCUMENTATION.md) +- [🎛️ Camera Configuration Guide](api/CAMERA_CONFIG_API.md) +- [🤖 Auto-Recording Feature Guide](features/AUTO_RECORDING_FEATURE_GUIDE.md) +- [📺 Streaming Guide](guides/STREAMING_GUIDE.md) + +## 💡 Tips + +- All filenames automatically get datetime prefixes: `YYYYMMDD_HHMMSS_` +- Camera settings can be updated in real-time during recording +- Auto-recording is controlled per camera and globally +- WebSocket provides real-time updates for dashboard integration +- CORS is enabled for web application integration diff --git a/api/docs/CURRENT_CONFIGURATION.md b/api/docs/CURRENT_CONFIGURATION.md new file mode 100644 index 0000000..905c657 --- /dev/null +++ b/api/docs/CURRENT_CONFIGURATION.md @@ -0,0 +1,217 @@ +# 📋 Current System Configuration Reference + +## Overview +This document shows the exact current configuration structure of the USDA Vision Camera System, including all fields and their current values. 
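Because this reference is often consumed by tooling, the topic-to-camera mapping it documents can be resolved programmatically from a parsed `config.json`. A hypothetical sketch (the helper name is illustrative, not part of the system):

```javascript
// Hypothetical helper (illustrative, not part of the system): find which
// camera's recording is triggered by a given machine topic key.
function cameraForTopic(config, machineTopic) {
  const cam = (config.cameras || []).find((c) => c.machine_topic === machineTopic);
  return cam ? cam.name : null;
}

// Example with a pared-down config shaped like the one documented below
const config = {
  cameras: [
    { name: 'camera1', machine_topic: 'blower_separator' },
    { name: 'camera2', machine_topic: 'vibratory_conveyor' },
  ],
};
// cameraForTopic(config, 'blower_separator') -> 'camera1'
```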
+ +## 🔧 Complete Configuration Structure + +### System Configuration (`config.json`) + +```json +{ + "mqtt": { + "broker_host": "192.168.1.110", + "broker_port": 1883, + "username": null, + "password": null, + "topics": { + "vibratory_conveyor": "vision/vibratory_conveyor/state", + "blower_separator": "vision/blower_separator/state" + } + }, + "storage": { + "base_path": "/storage", + "max_file_size_mb": 1000, + "max_recording_duration_minutes": 60, + "cleanup_older_than_days": 30 + }, + "system": { + "camera_check_interval_seconds": 2, + "log_level": "DEBUG", + "log_file": "usda_vision_system.log", + "api_host": "0.0.0.0", + "api_port": 8000, + "enable_api": true, + "timezone": "America/New_York", + "auto_recording_enabled": true + }, + "cameras": [ + { + "name": "camera1", + "machine_topic": "blower_separator", + "storage_path": "/storage/camera1", + "exposure_ms": 0.3, + "gain": 4.0, + "target_fps": 0, + "enabled": true, + "video_format": "mp4", + "video_codec": "mp4v", + "video_quality": 95, + "auto_start_recording_enabled": true, + "auto_recording_max_retries": 3, + "auto_recording_retry_delay_seconds": 2, + "sharpness": 0, + "contrast": 100, + "saturation": 100, + "gamma": 100, + "noise_filter_enabled": false, + "denoise_3d_enabled": false, + "auto_white_balance": false, + "color_temperature_preset": 0, + "wb_red_gain": 0.94, + "wb_green_gain": 1.0, + "wb_blue_gain": 0.87, + "anti_flicker_enabled": false, + "light_frequency": 0, + "bit_depth": 8, + "hdr_enabled": false, + "hdr_gain_mode": 2 + }, + { + "name": "camera2", + "machine_topic": "vibratory_conveyor", + "storage_path": "/storage/camera2", + "exposure_ms": 0.2, + "gain": 2.0, + "target_fps": 0, + "enabled": true, + "video_format": "mp4", + "video_codec": "mp4v", + "video_quality": 95, + "auto_start_recording_enabled": true, + "auto_recording_max_retries": 3, + "auto_recording_retry_delay_seconds": 2, + "sharpness": 0, + "contrast": 100, + "saturation": 100, + "gamma": 100, + "noise_filter_enabled": 
false, + "denoise_3d_enabled": false, + "auto_white_balance": false, + "color_temperature_preset": 0, + "wb_red_gain": 1.01, + "wb_green_gain": 1.0, + "wb_blue_gain": 0.87, + "anti_flicker_enabled": false, + "light_frequency": 0, + "bit_depth": 8, + "hdr_enabled": false, + "hdr_gain_mode": 0 + } + ] +} +``` + +## 📊 Configuration Field Reference + +### MQTT Settings +| Field | Value | Description | +|-------|-------|-------------| +| `broker_host` | `"192.168.1.110"` | MQTT broker IP address | +| `broker_port` | `1883` | MQTT broker port | +| `username` | `null` | MQTT authentication (not used) | +| `password` | `null` | MQTT authentication (not used) | + +### MQTT Topics +| Machine | Topic | Camera | +|---------|-------|--------| +| Vibratory Conveyor | `vision/vibratory_conveyor/state` | camera2 | +| Blower Separator | `vision/blower_separator/state` | camera1 | + +### Storage Settings +| Field | Value | Description | +|-------|-------|-------------| +| `base_path` | `"/storage"` | Root storage directory | +| `max_file_size_mb` | `1000` | Maximum file size (1GB) | +| `max_recording_duration_minutes` | `60` | Maximum recording duration | +| `cleanup_older_than_days` | `30` | Auto-cleanup threshold | + +### System Settings +| Field | Value | Description | +|-------|-------|-------------| +| `camera_check_interval_seconds` | `2` | Camera health check interval | +| `log_level` | `"DEBUG"` | Logging verbosity | +| `api_host` | `"0.0.0.0"` | API server bind address | +| `api_port` | `8000` | API server port | +| `timezone` | `"America/New_York"` | System timezone | +| `auto_recording_enabled` | `true` | Enable MQTT-triggered recording | + +## 🎥 Camera Configuration Details + +### Camera 1 (Blower Separator) +| Setting | Value | Description | +|---------|-------|-------------| +| **Basic Settings** | | | +| `name` | `"camera1"` | Camera identifier | +| `machine_topic` | `"blower_separator"` | MQTT topic to monitor | +| `storage_path` | `"/storage/camera1"` | Video 
storage location | +| `exposure_ms` | `0.3` | Exposure time (milliseconds) | +| `gain` | `4.0` | Camera gain multiplier | +| `target_fps` | `0` | Target FPS (0 = unlimited) | +| **Video Recording** | | | +| `video_format` | `"mp4"` | Video file format | +| `video_codec` | `"mp4v"` | Video codec (MPEG-4) | +| `video_quality` | `95` | Video quality (0-100) | +| **Auto Recording** | | | +| `auto_start_recording_enabled` | `true` | Enable auto-recording | +| `auto_recording_max_retries` | `3` | Max retry attempts | +| `auto_recording_retry_delay_seconds` | `2` | Delay between retries | +| **Image Quality** | | | +| `sharpness` | `0` | Sharpness adjustment | +| `contrast` | `100` | Contrast level | +| `saturation` | `100` | Color saturation | +| `gamma` | `100` | Gamma correction | +| **White Balance** | | | +| `auto_white_balance` | `false` | Auto white balance disabled | +| `wb_red_gain` | `0.94` | Red channel gain | +| `wb_green_gain` | `1.0` | Green channel gain | +| `wb_blue_gain` | `0.87` | Blue channel gain | +| **Advanced** | | | +| `bit_depth` | `8` | Color bit depth | +| `hdr_enabled` | `false` | HDR disabled | +| `hdr_gain_mode` | `2` | HDR gain mode | + +### Camera 2 (Vibratory Conveyor) +| Setting | Value | Difference from Camera 1 | +|---------|-------|--------------------------| +| `name` | `"camera2"` | Different identifier | +| `machine_topic` | `"vibratory_conveyor"` | Different MQTT topic | +| `storage_path` | `"/storage/camera2"` | Different storage path | +| `exposure_ms` | `0.2` | Faster exposure (0.2 vs 0.3) | +| `gain` | `2.0` | Lower gain (2.0 vs 4.0) | +| `wb_red_gain` | `1.01` | Different red balance (1.01 vs 0.94) | +| `hdr_gain_mode` | `0` | Different HDR mode (0 vs 2) | + +*All other settings are identical to Camera 1* + +## 🔄 Recent Changes + +### MP4 Format Update +- **Added**: `video_format`, `video_codec`, `video_quality` fields +- **Changed**: Default recording format from AVI to MP4 +- **Impact**: Requires service restart to take 
effect + +### Current Status +- ✅ Configuration updated with MP4 settings +- ⚠️ Service restart required to apply changes +- 📁 Existing AVI files remain accessible + +## 📝 Notes + +1. **Target FPS = 0**: Both cameras use unlimited frame rate for maximum capture speed +2. **Auto Recording**: Both cameras automatically start recording when their respective machines turn on +3. **White Balance**: Manual white balance settings optimized for each camera's environment +4. **Storage**: Each camera has its own dedicated storage directory +5. **Video Quality**: Set to 95/100 for high-quality recordings with MP4 compression benefits + +## 🔧 Configuration Management + +To modify these settings: +1. Edit `config.json` file +2. Restart the camera service: `sudo ./start_system.sh` +3. Verify changes via API: `GET /cameras/{camera_name}/config` + +For real-time settings (exposure, gain, fps), use the API without restart: +```bash +PUT /cameras/{camera_name}/config +``` diff --git a/api/docs/MP4_FORMAT_UPDATE.md b/api/docs/MP4_FORMAT_UPDATE.md new file mode 100644 index 0000000..ecae663 --- /dev/null +++ b/api/docs/MP4_FORMAT_UPDATE.md @@ -0,0 +1,211 @@ +# 🎥 MP4 Video Format Update - Frontend Integration Guide + +## Overview +The USDA Vision Camera System has been updated to record videos in **MP4 format** instead of AVI format for better streaming compatibility and smaller file sizes. 
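Rather than hard-coding `.mp4`, clients can derive the expected recording extension from the per-camera `video_format` configuration field. A hypothetical helper (illustrative only; the default mirrors the system's MP4 default):

```javascript
// Hypothetical helper (illustrative): derive the recording file extension
// from a camera config's video_format field, defaulting to MP4.
function recordingExtension(cameraConfig) {
  const format = (cameraConfig && cameraConfig.video_format) || 'mp4';
  return format === 'avi' ? '.avi' : '.mp4';
}

// recordingExtension({ video_format: 'mp4' }) -> '.mp4'
// recordingExtension({ video_format: 'avi' }) -> '.avi'
```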
## 🔄 What Changed

### Video Format
- **Before**: AVI files with XVID codec (`.avi` extension)
- **After**: MP4 files with MPEG-4 codec (`.mp4` extension)

### File Extensions
- All new video recordings now use the `.mp4` extension
- Existing `.avi` files remain accessible and functional
- File size reduction: ~40% smaller than equivalent AVI files

### API Response Updates
New fields added to camera configuration responses:

```json
{
  "video_format": "mp4",   // File format: "mp4" or "avi"
  "video_codec": "mp4v",   // Video codec: "mp4v", "XVID", "MJPG"
  "video_quality": 95      // Quality: 0-100 (higher = better)
}
```

## 🌐 Frontend Impact

### 1. Video Player Compatibility
**✅ Better Browser Support**
- MP4 format has native support in all modern browsers
- No need for additional codecs or plugins
- Better mobile device compatibility (iOS/Android)

### 2. File Handling Updates
**File Extension Handling**
```javascript
// Update file extension checks
const isVideoFile = (filename) => {
  return filename.endsWith('.mp4') || filename.endsWith('.avi');
};

// Video MIME type detection
const getVideoMimeType = (filename) => {
  if (filename.endsWith('.mp4')) return 'video/mp4';
  if (filename.endsWith('.avi')) return 'video/x-msvideo';
  return 'video/mp4'; // default
};
```

### 3. Video Streaming
**Improved Streaming Performance**
```javascript
// MP4 files can be streamed directly without conversion
const videoUrl = `/api/videos/${videoId}/stream`;

// For HTML5 video element:
// <video controls>
//   <source src={videoUrl} type="video/mp4" />
// </video>
```

### 4.
File Size Display +**Updated Size Expectations** +- MP4 files are ~40% smaller than equivalent AVI files +- Update any file size warnings or storage calculations +- Better compression means faster downloads and uploads + +## 📡 API Changes + +### Camera Configuration Endpoint +**GET** `/cameras/{camera_name}/config` + +**New Response Fields:** +```json +{ + "name": "camera1", + "machine_topic": "blower_separator", + "storage_path": "/storage/camera1", + "exposure_ms": 0.3, + "gain": 4.0, + "target_fps": 0, + "enabled": true, + "video_format": "mp4", + "video_codec": "mp4v", + "video_quality": 95, + "auto_start_recording_enabled": true, + "auto_recording_max_retries": 3, + "auto_recording_retry_delay_seconds": 2, + + // ... other existing fields +} +``` + +### Video Listing Endpoints +**File Extension Updates** +- Video files in responses will now have `.mp4` extensions +- Existing `.avi` files will still appear in listings +- Filter by both extensions when needed + +## 🔧 Configuration Options + +### Video Format Settings +```json +{ + "video_format": "mp4", // Options: "mp4", "avi" + "video_codec": "mp4v", // Options: "mp4v", "XVID", "MJPG" + "video_quality": 95 // Range: 0-100 (higher = better quality) +} +``` + +### Recommended Settings +- **Production**: `"mp4"` format, `"mp4v"` codec, `95` quality +- **Storage Optimized**: `"mp4"` format, `"mp4v"` codec, `85` quality +- **Legacy Mode**: `"avi"` format, `"XVID"` codec, `95` quality + +## 🎯 Frontend Implementation Checklist + +### ✅ Video Player Updates +- [ ] Verify HTML5 video player works with MP4 files +- [ ] Update video MIME type handling +- [ ] Test streaming performance with new format + +### ✅ File Management +- [ ] Update file extension filters to include `.mp4` +- [ ] Modify file type detection logic +- [ ] Update download/upload handling for MP4 files + +### ✅ UI/UX Updates +- [ ] Update file size expectations in UI +- [ ] Modify any format-specific icons or indicators +- [ ] Update help text or 
tooltips mentioning video formats + +### ✅ Configuration Interface +- [ ] Add video format settings to camera config UI +- [ ] Include video quality slider/selector +- [ ] Add restart warning for video format changes + +### ✅ Testing +- [ ] Test video playback with new MP4 files +- [ ] Verify backward compatibility with existing AVI files +- [ ] Test streaming performance and loading times + +## 🔄 Backward Compatibility + +### Existing AVI Files +- All existing `.avi` files remain fully functional +- No conversion or migration required +- Video player should handle both formats + +### API Compatibility +- All existing API endpoints continue to work +- New fields are additive (won't break existing code) +- Default values provided for new configuration fields + +## 📊 Performance Benefits + +### File Size Reduction +``` +Example 5-minute recording at 1280x1024: +- AVI/XVID: ~180 MB +- MP4/MPEG-4: ~108 MB (40% reduction) +``` + +### Streaming Improvements +- Faster initial load times +- Better progressive download support +- Reduced bandwidth usage +- Native browser optimization + +### Storage Efficiency +- More recordings fit in same storage space +- Faster backup and transfer operations +- Reduced storage costs over time + +## 🚨 Important Notes + +### Restart Required +- Video format changes require camera service restart +- Mark video format settings as "restart required" in UI +- Provide clear user feedback about restart necessity + +### Browser Compatibility +- MP4 format supported in all modern browsers +- Better mobile device support than AVI +- No additional plugins or codecs needed + +### Quality Assurance +- Video quality maintained at 95/100 setting +- No visual degradation compared to AVI +- High bitrate ensures professional quality + +## 🔗 Related Documentation + +- [API Documentation](API_DOCUMENTATION.md) - Complete API reference +- [Camera Configuration API](api/CAMERA_CONFIG_API.md) - Detailed config options +- [Video Streaming 
Guide](VIDEO_STREAMING.md) - Streaming implementation +- [MP4 Conversion Summary](../MP4_CONVERSION_SUMMARY.md) - Technical details + +## 📞 Support + +If you encounter any issues with the MP4 format update: + +1. **Video Playback Issues**: Check browser console for codec errors +2. **File Size Concerns**: Verify quality settings in camera config +3. **Streaming Problems**: Test with both MP4 and AVI files for comparison +4. **API Integration**: Refer to updated API documentation + +The MP4 format provides better web compatibility and performance while maintaining the same high video quality required for the USDA vision system. diff --git a/api/docs/PROJECT_COMPLETE.md b/api/docs/PROJECT_COMPLETE.md new file mode 100644 index 0000000..0f4df48 --- /dev/null +++ b/api/docs/PROJECT_COMPLETE.md @@ -0,0 +1,212 @@ +# 🎉 USDA Vision Camera System - PROJECT COMPLETE! + +## ✅ Final Status: READY FOR PRODUCTION + +The USDA Vision Camera System has been successfully implemented, tested, and documented. All requirements have been met and the system is production-ready. 
+ +## 📋 Completed Requirements + +### ✅ Core Functionality +- **MQTT Integration**: Dual topic listening for machine states +- **Automatic Recording**: Camera recording triggered by machine on/off states +- **GigE Camera Support**: Full integration with camera SDK library +- **Multi-threading**: Concurrent MQTT + camera monitoring + recording +- **File Management**: Timestamp-based naming in organized directories + +### ✅ Advanced Features +- **REST API**: Complete FastAPI server with all endpoints +- **WebSocket Support**: Real-time updates for dashboard integration +- **Time Synchronization**: Atlanta, Georgia timezone with NTP sync +- **Storage Management**: File indexing, cleanup, and statistics +- **Comprehensive Logging**: Rotating logs with error tracking +- **Configuration System**: JSON-based configuration management + +### ✅ Documentation & Testing +- **Complete README**: Installation, usage, API docs, troubleshooting +- **Test Suite**: Comprehensive system testing (`test_system.py`) +- **Time Verification**: Timezone and sync testing (`check_time.py`) +- **Startup Scripts**: Easy deployment with `start_system.sh` +- **Clean Repository**: Organized structure with proper .gitignore + +## 🏗️ Final Project Structure + +``` +USDA-Vision-Cameras/ +├── README.md # Complete documentation +├── main.py # System entry point +├── config.json # System configuration +├── requirements.txt # Python dependencies +├── pyproject.toml # UV package configuration +├── .gitignore # Git ignore rules +├── start_system.sh # Startup script +├── setup_timezone.sh # Time sync setup +├── test_system.py # System test suite +├── check_time.py # Time verification +├── test_timezone.py # Timezone testing +├── usda_vision_system/ # Main application +│ ├── core/ # Core functionality +│ ├── mqtt/ # MQTT integration +│ ├── camera/ # Camera management +│ ├── storage/ # File management +│ ├── api/ # REST API server +│ └── main.py # Application coordinator +├── camera_sdk/ # GigE camera SDK 
library
├── demos/                 # Demo and example code
│   ├── cv_grab*.py        # Camera SDK usage examples
│   └── mqtt_*.py          # MQTT demo scripts
├── storage/               # Recording storage
│   ├── camera1/           # Camera 1 recordings
│   └── camera2/           # Camera 2 recordings
├── tests/                 # Test files and legacy tests
├── notebooks/             # Jupyter notebooks
└── docs/                  # Documentation files
```

## 🚀 How to Deploy

### 1. Clone and Setup
```bash
git clone https://github.com/your-username/USDA-Vision-Cameras.git
cd USDA-Vision-Cameras
uv sync
```

### 2. Configure System
```bash
# Edit config.json for your environment
# Set MQTT broker, camera settings, storage paths
```

### 3. Setup Time Sync
```bash
./setup_timezone.sh
```

### 4. Test System
```bash
python test_system.py
```

### 5. Start System
```bash
./start_system.sh
```

## 🌐 API Integration

### Dashboard Integration
```javascript
// React integration example (run inside an async function)
const systemStatus = await (await fetch('http://localhost:8000/system/status')).json();
const cameras = await (await fetch('http://localhost:8000/cameras')).json();

// WebSocket for real-time updates
const ws = new WebSocket('ws://localhost:8000/ws');
ws.onmessage = (event) => {
  const update = JSON.parse(event.data);
  // Handle real-time system updates
};
```

### Manual Control
```bash
# Start recording manually
curl -X POST http://localhost:8000/cameras/camera1/start-recording

# Stop recording manually
curl -X POST http://localhost:8000/cameras/camera1/stop-recording

# Get system status
curl http://localhost:8000/system/status
```

## 📊 System Capabilities

### Discovered Hardware
- **2 GigE Cameras**: Blower-Yield-Cam, Cracker-Cam
- **Network Ready**: Cameras accessible at 192.168.1.165, 192.168.1.167
- **MQTT Ready**: Configured for broker at 192.168.1.110

### Recording Features
- **Automatic Start/Stop**: Based on MQTT machine states
- **Timezone Aware**: Atlanta time timestamps (EST/EDT)
- **Organized Storage**: Separate directories per camera
- **File Naming**: `camera1_recording_20250725_213000.avi`
- **Manual Control**: API endpoints for manual recording

### Monitoring Features
- **Real-time Status**: Camera and machine state monitoring
- **Health Checks**: Automatic system health verification
- **Performance Tracking**: Recording metrics and system stats
- **Error Handling**: Comprehensive error tracking and recovery

## 🔧 Maintenance

### Regular Tasks
- **Log Monitoring**: Check `usda_vision_system.log`
- **Storage Cleanup**: Automatic cleanup of old recordings
- **Time Sync**: Automatic NTP synchronization
- **Health Checks**: Built-in system monitoring

### Troubleshooting
- **Test Suite**: `python test_system.py`
- **Time Check**: `python check_time.py`
- **API Health**: `curl http://localhost:8000/health`
- **Debug Mode**: `python main.py --log-level DEBUG`

## 🎯 Production Readiness

### ✅ All Tests Passing
- System initialization: ✅
- Camera discovery: ✅ (2 cameras found)
- MQTT configuration: ✅
- Storage setup: ✅
- Time synchronization: ✅
- API endpoints: ✅

### ✅ Documentation Complete
- Installation guide: ✅
- Configuration reference: ✅
- API documentation: ✅
- Troubleshooting guide: ✅
- Integration examples: ✅

### ✅ Production Features
- Error handling: ✅
- Logging system: ✅
- Time synchronization: ✅
- Storage management: ✅
- API security: ✅
- Performance monitoring: ✅

## 🚀 Next Steps

The system is now ready for:

1. **Production Deployment**: Deploy on target hardware
2. **Dashboard Integration**: Connect to React + Supabase dashboard
3. **MQTT Configuration**: Connect to production MQTT broker
4. **Camera Calibration**: Fine-tune camera settings for production
5. **Monitoring Setup**: Configure production monitoring and alerts

## 📞 Support

For ongoing support:
- **Documentation**: Complete README.md with troubleshooting
- **Test Suite**: Comprehensive diagnostic tools
- **Logging**: Detailed system logs for debugging
- **API Health**: Built-in health check endpoints

---

**🎊 PROJECT STATUS: COMPLETE AND PRODUCTION-READY! 🎊**

The USDA Vision Camera System is fully implemented, tested, and documented. All original requirements have been met, and the system is ready for production deployment with your React dashboard integration.

**Key Achievements:**
- ✅ Dual MQTT topic monitoring
- ✅ Automatic camera recording
- ✅ Atlanta timezone synchronization
- ✅ Complete REST API
- ✅ Comprehensive documentation
- ✅ Production-ready deployment

diff --git a/api/docs/REACT_INTEGRATION_GUIDE.md b/api/docs/REACT_INTEGRATION_GUIDE.md
new file mode 100644
index 0000000..29170f9
--- /dev/null
+++ b/api/docs/REACT_INTEGRATION_GUIDE.md
@@ -0,0 +1,276 @@

# 🚀 React Frontend Integration Guide - MP4 Update

## 🎯 Quick Summary for React Team

The camera system now records in **MP4 format** instead of AVI. This provides better web compatibility and smaller file sizes.

## 🔄 What You Need to Update

### 1. File Extension Handling
```javascript
// OLD: Only checked for .avi
const isVideoFile = (filename) => filename.endsWith('.avi');

// NEW: Check for both formats
const isVideoFile = (filename) => {
  return filename.endsWith('.mp4') || filename.endsWith('.avi');
};

// Video MIME types
const getVideoMimeType = (filename) => {
  if (filename.endsWith('.mp4')) return 'video/mp4';
  if (filename.endsWith('.avi')) return 'video/x-msvideo';
  return 'video/mp4'; // default for new files
};
```

### 2. Video Player Component
```jsx
// MP4 files work better with HTML5 video
const VideoPlayer = ({ videoUrl, filename }) => {
  const mimeType = getVideoMimeType(filename);

  return (
    <video controls preload="metadata" width="100%">
      <source src={videoUrl} type={mimeType} />
      Your browser does not support the video tag.
    </video>
  );
};
```

### 3.
Camera Configuration Interface

Add these new fields to your camera config forms:

```jsx
const CameraConfigForm = () => {
  const [config, setConfig] = useState({
    // ... existing fields
    video_format: 'mp4',   // 'mp4' or 'avi'
    video_codec: 'mp4v',   // 'mp4v', 'XVID', 'MJPG'
    video_quality: 95      // 0-100
  });

  return (
    <form>
      {/* ... existing fields */}

      <fieldset>
        <legend>Video Recording Settings</legend>

        <label>
          Video Format
          <select
            value={config.video_format}
            onChange={(e) => setConfig({...config, video_format: e.target.value})}
          >
            <option value="mp4">MP4</option>
            <option value="avi">AVI</option>
          </select>
        </label>

        <label>
          Video Codec
          <select
            value={config.video_codec}
            onChange={(e) => setConfig({...config, video_codec: e.target.value})}
          >
            <option value="mp4v">mp4v</option>
            <option value="XVID">XVID</option>
            <option value="MJPG">MJPG</option>
          </select>
        </label>

        <label>
          Video Quality (0-100)
          <input
            type="number"
            min="0"
            max="100"
            value={config.video_quality}
            onChange={(e) => setConfig({...config, video_quality: parseInt(e.target.value)})}
          />
        </label>

        <p className="warning">⚠️ Video format changes require camera restart</p>
      </fieldset>
    </form>
  );
};
```

## 📡 API Response Changes

### Camera Configuration Response
```json
{
  "name": "camera1",
  "machine_topic": "blower_separator",
  "storage_path": "/storage/camera1",
  "exposure_ms": 0.3,
  "gain": 4.0,
  "target_fps": 0,
  "enabled": true,
  "video_format": "mp4",
  "video_codec": "mp4v",
  "video_quality": 95,
  "auto_start_recording_enabled": true,
  "auto_recording_max_retries": 3,
  "auto_recording_retry_delay_seconds": 2,

  // ... other existing fields
}
```

### Video File Listings
```json
{
  "videos": [
    {
      "file_id": "camera1_recording_20250804_143022.mp4",
      "filename": "camera1_recording_20250804_143022.mp4",
      "format": "mp4",
      "file_size_bytes": 31457280,
      "created_at": "2025-08-04T14:30:22"
    }
  ]
}
```

## 🎨 UI/UX Improvements

### File Size Display
```javascript
// MP4 files are ~40% smaller
const formatFileSize = (bytes) => {
  const mb = bytes / (1024 * 1024);
  return `${mb.toFixed(1)} MB`;
};

// Show format in file listings
const FileListItem = ({ video }) => (
  <div className="file-list-item">
    <span className="filename">{video.filename}</span>
    <span className={`format ${video.format}`}>
      {video.format.toUpperCase()}
    </span>
    <span className="file-size">{formatFileSize(video.file_size_bytes)}</span>
  </div>
);
```

### Format Indicators
```css
.format.mp4 {
  background: #4CAF50;
  color: white;
  padding: 2px 6px;
  border-radius: 3px;
  font-size: 0.8em;
}

.format.avi {
  background: #FF9800;
  color: white;
  padding: 2px 6px;
  border-radius: 3px;
  font-size: 0.8em;
}
```

## ⚡ Performance Benefits

### Streaming Improvements
- **Faster Loading**: MP4 files start playing sooner
- **Better Seeking**: More responsive video scrubbing
- **Mobile Friendly**: Better iOS/Android compatibility
- **Bandwidth Savings**: 40% smaller files = faster transfers

### Implementation Tips
```javascript
// Preload video metadata for better UX
const VideoThumbnail = ({ videoUrl }) => (
  <video src={videoUrl} preload="metadata" muted />
);
```

## 🔧 Configuration Management

### Restart Warning Component
```jsx
const RestartWarning = ({ show }) => {
  if (!show) return null;

  return (
    <div className="restart-warning">
      <strong>⚠️ Restart Required</strong>
      <p>
        Video format changes require a camera service restart to take effect.
      </p>
      <button className="restart-button">Restart Camera Service</button>
    </div>
  );
};
```

### Settings Validation
```javascript
const validateVideoSettings = (settings) => {
  const errors = {};

  if (!['mp4', 'avi'].includes(settings.video_format)) {
    errors.video_format = 'Must be mp4 or avi';
  }

  if (!['mp4v', 'XVID', 'MJPG'].includes(settings.video_codec)) {
    errors.video_codec = 'Invalid codec';
  }

  if (settings.video_quality < 50 || settings.video_quality > 100) {
    errors.video_quality = 'Quality must be between 50-100';
  }

  return errors;
};
```

## 📱 Mobile Considerations

### Responsive Video Player
```jsx
const ResponsiveVideoPlayer = ({ videoUrl, filename }) => (
  <div className="video-container">
    <video controls playsInline preload="metadata">
      <source src={videoUrl} type={getVideoMimeType(filename)} />
    </video>
  </div>
);
```

## 🧪 Testing Checklist

- [ ] Video playback works with new MP4 files
- [ ] File extension filtering includes both .mp4 and .avi
- [ ] Camera configuration UI shows video format options
- [ ] Restart warning appears for video format changes
- [ ] File size displays are updated for smaller MP4 files
- [ ] Mobile video playback works correctly
- [ ] Video streaming performance is improved
- [ ] Backward compatibility with existing AVI files

## 📞 Support

If you encounter issues:

1. **Video won't play**: Check browser console for codec errors
2. **File size unexpected**: Verify quality settings in camera config
3. **Streaming slow**: Compare MP4 vs AVI performance
4. **Mobile issues**: Ensure `playsInline` attribute is set

The MP4 update provides significant improvements in web compatibility and performance while maintaining full backward compatibility with existing AVI files.

diff --git a/api/docs/README.md b/api/docs/README.md
new file mode 100644
index 0000000..5ba7b70
--- /dev/null
+++ b/api/docs/README.md
@@ -0,0 +1,100 @@

# USDA Vision Camera System - Documentation

This directory contains detailed documentation for the USDA Vision Camera System.

## Documentation Files

### 🚀 [API_DOCUMENTATION.md](API_DOCUMENTATION.md) **⭐ NEW**
**Complete API reference documentation** covering all endpoints, features, and recent enhancements:
- System status and health monitoring
- Camera management and configuration
- Recording control with dynamic settings
- Auto-recording management
- MQTT and machine status
- Storage and file management
- Camera recovery and diagnostics
- Live streaming capabilities
- WebSocket real-time updates
- Quick start examples and migration notes

### ⚡ [API_QUICK_REFERENCE.md](API_QUICK_REFERENCE.md) **⭐ NEW**
**Quick reference card** for the most commonly used API endpoints with curl examples and response formats.
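For orientation, the kind of call the quick-reference card documents can be wrapped in a small client helper. This is a sketch only — the base URL and endpoint paths are taken from the curl examples in these docs, and the helper names (`apiUrl`, `getJson`) are illustrative, not part of a provided client library:

```javascript
// Illustrative helper around the documented endpoints; adjust API_BASE
// for your deployment.
const API_BASE = 'http://localhost:8000';

// Join the base URL and an endpoint path, normalizing stray slashes.
function apiUrl(path, base = API_BASE) {
  return `${base.replace(/\/+$/, '')}/${path.replace(/^\/+/, '')}`;
}

// Fetch a JSON endpoint, throwing on non-2xx responses.
async function getJson(path) {
  const res = await fetch(apiUrl(path));
  if (!res.ok) throw new Error(`HTTP ${res.status} for ${path}`);
  return res.json();
}

// Example (only works while the camera system API is running):
// getJson('system/status').then(console.log);
```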
### 📋 [PROJECT_COMPLETE.md](PROJECT_COMPLETE.md)
Complete project overview and final status documentation. Contains:
- Project completion status
- Final system architecture
- Deployment instructions
- Production readiness checklist

### 🎥 [MP4_FORMAT_UPDATE.md](MP4_FORMAT_UPDATE.md) **⭐ NEW**
**Frontend integration guide** for the MP4 video format update:
- Video format changes from AVI to MP4
- Frontend implementation checklist
- API response updates
- Performance benefits and browser compatibility

### 🚀 [REACT_INTEGRATION_GUIDE.md](REACT_INTEGRATION_GUIDE.md) **⭐ NEW**
**Quick reference for React developers** implementing the MP4 format changes:
- Code examples and components
- File handling updates
- Configuration interface
- Testing checklist

### 📋 [CURRENT_CONFIGURATION.md](CURRENT_CONFIGURATION.md) **⭐ NEW**
**Complete current system configuration reference**:
- Exact config.json structure with all current values
- Field-by-field documentation
- Camera-specific settings comparison
- MQTT topics and machine mappings

### 🎬 [VIDEO_STREAMING.md](VIDEO_STREAMING.md) **⭐ UPDATED**
**Complete video streaming module documentation**:
- Comprehensive API endpoint documentation
- Authentication and security information
- Error handling and troubleshooting
- Performance optimization guidelines

### 🤖 [AI_AGENT_VIDEO_INTEGRATION_GUIDE.md](AI_AGENT_VIDEO_INTEGRATION_GUIDE.md) **⭐ NEW**
**Complete integration guide for AI agents and external systems**:
- Step-by-step integration workflow
- Programming language examples (Python, JavaScript)
- Error handling and debugging strategies
- Performance optimization recommendations

### 🔧 [API_CHANGES_SUMMARY.md](API_CHANGES_SUMMARY.md)
Summary of API changes and enhancements made to the system.

### 📷 [CAMERA_RECOVERY_GUIDE.md](CAMERA_RECOVERY_GUIDE.md)
Guide for camera recovery procedures and troubleshooting camera-related issues.
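As a taste of the recovery workflow, a dashboard or script can poll the system's health endpoint with exponential backoff while a camera service restarts. This is an illustrative sketch — the `/health` path matches the troubleshooting examples in these docs, but the retry counts and delays are assumptions, not values from the recovery guide:

```javascript
// Exponential backoff delay: doubles each attempt, capped at maxMs.
function backoffDelayMs(attempt, baseMs = 500, maxMs = 8000) {
  return Math.min(baseMs * 2 ** attempt, maxMs);
}

// Poll the health endpoint until it answers OK or attempts run out.
async function waitForHealthy(url = 'http://localhost:8000/health', maxAttempts = 5) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      const res = await fetch(url);
      if (res.ok) return true;
    } catch (_) {
      // Service not reachable yet; fall through to the backoff sleep.
    }
    await new Promise((resolve) => setTimeout(resolve, backoffDelayMs(attempt)));
  }
  return false;
}

// Example (only while the API is running):
// waitForHealthy().then((ok) => console.log(ok ? 'recovered' : 'still down'));
```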
### 📡 [MQTT_LOGGING_GUIDE.md](MQTT_LOGGING_GUIDE.md)
Comprehensive guide for MQTT logging configuration and troubleshooting.

## Main Documentation

The main system documentation is located in the root directory:
- **[../README.md](../README.md)** - Primary system documentation with installation, configuration, and usage instructions

## Additional Resources

### Demo Code
- **[../demos/](../demos/)** - Demo scripts and camera SDK examples

### Test Files
- **[../tests/](../tests/)** - Test scripts and legacy test files

### Jupyter Notebooks
- **[../notebooks/](../notebooks/)** - Interactive notebooks for system exploration and testing

## Quick Links

- [System Installation](../README.md#installation)
- [Configuration Guide](../README.md#configuration)
- [API Documentation](../README.md#api-reference)
- [Troubleshooting](../README.md#troubleshooting)
- [Camera SDK Examples](../demos/camera_sdk_examples/)

## Support

For technical support and questions, refer to the main [README.md](../README.md) troubleshooting section or check the system logs.

diff --git a/api/docs/VIDEO_STREAMING.md b/api/docs/VIDEO_STREAMING.md
new file mode 100644
index 0000000..69b9d6e
--- /dev/null
+++ b/api/docs/VIDEO_STREAMING.md
@@ -0,0 +1,601 @@

# 🎬 Video Streaming Module

The USDA Vision Camera System now includes a modular video streaming system that provides YouTube-like video playback capabilities for your React web application.

## 🌟 Features

- **Progressive Streaming** - True chunked streaming for web browsers (no download required)
- **HTTP Range Request Support** - Enables seeking and progressive download with 206 Partial Content
- **Native MP4 Support** - Direct streaming of MP4 files optimized for web playback
- **Memory Efficient** - 8KB chunked delivery, no large file loading into memory
- **Browser Compatible** - Works with HTML5 `