feat: Add CameraPreviewModal component for live camera streaming
feat: Implement useAuth hook for user authentication management
feat: Create useAutoRecording hook for managing automatic recording functionality
feat: Develop AutoRecordingManager to handle automatic recording based on MQTT events
test: Add test script to verify camera configuration API fix
test: Create HTML page for testing camera configuration API and auto-recording fields
API Documentations/AI_AGENT_INSTRUCTIONS.md (new file, 175 lines)
@@ -0,0 +1,175 @@
|
||||
# Instructions for AI Agent: Auto-Recording Feature Integration
|
||||
|
||||
## 🎯 Task Overview
|
||||
Update the React application to support the new auto-recording feature that has been added to the USDA Vision Camera System backend.
|
||||
|
||||
## 📋 What You Need to Know
|
||||
|
||||
### System Context
|
||||
- **Camera 1** monitors the **vibratory conveyor** (conveyor/cracker cam)
|
||||
- **Camera 2** monitors the **blower separator** machine
|
||||
- Auto-recording automatically starts when machines turn ON and stops when they turn OFF
|
||||
- The system includes retry logic for failed recording attempts
|
||||
- Manual recording always takes precedence over auto-recording
|
||||
|
||||
### New Backend Capabilities
|
||||
The backend now supports:
|
||||
1. **Automatic recording** triggered by MQTT machine state changes
|
||||
2. **Retry mechanism** for failed recording attempts (configurable retries and delays)
|
||||
3. **Status tracking** for auto-recording state, failures, and attempts
|
||||
4. **API endpoints** for enabling/disabling and monitoring auto-recording
|
||||
|
||||
## 🔧 Required React App Changes
|
||||
|
||||
### 1. Update TypeScript Interfaces
|
||||
|
||||
Add these new fields to the existing `CameraStatusResponse` interface:
|
||||
```typescript
|
||||
interface CameraStatusResponse {
|
||||
// ... existing fields
|
||||
auto_recording_enabled: boolean;
|
||||
auto_recording_active: boolean;
|
||||
auto_recording_failure_count: number;
|
||||
auto_recording_last_attempt?: string;
|
||||
auto_recording_last_error?: string;
|
||||
}
|
||||
```
|
||||
|
||||
Add new response types:
|
||||
```typescript
|
||||
interface AutoRecordingConfigResponse {
|
||||
success: boolean;
|
||||
message: string;
|
||||
camera_name: string;
|
||||
enabled: boolean;
|
||||
}
|
||||
|
||||
interface AutoRecordingStatusResponse {
|
||||
running: boolean;
|
||||
auto_recording_enabled: boolean;
|
||||
retry_queue: Record<string, any>;
|
||||
enabled_cameras: string[];
|
||||
}
|
||||
```
|
||||
|
||||
### 2. Add New API Endpoints
|
||||
|
||||
```typescript
|
||||
// Enable auto-recording for a camera
|
||||
POST /cameras/{camera_name}/auto-recording/enable
|
||||
|
||||
// Disable auto-recording for a camera
|
||||
POST /cameras/{camera_name}/auto-recording/disable
|
||||
|
||||
// Get overall auto-recording system status
|
||||
GET /auto-recording/status
|
||||
```
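
A minimal sketch of service-layer wrappers for these endpoints, assuming a fetch-based client and the `REACT_APP_CAMERA_API_URL` environment variable described later in this guide (the exact shape of your API layer may differ):

```typescript
// Hypothetical API helpers for the auto-recording endpoints listed above.
const BASE_URL = process.env.REACT_APP_CAMERA_API_URL || 'http://localhost:8000';

export async function enableAutoRecording(cameraName: string): Promise<AutoRecordingConfigResponse> {
  const res = await fetch(`${BASE_URL}/cameras/${cameraName}/auto-recording/enable`, { method: 'POST' });
  if (!res.ok) throw new Error(`Enable failed: ${res.status}`);
  return res.json();
}

export async function disableAutoRecording(cameraName: string): Promise<AutoRecordingConfigResponse> {
  const res = await fetch(`${BASE_URL}/cameras/${cameraName}/auto-recording/disable`, { method: 'POST' });
  if (!res.ok) throw new Error(`Disable failed: ${res.status}`);
  return res.json();
}

export async function getAutoRecordingStatus(): Promise<AutoRecordingStatusResponse> {
  const res = await fetch(`${BASE_URL}/auto-recording/status`);
  if (!res.ok) throw new Error(`Status failed: ${res.status}`);
  return res.json();
}
```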
|
||||
|
||||
### 3. UI Components to Add/Update
|
||||
|
||||
#### Camera Status Display
|
||||
- Add auto-recording status badge/indicator
|
||||
- Show auto-recording enabled/disabled state
|
||||
- Display failure count if > 0
|
||||
- Show last error message if any
|
||||
- Distinguish between manual and auto-recording states
|
||||
|
||||
#### Auto-Recording Controls
|
||||
- Toggle switch to enable/disable auto-recording per camera
|
||||
- System-wide auto-recording status display
|
||||
- Retry queue information
|
||||
- Machine state correlation display
|
||||
|
||||
#### Error Handling
|
||||
- Clear display of auto-recording failures
|
||||
- Retry attempt information
|
||||
- Last attempt timestamp
|
||||
- Quick retry/reset actions
|
||||
|
||||
### 4. Visual Design Guidelines
|
||||
|
||||
**Status Priority (highest to lowest):**
|
||||
1. Manual Recording (red/prominent) - user initiated
|
||||
2. Auto-Recording Active (green) - machine ON, recording
|
||||
3. Auto-Recording Enabled (blue) - ready but machine OFF
|
||||
4. Auto-Recording Disabled (gray) - feature disabled
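
One way to encode this priority order is a small helper that maps a camera's status fields to a single display state. This is a sketch only; it assumes that a manual recording is indicated by `is_recording` being true while `auto_recording_active` is false, which should be verified against the backend:

```typescript
type RecordingDisplayState = 'manual-recording' | 'auto-active' | 'auto-enabled' | 'auto-disabled';

// Maps camera status fields to the display priority described above.
function getRecordingDisplayState(camera: CameraStatusResponse): RecordingDisplayState {
  if (camera.is_recording && !camera.auto_recording_active) return 'manual-recording';
  if (camera.auto_recording_active) return 'auto-active';
  if (camera.auto_recording_enabled) return 'auto-enabled';
  return 'auto-disabled';
}
```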
|
||||
|
||||
**Machine Correlation:**
|
||||
- Show machine name next to camera (e.g., "Vibratory Conveyor", "Blower Separator")
|
||||
- Display machine ON/OFF status
|
||||
- Alert if machine is ON but auto-recording failed
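
A simple lookup table covering the camera-to-machine mapping described in this document (labels and topic names follow the examples in `config.json`; adjust to whatever identifiers your backend reports):

```typescript
// Camera-to-machine mapping from the system context above.
const CAMERA_MACHINES: Record<string, { label: string; topic: string }> = {
  camera1: { label: 'Vibratory Conveyor', topic: 'vibratory_conveyor' },
  camera2: { label: 'Blower Separator', topic: 'blower_separator' },
};
```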
|
||||
|
||||
## 🎨 Specific Implementation Tasks
|
||||
|
||||
### Task 1: Update Camera Cards
|
||||
- Add auto-recording status indicators
|
||||
- Add enable/disable toggle controls
|
||||
- Show machine state correlation
|
||||
- Display failure information when relevant
|
||||
|
||||
### Task 2: Create Auto-Recording Dashboard
|
||||
- Overall system status
|
||||
- List of enabled cameras
|
||||
- Active retry queue display
|
||||
- Recent events/errors
|
||||
|
||||
### Task 3: Update Recording Status Logic
|
||||
- Distinguish between manual and auto-recording
|
||||
- Show appropriate controls based on recording type
|
||||
- Handle manual override scenarios
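
One possible way to branch the controls, again under the assumption that a manual recording is implied by `is_recording` without `auto_recording_active` (verify against the backend's actual behavior):

```typescript
// Decide which recording controls a camera card should show.
interface RecordingControls {
  showStopManual: boolean;   // manual recording in progress
  showStartManual: boolean;  // idle, manual start allowed
  showAutoToggle: boolean;   // auto-recording enable/disable switch
}

function getRecordingControls(camera: CameraStatusResponse): RecordingControls {
  const manual = camera.is_recording && !camera.auto_recording_active;
  return {
    showStopManual: manual,
    showStartManual: !camera.is_recording,
    showAutoToggle: true, // always available; manual recording still takes precedence
  };
}
```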
|
||||
|
||||
### Task 4: Add Error Handling
|
||||
- Display auto-recording failures clearly
|
||||
- Show retry attempts and timing
|
||||
- Provide manual retry options
|
||||
|
||||
## 📱 User Experience Requirements
|
||||
|
||||
### Key Behaviors
|
||||
1. **Non-Intrusive:** Auto-recording status shouldn't clutter the main interface
|
||||
2. **Clear Hierarchy:** Manual controls should be more prominent than auto-recording
|
||||
3. **Informative:** Users should understand why recording started/stopped
|
||||
4. **Actionable:** Clear options to enable/disable or retry failed attempts
|
||||
|
||||
### Mobile Considerations
|
||||
- Auto-recording controls should work well on mobile
|
||||
- Status information should be readable on small screens
|
||||
- Consider collapsible sections for detailed information
|
||||
|
||||
## 🔍 Testing Requirements
|
||||
|
||||
Ensure the React app correctly handles:
|
||||
- [ ] Toggling auto-recording on/off per camera
|
||||
- [ ] Displaying real-time status updates
|
||||
- [ ] Showing error states and retry information
|
||||
- [ ] Manual recording override scenarios
|
||||
- [ ] Machine state changes and correlation
|
||||
- [ ] Mobile interface functionality
|
||||
|
||||
## 📚 Reference Files
|
||||
|
||||
Key files to review for implementation details:
|
||||
- `AUTO_RECORDING_FEATURE_GUIDE.md` - Comprehensive technical details
|
||||
- `api-endpoints.http` - API endpoint documentation
|
||||
- `config.json` - Configuration structure
|
||||
- `usda_vision_system/api/models.py` - Response type definitions
|
||||
|
||||
## 🎯 Success Criteria
|
||||
|
||||
The React app should:
|
||||
1. **Display** auto-recording status for each camera clearly
|
||||
2. **Allow** users to enable/disable auto-recording per camera
|
||||
3. **Show** machine state correlation and recording triggers
|
||||
4. **Handle** error states and retry scenarios gracefully
|
||||
5. **Maintain** existing manual recording functionality
|
||||
6. **Provide** clear visual hierarchy between manual and auto-recording
|
||||
|
||||
## 💡 Implementation Tips
|
||||
|
||||
1. **Start Small:** Begin with basic status display, then add controls
|
||||
2. **Use Existing Patterns:** Follow the current app's design patterns
|
||||
3. **Test Incrementally:** Test each feature as you add it
|
||||
4. **Consider State Management:** Update your state management to handle new data
|
||||
5. **Mobile First:** Ensure mobile usability from the start
|
||||
|
||||
The goal is to seamlessly integrate auto-recording capabilities while maintaining the existing user experience and adding valuable automation features for the camera operators.
|
||||
@@ -7,12 +7,18 @@ This guide is specifically designed for AI assistants to understand and implemen
|
||||
The USDA Vision Camera system provides live video streaming through REST API endpoints. Streaming uses the MJPEG format, which is natively supported by HTML `<img>` tags and can easily be integrated into React components.
|
||||
|
||||
### Key Characteristics:
|
||||
- **Base URL**: `http://localhost:8000` (configurable)
|
||||
- **Base URL**: `http://vision:8000` (production) or `http://localhost:8000` (development)
|
||||
- **Stream Format**: MJPEG (Motion JPEG)
|
||||
- **Content-Type**: `multipart/x-mixed-replace; boundary=frame`
|
||||
- **Authentication**: None (add if needed for production)
|
||||
- **CORS**: Enabled for all origins (configure for production)
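
Because the stream is MJPEG, the simplest possible integration is an `<img>` element pointed at the stream endpoint. A minimal sketch is shown below; the `CameraStream` component later in this guide adds loading and error handling on top of this:

```tsx
import React from 'react';

// Minimal MJPEG embed; the browser keeps the multipart stream open for the <img>.
const LiveView = ({ cameraName, apiBaseUrl = 'http://localhost:8000' }: { cameraName: string; apiBaseUrl?: string }) => (
  <img
    src={`${apiBaseUrl}/cameras/${cameraName}/stream`}
    alt={`${cameraName} live stream`}
    style={{ width: '100%' }}
  />
);
```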
|
||||
|
||||
### Base URL Configuration:
|
||||
- **Production**: `http://vision:8000` (requires hostname setup)
|
||||
- **Development**: `http://localhost:8000` (local testing)
|
||||
- **Custom IP**: `http://192.168.1.100:8000` (replace with actual IP)
|
||||
- **Custom hostname**: Configure DNS or /etc/hosts as needed
|
||||
|
||||
## 🔌 API Endpoints Reference
|
||||
|
||||
### 1. Get Camera List
|
||||
@@ -71,7 +77,7 @@ GET /cameras/{camera_name}/stream
|
||||
```jsx
|
||||
import React, { useState, useEffect } from 'react';
|
||||
|
||||
const CameraStream = ({ cameraName, apiBaseUrl = 'http://localhost:8000' }) => {
|
||||
const CameraStream = ({ cameraName, apiBaseUrl = 'http://vision:8000' }) => {
|
||||
const [isStreaming, setIsStreaming] = useState(false);
|
||||
const [error, setError] = useState(null);
|
||||
const [loading, setLoading] = useState(false);
|
||||
@@ -221,7 +227,7 @@ export default CameraStream;
|
||||
import React, { useState, useEffect } from 'react';
|
||||
import CameraStream from './CameraStream';
|
||||
|
||||
const CameraDashboard = ({ apiBaseUrl = 'http://localhost:8000' }) => {
|
||||
const CameraDashboard = ({ apiBaseUrl = 'http://vision:8000' }) => {
|
||||
const [cameras, setCameras] = useState({});
|
||||
const [loading, setLoading] = useState(true);
|
||||
const [error, setError] = useState(null);
|
||||
@@ -309,7 +315,7 @@ export default CameraDashboard;
|
||||
```jsx
|
||||
import { useState, useEffect, useCallback } from 'react';
|
||||
|
||||
const useCameraStream = (cameraName, apiBaseUrl = 'http://localhost:8000') => {
|
||||
const useCameraStream = (cameraName, apiBaseUrl = 'http://vision:8000') => {
|
||||
const [isStreaming, setIsStreaming] = useState(false);
|
||||
const [loading, setLoading] = useState(false);
|
||||
const [error, setError] = useState(null);
|
||||
@@ -444,20 +450,43 @@ const CameraStreamTailwind = ({ cameraName }) => {
|
||||
|
||||
### Environment Variables (.env)
|
||||
```env
|
||||
REACT_APP_CAMERA_API_URL=http://localhost:8000
|
||||
# Production configuration (using 'vision' hostname)
|
||||
REACT_APP_CAMERA_API_URL=http://vision:8000
|
||||
REACT_APP_STREAM_REFRESH_INTERVAL=30000
|
||||
REACT_APP_STREAM_TIMEOUT=10000
|
||||
|
||||
# Development configuration (using localhost)
|
||||
# REACT_APP_CAMERA_API_URL=http://localhost:8000
|
||||
|
||||
# Custom IP configuration
|
||||
# REACT_APP_CAMERA_API_URL=http://192.168.1.100:8000
|
||||
```
|
||||
|
||||
### API Configuration
|
||||
```javascript
|
||||
const apiConfig = {
|
||||
baseUrl: process.env.REACT_APP_CAMERA_API_URL || 'http://localhost:8000',
|
||||
baseUrl: process.env.REACT_APP_CAMERA_API_URL || 'http://vision:8000',
|
||||
timeout: parseInt(process.env.REACT_APP_STREAM_TIMEOUT) || 10000,
|
||||
refreshInterval: parseInt(process.env.REACT_APP_STREAM_REFRESH_INTERVAL) || 30000,
|
||||
};
|
||||
```
|
||||
|
||||
### Hostname Setup Guide
|
||||
```bash
|
||||
# Option 1: Add to /etc/hosts (Linux/Mac)
|
||||
echo "127.0.0.1 vision" | sudo tee -a /etc/hosts
|
||||
|
||||
# Option 2: Add to hosts file (Windows)
|
||||
# Add to C:\Windows\System32\drivers\etc\hosts:
|
||||
# 127.0.0.1 vision
|
||||
|
||||
# Option 3: Configure DNS
|
||||
# Point 'vision' hostname to your server's IP address
|
||||
|
||||
# Verify hostname resolution
|
||||
ping vision
|
||||
```
|
||||
|
||||
## 🚨 Important Implementation Notes
|
||||
|
||||
### 1. MJPEG Stream Handling
|
||||
API Documentations/AUTO_RECORDING_FEATURE_GUIDE.md (new file, 260 lines)
@@ -0,0 +1,260 @@
|
||||
# Auto-Recording Feature Implementation Guide
|
||||
|
||||
## 🎯 Overview for React App Development
|
||||
|
||||
This document provides a comprehensive guide for updating the React application to support the new auto-recording feature that was added to the USDA Vision Camera System.
|
||||
|
||||
## 📋 What Changed in the Backend
|
||||
|
||||
### New API Endpoints Added
|
||||
|
||||
1. **Enable Auto-Recording**
|
||||
```http
|
||||
POST /cameras/{camera_name}/auto-recording/enable
|
||||
Response: AutoRecordingConfigResponse
|
||||
```
|
||||
|
||||
2. **Disable Auto-Recording**
|
||||
```http
|
||||
POST /cameras/{camera_name}/auto-recording/disable
|
||||
Response: AutoRecordingConfigResponse
|
||||
```
|
||||
|
||||
3. **Get Auto-Recording Status**
|
||||
```http
|
||||
GET /auto-recording/status
|
||||
Response: AutoRecordingStatusResponse
|
||||
```
|
||||
|
||||
### Updated API Responses
|
||||
|
||||
#### CameraStatusResponse (Updated)
|
||||
```typescript
|
||||
interface CameraStatusResponse {
|
||||
name: string;
|
||||
status: string;
|
||||
is_recording: boolean;
|
||||
last_checked: string;
|
||||
last_error?: string;
|
||||
device_info?: any;
|
||||
current_recording_file?: string;
|
||||
recording_start_time?: string;
|
||||
|
||||
// NEW AUTO-RECORDING FIELDS
|
||||
auto_recording_enabled: boolean;
|
||||
auto_recording_active: boolean;
|
||||
auto_recording_failure_count: number;
|
||||
auto_recording_last_attempt?: string;
|
||||
auto_recording_last_error?: string;
|
||||
}
|
||||
```
|
||||
|
||||
#### CameraConfigResponse (Updated)
|
||||
```typescript
|
||||
interface CameraConfigResponse {
|
||||
name: string;
|
||||
machine_topic: string;
|
||||
storage_path: string;
|
||||
enabled: boolean;
|
||||
|
||||
// NEW AUTO-RECORDING CONFIG FIELDS
|
||||
auto_start_recording_enabled: boolean;
|
||||
auto_recording_max_retries: number;
|
||||
auto_recording_retry_delay_seconds: number;
|
||||
|
||||
// ... existing fields (exposure_ms, gain, etc.)
|
||||
}
|
||||
```
|
||||
|
||||
#### New Response Types
|
||||
```typescript
|
||||
interface AutoRecordingConfigResponse {
|
||||
success: boolean;
|
||||
message: string;
|
||||
camera_name: string;
|
||||
enabled: boolean;
|
||||
}
|
||||
|
||||
interface AutoRecordingStatusResponse {
|
||||
running: boolean;
|
||||
auto_recording_enabled: boolean;
|
||||
retry_queue: Record<string, any>;
|
||||
enabled_cameras: string[];
|
||||
}
|
||||
```
|
||||
|
||||
## 🎨 React App UI Requirements
|
||||
|
||||
### 1. Camera Status Display Updates
|
||||
|
||||
**Add to Camera Cards/Components:**
|
||||
- Auto-recording enabled/disabled indicator
|
||||
- Auto-recording active status (when machine is ON and auto-recording)
|
||||
- Failure count display (if > 0)
|
||||
- Last auto-recording error (if any)
|
||||
- Visual distinction between manual and auto-recording
|
||||
|
||||
**Example UI Elements:**
|
||||
```jsx
|
||||
// Auto-recording status badge
|
||||
{camera.auto_recording_enabled && (
|
||||
<Badge variant={camera.auto_recording_active ? "success" : "secondary"}>
|
||||
Auto-Recording {camera.auto_recording_active ? "Active" : "Enabled"}
|
||||
</Badge>
|
||||
)}
|
||||
|
||||
// Failure indicator
|
||||
{camera.auto_recording_failure_count > 0 && (
|
||||
<Alert variant="warning">
|
||||
Auto-recording failures: {camera.auto_recording_failure_count}
|
||||
</Alert>
|
||||
)}
|
||||
```
|
||||
|
||||
### 2. Auto-Recording Controls
|
||||
|
||||
**Add Toggle Controls:**
|
||||
- Enable/Disable auto-recording per camera
|
||||
- Global auto-recording status display
|
||||
- Retry queue monitoring
|
||||
|
||||
**Example Control Component:**
|
||||
```jsx
|
||||
const AutoRecordingToggle = ({ camera, onToggle }) => {
|
||||
const handleToggle = async () => {
|
||||
const endpoint = camera.auto_recording_enabled ? 'disable' : 'enable';
|
||||
await fetch(`/cameras/${camera.name}/auto-recording/${endpoint}`, {
|
||||
method: 'POST'
|
||||
});
|
||||
onToggle();
|
||||
};
|
||||
|
||||
return (
|
||||
<Switch
|
||||
checked={camera.auto_recording_enabled}
|
||||
onChange={handleToggle}
|
||||
label="Auto-Recording"
|
||||
/>
|
||||
);
|
||||
};
|
||||
```
|
||||
|
||||
### 3. Machine State Integration
|
||||
|
||||
**Display Machine Status:**
|
||||
- Show which machine each camera monitors
|
||||
- Display current machine state (ON/OFF)
|
||||
- Show correlation between machine state and recording status
|
||||
|
||||
**Camera-Machine Mapping:**
|
||||
- Camera 1 → Vibratory Conveyor (conveyor/cracker cam)
|
||||
- Camera 2 → Blower Separator (blower separator)
|
||||
|
||||
### 4. Auto-Recording Dashboard
|
||||
|
||||
**Create New Dashboard Section:**
|
||||
- Overall auto-recording system status
|
||||
- List of cameras with auto-recording enabled
|
||||
- Active retry queue display
|
||||
- Recent auto-recording events/logs
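
A compact sketch of what this section could render, using the `AutoRecordingStatusResponse` fields defined above (markup and styling are placeholders; wire it to your own polling or fetching logic):

```tsx
import React from 'react';

// Minimal dashboard panel for system-wide auto-recording status.
const AutoRecordingDashboard = ({ status }: { status: AutoRecordingStatusResponse }) => (
  <section>
    <h3>Auto-Recording {status.running ? 'Running' : 'Stopped'}</h3>
    <p>Globally enabled: {status.auto_recording_enabled ? 'Yes' : 'No'}</p>
    <p>Enabled cameras: {status.enabled_cameras.join(', ') || 'None'}</p>
    <p>Retry queue: {Object.keys(status.retry_queue).length} camera(s) pending retry</p>
  </section>
);
```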
|
||||
|
||||
## 🔧 Implementation Steps for React App
|
||||
|
||||
### Step 1: Update TypeScript Interfaces
|
||||
```typescript
|
||||
// Update existing interfaces in your types file
|
||||
// Add new interfaces for auto-recording responses
|
||||
```
|
||||
|
||||
### Step 2: Update API Service Functions
|
||||
```typescript
|
||||
// Add new API calls
|
||||
export const enableAutoRecording = (cameraName: string) =>
|
||||
fetch(`/cameras/${cameraName}/auto-recording/enable`, { method: 'POST' });
|
||||
|
||||
export const disableAutoRecording = (cameraName: string) =>
|
||||
fetch(`/cameras/${cameraName}/auto-recording/disable`, { method: 'POST' });
|
||||
|
||||
export const getAutoRecordingStatus = () =>
|
||||
fetch('/auto-recording/status').then(res => res.json());
|
||||
```
|
||||
|
||||
### Step 3: Update Camera Components
|
||||
- Add auto-recording status indicators
|
||||
- Add enable/disable controls
|
||||
- Update recording status display to distinguish auto vs manual
|
||||
|
||||
### Step 4: Create Auto-Recording Management Panel
|
||||
- System-wide auto-recording status
|
||||
- Per-camera auto-recording controls
|
||||
- Retry queue monitoring
|
||||
- Error reporting and alerts
|
||||
|
||||
### Step 5: Update State Management
|
||||
```typescript
|
||||
// Add auto-recording state to your store/context
|
||||
interface AppState {
|
||||
cameras: CameraStatusResponse[];
|
||||
autoRecordingStatus: AutoRecordingStatusResponse;
|
||||
// ... existing state
|
||||
}
|
||||
```
|
||||
|
||||
## 🎯 Key User Experience Considerations
|
||||
|
||||
### Visual Indicators
|
||||
1. **Recording Status Hierarchy:**
|
||||
- Manual Recording (highest priority - red/prominent)
|
||||
- Auto-Recording Active (green/secondary)
|
||||
- Auto-Recording Enabled but Inactive (blue/subtle)
|
||||
- Auto-Recording Disabled (gray/muted)
|
||||
|
||||
2. **Machine State Correlation:**
|
||||
- Show machine ON/OFF status next to camera
|
||||
- Indicate when auto-recording should be active
|
||||
- Alert if machine is ON but auto-recording failed
|
||||
|
||||
3. **Error Handling:**
|
||||
- Clear error messages for auto-recording failures
|
||||
- Retry count display
|
||||
- Last attempt timestamp
|
||||
- Quick retry/reset options
|
||||
|
||||
### User Controls
|
||||
1. **Quick Actions:**
|
||||
- Toggle auto-recording per camera
|
||||
- Force retry failed auto-recording
|
||||
- Override auto-recording (manual control)
|
||||
|
||||
2. **Configuration:**
|
||||
- Adjust retry settings
|
||||
- Change machine-camera mappings
|
||||
- Set recording parameters for auto-recording
|
||||
|
||||
## 🚨 Important Notes
|
||||
|
||||
### Behavior Rules
|
||||
1. **Manual Override:** Manual recording always takes precedence over auto-recording
|
||||
2. **Non-Blocking:** Auto-recording status checks don't interfere with camera operation
|
||||
3. **Machine Correlation:** Auto-recording only activates when the associated machine turns ON
|
||||
4. **Failure Handling:** Failed auto-recording attempts are retried automatically with exponential backoff
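
For surfacing retries in the UI, a hypothetical helper can estimate the next retry delay from the fields already exposed by the API (`auto_recording_failure_count`, `auto_recording_max_retries`, `auto_recording_retry_delay_seconds`). This assumes a simple exponential schedule and may not match the backend's exact algorithm, so treat it as display-only:

```typescript
// Rough estimate of the backend's next retry delay, for display purposes only.
// Assumes delay = base * 2^(failure_count - 1); verify against the backend implementation.
function estimateNextRetryDelaySeconds(
  baseDelaySeconds: number,
  failureCount: number,
  maxRetries: number
): number | null {
  if (failureCount <= 0 || failureCount >= maxRetries) return null; // nothing pending or retries exhausted
  return baseDelaySeconds * Math.pow(2, failureCount - 1);
}
```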
|
||||
|
||||
### API Polling Recommendations
|
||||
- Poll camera status every 2-3 seconds for real-time updates
|
||||
- Poll auto-recording status every 5-10 seconds
|
||||
- Use WebSocket connections if available for real-time machine state updates
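
A minimal polling hook following these intervals is sketched below. It assumes the `/cameras` list endpoint returns an object keyed by camera name and uses the response types defined above; real code should also handle request aborts and tab visibility:

```typescript
import { useEffect, useState } from 'react';

// Polls camera status every 3 seconds and auto-recording status every 10 seconds.
function useAutoRecordingPolling(apiBaseUrl = 'http://localhost:8000') {
  const [cameras, setCameras] = useState<Record<string, CameraStatusResponse>>({});
  const [autoStatus, setAutoStatus] = useState<AutoRecordingStatusResponse | null>(null);

  useEffect(() => {
    const pollCameras = async () => {
      const res = await fetch(`${apiBaseUrl}/cameras`);
      if (res.ok) setCameras(await res.json());
    };
    const pollAutoStatus = async () => {
      const res = await fetch(`${apiBaseUrl}/auto-recording/status`);
      if (res.ok) setAutoStatus(await res.json());
    };
    pollCameras();
    pollAutoStatus();
    const cameraTimer = setInterval(pollCameras, 3000);
    const statusTimer = setInterval(pollAutoStatus, 10000);
    return () => {
      clearInterval(cameraTimer);
      clearInterval(statusTimer);
    };
  }, [apiBaseUrl]);

  return { cameras, autoStatus };
}
```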
|
||||
|
||||
## 📱 Mobile Considerations
|
||||
- Auto-recording controls should be easily accessible on mobile
|
||||
- Status indicators should be clear and readable on small screens
|
||||
- Consider collapsible sections for detailed auto-recording information
|
||||
|
||||
## 🔍 Testing Checklist
|
||||
- [ ] Auto-recording toggle works for each camera
|
||||
- [ ] Status updates reflect machine state changes
|
||||
- [ ] Error states are clearly displayed
|
||||
- [ ] Manual recording overrides auto-recording
|
||||
- [ ] Retry mechanism is visible to users
|
||||
- [ ] Mobile interface is functional
|
||||
|
||||
This guide provides everything needed to update the React app to fully support the new auto-recording feature!
|
||||
API Documentations/CAMERA_CONFIG_API.md (new file, 455 lines)
@@ -0,0 +1,455 @@
|
||||
# 🎛️ Camera Configuration API Guide
|
||||
|
||||
This guide explains how to configure camera settings via API endpoints, including all the advanced settings from your config.json.
|
||||
|
||||
## 📋 Configuration Categories
|
||||
|
||||
### ✅ **Real-time Configurable (No Restart Required)**
|
||||
|
||||
These settings can be changed while the camera is active:
|
||||
|
||||
- **Basic**: `exposure_ms`, `gain`, `target_fps`
|
||||
- **Image Quality**: `sharpness`, `contrast`, `saturation`, `gamma`
|
||||
- **Color**: `auto_white_balance`, `color_temperature_preset`
|
||||
- **Advanced**: `anti_flicker_enabled`, `light_frequency`
|
||||
- **HDR**: `hdr_enabled`, `hdr_gain_mode`
|
||||
|
||||
### ⚠️ **Restart Required**
|
||||
|
||||
These settings require camera restart to take effect:
|
||||
|
||||
- **Noise Reduction**: `noise_filter_enabled`, `denoise_3d_enabled`
|
||||
- **System**: `machine_topic`, `storage_path`, `enabled`, `bit_depth`
|
||||
|
||||
### 🤖 **Auto-Recording**
|
||||
|
||||
- **Auto-Recording**: `auto_record_on_machine_start`. When enabled, the camera automatically starts recording when MQTT messages indicate the associated machine has turned on, and stops recording when it turns off
|
||||
|
||||
## 🔌 API Endpoints
|
||||
|
||||
### 1. Get Camera Configuration
|
||||
|
||||
```http
|
||||
GET /cameras/{camera_name}/config
|
||||
```
|
||||
|
||||
**Response:**
|
||||
|
||||
```json
|
||||
{
|
||||
"name": "camera1",
|
||||
"machine_topic": "vibratory_conveyor",
|
||||
"storage_path": "/storage/camera1",
|
||||
"enabled": true,
|
||||
"auto_record_on_machine_start": false,
|
||||
"exposure_ms": 1.0,
|
||||
"gain": 3.5,
|
||||
"target_fps": 0,
|
||||
"sharpness": 120,
|
||||
"contrast": 110,
|
||||
"saturation": 100,
|
||||
"gamma": 100,
|
||||
"noise_filter_enabled": true,
|
||||
"denoise_3d_enabled": false,
|
||||
"auto_white_balance": true,
|
||||
"color_temperature_preset": 0,
|
||||
"anti_flicker_enabled": true,
|
||||
"light_frequency": 1,
|
||||
"bit_depth": 8,
|
||||
"hdr_enabled": false,
|
||||
"hdr_gain_mode": 0
|
||||
}
|
||||
```
|
||||
|
||||
### 2. Update Camera Configuration
|
||||
|
||||
```http
|
||||
PUT /cameras/{camera_name}/config
|
||||
Content-Type: application/json
|
||||
```
|
||||
|
||||
**Request Body (all fields optional):**
|
||||
|
||||
```json
|
||||
{
|
||||
"auto_record_on_machine_start": true,
|
||||
"exposure_ms": 2.0,
|
||||
"gain": 4.0,
|
||||
"target_fps": 10.0,
|
||||
"sharpness": 150,
|
||||
"contrast": 120,
|
||||
"saturation": 110,
|
||||
"gamma": 90,
|
||||
"noise_filter_enabled": true,
|
||||
"denoise_3d_enabled": false,
|
||||
"auto_white_balance": false,
|
||||
"color_temperature_preset": 1,
|
||||
"anti_flicker_enabled": true,
|
||||
"light_frequency": 1,
|
||||
"hdr_enabled": false,
|
||||
"hdr_gain_mode": 0
|
||||
}
|
||||
```
|
||||
|
||||
**Response:**
|
||||
|
||||
```json
|
||||
{
|
||||
"success": true,
|
||||
"message": "Camera camera1 configuration updated",
|
||||
"updated_settings": ["exposure_ms", "gain", "sharpness"]
|
||||
}
|
||||
```
|
||||
|
||||
### 3. Apply Configuration (Restart Camera)
|
||||
|
||||
```http
|
||||
POST /cameras/{camera_name}/apply-config
|
||||
```
|
||||
|
||||
**Response:**
|
||||
|
||||
```json
|
||||
{
|
||||
"success": true,
|
||||
"message": "Configuration applied to camera camera1"
|
||||
}
|
||||
```
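
A sketch that combines the two calls: send a `PUT` with the changed settings, then trigger `apply-config` only when a restart-required setting (per the categories above) was included. The set of restart-required keys mirrors this guide and should be kept in sync with the backend:

```typescript
// Settings that only take effect after POST /cameras/{name}/apply-config (see categories above).
const RESTART_REQUIRED = new Set([
  'noise_filter_enabled',
  'denoise_3d_enabled',
  'machine_topic',
  'storage_path',
  'enabled',
  'bit_depth',
]);

async function updateCameraConfig(
  cameraName: string,
  updates: Record<string, unknown>,
  apiBaseUrl = 'http://localhost:8000'
): Promise<void> {
  const res = await fetch(`${apiBaseUrl}/cameras/${cameraName}/config`, {
    method: 'PUT',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(updates),
  });
  if (!res.ok) throw new Error(`Config update failed: ${res.status}`);

  // Restart the camera only if at least one restart-required setting changed.
  if (Object.keys(updates).some((key) => RESTART_REQUIRED.has(key))) {
    await fetch(`${apiBaseUrl}/cameras/${cameraName}/apply-config`, { method: 'POST' });
  }
}
```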
|
||||
|
||||
## 📊 Setting Ranges and Descriptions
|
||||
|
||||
### Basic Settings
|
||||
|
||||
| Setting | Range | Default | Description |
|
||||
|---------|-------|---------|-------------|
|
||||
| `exposure_ms` | 0.1 - 1000.0 | 1.0 | Exposure time in milliseconds |
|
||||
| `gain` | 0.0 - 20.0 | 3.5 | Camera gain multiplier |
|
||||
| `target_fps` | 0.0 - 120.0 | 0 | Target FPS (0 = maximum) |
|
||||
|
||||
### Image Quality Settings
|
||||
|
||||
| Setting | Range | Default | Description |
|
||||
|---------|-------|---------|-------------|
|
||||
| `sharpness` | 0 - 200 | 100 | Image sharpness (100 = no sharpening) |
|
||||
| `contrast` | 0 - 200 | 100 | Image contrast (100 = normal) |
|
||||
| `saturation` | 0 - 200 | 100 | Color saturation (color cameras only) |
|
||||
| `gamma` | 0 - 300 | 100 | Gamma correction (100 = normal) |
|
||||
|
||||
### Color Settings
|
||||
|
||||
| Setting | Values | Default | Description |
|
||||
|---------|--------|---------|-------------|
|
||||
| `auto_white_balance` | true/false | true | Automatic white balance |
|
||||
| `color_temperature_preset` | 0-10 | 0 | Color temperature preset (0=auto) |
|
||||
|
||||
### Advanced Settings
|
||||
|
||||
| Setting | Values | Default | Description |
|
||||
|---------|--------|---------|-------------|
|
||||
| `anti_flicker_enabled` | true/false | true | Reduce artificial lighting flicker |
|
||||
| `light_frequency` | 0/1 | 1 | Light frequency (0=50Hz, 1=60Hz) |
|
||||
| `noise_filter_enabled` | true/false | true | Basic noise filtering |
|
||||
| `denoise_3d_enabled` | true/false | false | Advanced 3D denoising |
|
||||
|
||||
### HDR Settings
|
||||
|
||||
| Setting | Values | Default | Description |
|
||||
|---------|--------|---------|-------------|
|
||||
| `hdr_enabled` | true/false | false | High Dynamic Range |
|
||||
| `hdr_gain_mode` | 0-3 | 0 | HDR processing mode |
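
Client-side range checks can catch mistakes before the API returns a 422. A sketch using the numeric ranges from the tables above (not exhaustive, shown for illustration):

```typescript
// Valid ranges taken from the tables above (not exhaustive).
const CONFIG_RANGES: Record<string, [number, number]> = {
  exposure_ms: [0.1, 1000.0],
  gain: [0.0, 20.0],
  target_fps: [0.0, 120.0],
  sharpness: [0, 200],
  contrast: [0, 200],
  saturation: [0, 200],
  gamma: [0, 300],
};

function validateConfigUpdate(updates: Record<string, number>): string[] {
  const errors: string[] = [];
  for (const [key, value] of Object.entries(updates)) {
    const range = CONFIG_RANGES[key];
    if (range && (value < range[0] || value > range[1])) {
      errors.push(`${key} must be between ${range[0]} and ${range[1]}`);
    }
  }
  return errors;
}
```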
|
||||
|
||||
## 🚀 Usage Examples
|
||||
|
||||
### Example 1: Adjust Exposure and Gain
|
||||
|
||||
```bash
|
||||
curl -X PUT http://localhost:8000/cameras/camera1/config \
|
||||
-H "Content-Type: application/json" \
|
||||
-d '{
|
||||
"exposure_ms": 1.5,
|
||||
"gain": 4.0
|
||||
}'
|
||||
```
|
||||
|
||||
### Example 2: Improve Image Quality
|
||||
|
||||
```bash
|
||||
curl -X PUT http://localhost:8000/cameras/camera1/config \
|
||||
-H "Content-Type: application/json" \
|
||||
-d '{
|
||||
"sharpness": 150,
|
||||
"contrast": 120,
|
||||
"gamma": 90
|
||||
}'
|
||||
```
|
||||
|
||||
### Example 3: Configure for Indoor Lighting
|
||||
|
||||
```bash
|
||||
curl -X PUT http://localhost:8000/cameras/camera1/config \
|
||||
-H "Content-Type: application/json" \
|
||||
-d '{
|
||||
"anti_flicker_enabled": true,
|
||||
"light_frequency": 1,
|
||||
"auto_white_balance": false,
|
||||
"color_temperature_preset": 2
|
||||
}'
|
||||
```
|
||||
|
||||
### Example 4: Enable HDR Mode
|
||||
|
||||
```bash
|
||||
curl -X PUT http://localhost:8000/cameras/camera1/config \
|
||||
-H "Content-Type: application/json" \
|
||||
-d '{
|
||||
"hdr_enabled": true,
|
||||
"hdr_gain_mode": 1
|
||||
}'
|
||||
```
|
||||
|
||||
## ⚛️ React Integration Examples
|
||||
|
||||
### Camera Configuration Component
|
||||
|
||||
```jsx
|
||||
import React, { useState, useEffect } from 'react';
|
||||
|
||||
const CameraConfig = ({ cameraName, apiBaseUrl = 'http://localhost:8000' }) => {
|
||||
const [config, setConfig] = useState(null);
|
||||
const [loading, setLoading] = useState(false);
|
||||
const [error, setError] = useState(null);
|
||||
|
||||
// Load current configuration
|
||||
useEffect(() => {
|
||||
fetchConfig();
|
||||
}, [cameraName]);
|
||||
|
||||
const fetchConfig = async () => {
|
||||
try {
|
||||
const response = await fetch(`${apiBaseUrl}/cameras/${cameraName}/config`);
|
||||
if (response.ok) {
|
||||
const data = await response.json();
|
||||
setConfig(data);
|
||||
} else {
|
||||
setError('Failed to load configuration');
|
||||
}
|
||||
} catch (err) {
|
||||
setError(`Error: ${err.message}`);
|
||||
}
|
||||
};
|
||||
|
||||
const updateConfig = async (updates) => {
|
||||
setLoading(true);
|
||||
try {
|
||||
const response = await fetch(`${apiBaseUrl}/cameras/${cameraName}/config`, {
|
||||
method: 'PUT',
|
||||
headers: { 'Content-Type': 'application/json' },
|
||||
body: JSON.stringify(updates)
|
||||
});
|
||||
|
||||
if (response.ok) {
|
||||
const result = await response.json();
|
||||
console.log('Updated settings:', result.updated_settings);
|
||||
await fetchConfig(); // Reload configuration
|
||||
} else {
|
||||
const error = await response.json();
|
||||
setError(error.detail || 'Update failed');
|
||||
}
|
||||
} catch (err) {
|
||||
setError(`Error: ${err.message}`);
|
||||
} finally {
|
||||
setLoading(false);
|
||||
}
|
||||
};
|
||||
|
||||
const handleSliderChange = (setting, value) => {
|
||||
updateConfig({ [setting]: value });
|
||||
};
|
||||
|
||||
if (!config) return <div>Loading configuration...</div>;
|
||||
|
||||
return (
|
||||
<div className="camera-config">
|
||||
<h3>Camera Configuration: {cameraName}</h3>
|
||||
|
||||
{/* Basic Settings */}
|
||||
<div className="config-section">
|
||||
<h4>Basic Settings</h4>
|
||||
|
||||
<div className="setting">
|
||||
<label>Exposure (ms): {config.exposure_ms}</label>
|
||||
<input
|
||||
type="range"
|
||||
min="0.1"
|
||||
max="10"
|
||||
step="0.1"
|
||||
value={config.exposure_ms}
|
||||
onChange={(e) => handleSliderChange('exposure_ms', parseFloat(e.target.value))}
|
||||
/>
|
||||
</div>
|
||||
|
||||
<div className="setting">
|
||||
<label>Gain: {config.gain}</label>
|
||||
<input
|
||||
type="range"
|
||||
min="0"
|
||||
max="10"
|
||||
step="0.1"
|
||||
value={config.gain}
|
||||
onChange={(e) => handleSliderChange('gain', parseFloat(e.target.value))}
|
||||
/>
|
||||
</div>
|
||||
|
||||
<div className="setting">
|
||||
<label>Target FPS: {config.target_fps}</label>
|
||||
<input
|
||||
type="range"
|
||||
min="0"
|
||||
max="30"
|
||||
step="1"
|
||||
value={config.target_fps}
|
||||
onChange={(e) => handleSliderChange('target_fps', parseInt(e.target.value))}
|
||||
/>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
{/* Image Quality Settings */}
|
||||
<div className="config-section">
|
||||
<h4>Image Quality</h4>
|
||||
|
||||
<div className="setting">
|
||||
<label>Sharpness: {config.sharpness}</label>
|
||||
<input
|
||||
type="range"
|
||||
min="0"
|
||||
max="200"
|
||||
value={config.sharpness}
|
||||
onChange={(e) => handleSliderChange('sharpness', parseInt(e.target.value))}
|
||||
/>
|
||||
</div>
|
||||
|
||||
<div className="setting">
|
||||
<label>Contrast: {config.contrast}</label>
|
||||
<input
|
||||
type="range"
|
||||
min="0"
|
||||
max="200"
|
||||
value={config.contrast}
|
||||
onChange={(e) => handleSliderChange('contrast', parseInt(e.target.value))}
|
||||
/>
|
||||
</div>
|
||||
|
||||
<div className="setting">
|
||||
<label>Gamma: {config.gamma}</label>
|
||||
<input
|
||||
type="range"
|
||||
min="0"
|
||||
max="300"
|
||||
value={config.gamma}
|
||||
onChange={(e) => handleSliderChange('gamma', parseInt(e.target.value))}
|
||||
/>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
{/* Advanced Settings */}
|
||||
<div className="config-section">
|
||||
<h4>Advanced Settings</h4>
|
||||
|
||||
<div className="setting">
|
||||
<label>
|
||||
<input
|
||||
type="checkbox"
|
||||
checked={config.anti_flicker_enabled}
|
||||
onChange={(e) => updateConfig({ anti_flicker_enabled: e.target.checked })}
|
||||
/>
|
||||
Anti-flicker Enabled
|
||||
</label>
|
||||
</div>
|
||||
|
||||
<div className="setting">
|
||||
<label>
|
||||
<input
|
||||
type="checkbox"
|
||||
checked={config.auto_white_balance}
|
||||
onChange={(e) => updateConfig({ auto_white_balance: e.target.checked })}
|
||||
/>
|
||||
Auto White Balance
|
||||
</label>
|
||||
</div>
|
||||
|
||||
<div className="setting">
|
||||
<label>
|
||||
<input
|
||||
type="checkbox"
|
||||
checked={config.hdr_enabled}
|
||||
onChange={(e) => updateConfig({ hdr_enabled: e.target.checked })}
|
||||
/>
|
||||
HDR Enabled
|
||||
</label>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
{error && (
|
||||
<div className="error" style={{ color: 'red', marginTop: '10px' }}>
|
||||
{error}
|
||||
</div>
|
||||
)}
|
||||
|
||||
{loading && <div>Updating configuration...</div>}
|
||||
</div>
|
||||
);
|
||||
};
|
||||
|
||||
export default CameraConfig;
|
||||
```
|
||||
|
||||
## 🔄 Configuration Workflow
|
||||
|
||||
### 1. Real-time Adjustments
|
||||
|
||||
For settings that don't require restart:
|
||||
|
||||
```bash
|
||||
# Update settings
|
||||
curl -X PUT /cameras/camera1/config -d '{"exposure_ms": 2.0}'
|
||||
|
||||
# Settings take effect immediately
|
||||
# Continue recording/streaming without interruption
|
||||
```
|
||||
|
||||
### 2. Settings Requiring Restart
|
||||
|
||||
For noise reduction and system settings:
|
||||
|
||||
```bash
|
||||
# Update settings
|
||||
curl -X PUT /cameras/camera1/config -d '{"noise_filter_enabled": false}'
|
||||
|
||||
# Apply configuration (restarts camera)
|
||||
curl -X POST /cameras/camera1/apply-config
|
||||
|
||||
# Camera reinitializes with new settings
|
||||
```
|
||||
|
||||
## 🚨 Important Notes
|
||||
|
||||
### Camera State During Updates
|
||||
|
||||
- **Real-time settings**: Applied immediately, no interruption
|
||||
- **Restart-required settings**: Saved to config, applied on next restart
|
||||
- **Recording**: Continues during real-time updates
|
||||
- **Streaming**: Continues during real-time updates
|
||||
|
||||
### Error Handling
|
||||
|
||||
- Invalid ranges return HTTP 422 with validation errors
|
||||
- Camera not found returns HTTP 404
|
||||
- SDK errors are logged and return HTTP 500
|
||||
|
||||
### Performance Impact
|
||||
|
||||
- **Image quality settings**: Minimal performance impact
|
||||
- **Noise reduction**: May reduce FPS when enabled
|
||||
- **HDR**: Significant processing overhead when enabled
|
||||
|
||||
This comprehensive API allows you to control all camera settings programmatically, making it perfect for integration with React dashboards or automated optimization systems!
|
||||
API Documentations/README.md (new file, 870 lines)
@@ -0,0 +1,870 @@
|
||||
# USDA Vision Camera System
|
||||
|
||||
A comprehensive system for monitoring machines via MQTT and automatically recording video from GigE cameras when machines are active. Designed for Atlanta, Georgia operations with proper timezone synchronization.
|
||||
|
||||
## 🎯 Overview
|
||||
|
||||
This system integrates MQTT machine monitoring with automated video recording from GigE cameras. When a machine turns on (detected via MQTT), the system automatically starts recording from the associated camera. When the machine turns off, recording stops and the video is saved with an Atlanta timezone timestamp.
|
||||
|
||||
### Key Features
|
||||
|
||||
- **🔄 MQTT Integration**: Listens to multiple machine state topics
|
||||
- **📹 Automatic Recording**: Starts/stops recording based on machine states
|
||||
- **📷 GigE Camera Support**: Uses camera SDK library (mvsdk) for camera control
|
||||
- **⚡ Multi-threading**: Concurrent MQTT listening, camera monitoring, and recording
|
||||
- **🌐 REST API**: FastAPI server for dashboard integration
|
||||
- **📡 WebSocket Support**: Real-time status updates
|
||||
- **💾 Storage Management**: Organized file storage with cleanup capabilities
|
||||
- **📝 Comprehensive Logging**: Detailed logging with rotation and error tracking
|
||||
- **⚙️ Configuration Management**: JSON-based configuration system
|
||||
- **🕐 Timezone Sync**: Proper time synchronization for Atlanta, Georgia
|
||||
|
||||
## 📁 Project Structure
|
||||
|
||||
```
|
||||
USDA-Vision-Cameras/
|
||||
├── README.md # Main documentation (this file)
|
||||
├── main.py # System entry point
|
||||
├── config.json # System configuration
|
||||
├── requirements.txt # Python dependencies
|
||||
├── pyproject.toml # UV package configuration
|
||||
├── start_system.sh # Startup script
|
||||
├── setup_timezone.sh # Time sync setup
|
||||
├── camera_preview.html # Web camera preview interface
|
||||
├── usda_vision_system/ # Main application
|
||||
│ ├── core/ # Core functionality
|
||||
│ ├── mqtt/ # MQTT integration
|
||||
│ ├── camera/ # Camera management
|
||||
│ ├── storage/ # File management
|
||||
│ ├── api/ # REST API server
|
||||
│ └── main.py # Application coordinator
|
||||
├── camera_sdk/ # GigE camera SDK library
|
||||
├── tests/ # Organized test files
|
||||
│ ├── api/ # API-related tests
|
||||
│ ├── camera/ # Camera functionality tests
|
||||
│ ├── core/ # Core system tests
|
||||
│ ├── mqtt/ # MQTT integration tests
|
||||
│ ├── recording/ # Recording feature tests
|
||||
│ ├── storage/ # Storage management tests
|
||||
│ ├── integration/ # System integration tests
|
||||
│ └── legacy_tests/ # Archived development files
|
||||
├── docs/ # Organized documentation
|
||||
│ ├── api/ # API documentation
|
||||
│ ├── features/ # Feature-specific guides
|
||||
│ ├── guides/ # User and setup guides
|
||||
│ └── legacy/ # Legacy documentation
|
||||
├── ai_agent/ # AI agent resources
|
||||
│ ├── guides/ # AI-specific instructions
|
||||
│ ├── examples/ # Demo scripts and notebooks
|
||||
│ └── references/ # API references and types
|
||||
├── Camera/ # Camera data directory
|
||||
└── storage/ # Recording storage (created at runtime)
|
||||
├── camera1/ # Camera 1 recordings
|
||||
└── camera2/ # Camera 2 recordings
|
||||
```
|
||||
|
||||
## 🏗️ Architecture
|
||||
|
||||
```
|
||||
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
|
||||
│ MQTT Broker │ │ GigE Camera │ │ Dashboard │
|
||||
│ │ │ │ │ (React) │
|
||||
└─────────┬───────┘ └─────────┬───────┘ └─────────┬───────┘
|
||||
│ │ │
|
||||
│ Machine States │ Video Streams │ API Calls
|
||||
│ │ │
|
||||
┌─────────▼──────────────────────▼──────────────────────▼───────┐
|
||||
│ USDA Vision Camera System │
|
||||
├───────────────────────────────────────────────────────────────┤
|
||||
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
|
||||
│ │ MQTT Client │ │ Camera │ │ API Server │ │
|
||||
│ │ │ │ Manager │ │ │ │
|
||||
│ └─────────────┘ └─────────────┘ └─────────────┘ │
|
||||
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
|
||||
│ │ State │ │ Storage │ │ Event │ │
|
||||
│ │ Manager │ │ Manager │ │ System │ │
|
||||
│ └─────────────┘ └─────────────┘ └─────────────┘ │
|
||||
└───────────────────────────────────────────────────────────────┘
|
||||
```
|
||||
|
||||
## 📋 Prerequisites
|
||||
|
||||
### Hardware Requirements
|
||||
- GigE cameras compatible with camera SDK library
|
||||
- Network connection to MQTT broker
|
||||
- Sufficient storage space for video recordings
|
||||
|
||||
### Software Requirements
|
||||
- **Python 3.11+**
|
||||
- **uv package manager** (recommended) or pip
|
||||
- **MQTT broker** (e.g., Mosquitto, Home Assistant)
|
||||
- **Linux system** (tested on Ubuntu/Debian)
|
||||
|
||||
### Network Requirements
|
||||
- Access to MQTT broker
|
||||
- GigE cameras on network
|
||||
- Internet access for time synchronization (optional but recommended)
|
||||
|
||||
## 🚀 Installation
|
||||
|
||||
### 1. Clone the Repository
|
||||
```bash
|
||||
git clone https://github.com/your-username/USDA-Vision-Cameras.git
|
||||
cd USDA-Vision-Cameras
|
||||
```
|
||||
|
||||
### 2. Install Dependencies
|
||||
Using uv (recommended):
|
||||
```bash
|
||||
# Install uv if not already installed
|
||||
curl -LsSf https://astral.sh/uv/install.sh | sh
|
||||
|
||||
# Install dependencies
|
||||
uv sync
|
||||
```
|
||||
|
||||
Using pip:
|
||||
```bash
|
||||
# Create virtual environment
|
||||
python -m venv .venv
|
||||
source .venv/bin/activate # On Windows: .venv\Scripts\activate
|
||||
|
||||
# Install dependencies
|
||||
pip install -r requirements.txt
|
||||
```
|
||||
|
||||
### 3. Setup GigE Camera Library
|
||||
Ensure the `camera_sdk` directory contains the mvsdk library for your GigE cameras. This should include:
|
||||
- `mvsdk.py` - Python SDK wrapper
|
||||
- Camera driver libraries
|
||||
- Any camera-specific configuration files
|
||||
|
||||
### 4. Configure Storage Directory
|
||||
```bash
|
||||
# Create storage directory (adjust path as needed)
|
||||
mkdir -p ./storage
|
||||
# Or for system-wide storage:
|
||||
# sudo mkdir -p /storage && sudo chown $USER:$USER /storage
|
||||
```
|
||||
|
||||
### 5. Setup Time Synchronization (Recommended)
|
||||
```bash
|
||||
# Run timezone setup for Atlanta, Georgia
|
||||
./setup_timezone.sh
|
||||
```
|
||||
|
||||
### 6. Configure the System
|
||||
Edit `config.json` to match your setup:
|
||||
```json
|
||||
{
|
||||
"mqtt": {
|
||||
"broker_host": "192.168.1.110",
|
||||
"broker_port": 1883,
|
||||
"topics": {
|
||||
"machine1": "vision/machine1/state",
|
||||
"machine2": "vision/machine2/state"
|
||||
}
|
||||
},
|
||||
"cameras": [
|
||||
{
|
||||
"name": "camera1",
|
||||
"machine_topic": "machine1",
|
||||
"storage_path": "./storage/camera1",
|
||||
"enabled": true
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
## 🔧 Configuration
|
||||
|
||||
### MQTT Configuration
|
||||
```json
|
||||
{
|
||||
"mqtt": {
|
||||
"broker_host": "192.168.1.110",
|
||||
"broker_port": 1883,
|
||||
"username": null,
|
||||
"password": null,
|
||||
"topics": {
|
||||
"vibratory_conveyor": "vision/vibratory_conveyor/state",
|
||||
"blower_separator": "vision/blower_separator/state"
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### Camera Configuration
|
||||
```json
|
||||
{
|
||||
"cameras": [
|
||||
{
|
||||
"name": "camera1",
|
||||
"machine_topic": "vibratory_conveyor",
|
||||
"storage_path": "./storage/camera1",
|
||||
"exposure_ms": 1.0,
|
||||
"gain": 3.5,
|
||||
"target_fps": 3.0,
|
||||
"enabled": true
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
### System Configuration
|
||||
```json
|
||||
{
|
||||
"system": {
|
||||
"camera_check_interval_seconds": 2,
|
||||
"log_level": "INFO",
|
||||
"api_host": "0.0.0.0",
|
||||
"api_port": 8000,
|
||||
"enable_api": true,
|
||||
"timezone": "America/New_York"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
## 🎮 Usage
|
||||
|
||||
### Quick Start
|
||||
```bash
|
||||
# Test the system
|
||||
python test_system.py
|
||||
|
||||
# Start the system
|
||||
python main.py
|
||||
|
||||
# Or use the startup script
|
||||
./start_system.sh
|
||||
```
|
||||
|
||||
### Command Line Options
|
||||
```bash
|
||||
# Custom configuration file
|
||||
python main.py --config my_config.json
|
||||
|
||||
# Debug mode
|
||||
python main.py --log-level DEBUG
|
||||
|
||||
# Help
|
||||
python main.py --help
|
||||
```
|
||||
|
||||
### Verify Installation
|
||||
```bash
|
||||
# Run system tests
|
||||
python test_system.py
|
||||
|
||||
# Check time synchronization
|
||||
python check_time.py
|
||||
|
||||
# Test timezone functions
|
||||
python test_timezone.py
|
||||
```
|
||||
|
||||
## 🌐 API Usage
|
||||
|
||||
The system provides a comprehensive REST API for monitoring and control.
|
||||
|
||||
> **📚 Complete API Documentation**: See [docs/API_DOCUMENTATION.md](docs/API_DOCUMENTATION.md) for the full API reference including all endpoints, request/response models, examples, and recent enhancements.
|
||||
>
|
||||
> **⚡ Quick Reference**: See [docs/API_QUICK_REFERENCE.md](docs/API_QUICK_REFERENCE.md) for commonly used endpoints with curl examples.
|
||||
|
||||
### Starting the API Server
|
||||
The API server starts automatically with the main system on port 8000:
|
||||
```bash
|
||||
python main.py
|
||||
# API available at: http://localhost:8000
|
||||
```
|
||||
|
||||
### 🚀 New API Features
|
||||
|
||||
#### Enhanced Recording Control
|
||||
- **Dynamic camera settings**: Set exposure, gain, FPS per recording
|
||||
- **Automatic datetime prefixes**: All filenames get timestamp prefixes
|
||||
- **Auto-recording management**: Enable/disable per camera via API
|
||||
|
||||
#### Advanced Camera Configuration
|
||||
- **Real-time settings**: Update image quality without restart
|
||||
- **Live streaming**: MJPEG streams for web integration
|
||||
- **Recovery operations**: Reconnect, reset, reinitialize cameras
|
||||
|
||||
#### Comprehensive Monitoring
|
||||
- **MQTT event history**: Track machine state changes
|
||||
- **Storage statistics**: Monitor disk usage and file counts
|
||||
- **WebSocket updates**: Real-time system notifications
|
||||
|
||||
### Core Endpoints
|
||||
|
||||
#### System Status
|
||||
```bash
|
||||
# Get overall system status
|
||||
curl http://localhost:8000/system/status
|
||||
|
||||
# Response example:
|
||||
{
|
||||
"system_started": true,
|
||||
"mqtt_connected": true,
|
||||
"machines": {
|
||||
"vibratory_conveyor": {"state": "on", "last_updated": "2025-07-25T21:30:00-04:00"}
|
||||
},
|
||||
"cameras": {
|
||||
"camera1": {"status": "available", "is_recording": true}
|
||||
},
|
||||
"active_recordings": 1,
|
||||
"uptime_seconds": 3600
|
||||
}
|
||||
```
|
||||
|
||||
#### Machine Status
|
||||
```bash
|
||||
# Get all machine states
|
||||
curl http://localhost:8000/machines
|
||||
|
||||
# Response example:
|
||||
{
|
||||
"vibratory_conveyor": {
|
||||
"name": "vibratory_conveyor",
|
||||
"state": "on",
|
||||
"last_updated": "2025-07-25T21:30:00-04:00",
|
||||
"mqtt_topic": "vision/vibratory_conveyor/state"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
#### Camera Status
|
||||
```bash
|
||||
# Get all camera statuses
|
||||
curl http://localhost:8000/cameras
|
||||
|
||||
# Get specific camera status
|
||||
curl http://localhost:8000/cameras/camera1
|
||||
|
||||
# Response example:
|
||||
{
|
||||
"name": "camera1",
|
||||
"status": "available",
|
||||
"is_recording": false,
|
||||
"last_checked": "2025-07-25T21:30:00-04:00",
|
||||
"device_info": {
|
||||
"friendly_name": "Blower-Yield-Cam",
|
||||
"serial_number": "054012620023"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
#### Manual Recording Control
|
||||
```bash
|
||||
# Start recording manually
|
||||
curl -X POST http://localhost:8000/cameras/camera1/start-recording \
|
||||
-H "Content-Type: application/json" \
|
||||
-d '{"camera_name": "camera1", "filename": "manual_test.avi"}'
|
||||
|
||||
# Stop recording manually
|
||||
curl -X POST http://localhost:8000/cameras/camera1/stop-recording
|
||||
|
||||
# Response example:
|
||||
{
|
||||
"success": true,
|
||||
"message": "Recording started for camera1",
|
||||
"filename": "camera1_manual_20250725_213000.avi"
|
||||
}
|
||||
```
|
||||
|
||||
#### Storage Management
|
||||
```bash
|
||||
# Get storage statistics
|
||||
curl http://localhost:8000/storage/stats
|
||||
|
||||
# Get recording files list
|
||||
curl -X POST http://localhost:8000/storage/files \
|
||||
-H "Content-Type: application/json" \
|
||||
-d '{"camera_name": "camera1", "limit": 10}'
|
||||
|
||||
# Cleanup old files
|
||||
curl -X POST http://localhost:8000/storage/cleanup \
|
||||
-H "Content-Type: application/json" \
|
||||
-d '{"max_age_days": 30}'
|
||||
```
|
||||
|
||||
### WebSocket Real-time Updates
|
||||
```javascript
|
||||
// Connect to WebSocket for real-time updates
|
||||
const ws = new WebSocket('ws://localhost:8000/ws');
|
||||
|
||||
ws.onmessage = function(event) {
|
||||
const update = JSON.parse(event.data);
|
||||
console.log('Real-time update:', update);
|
||||
|
||||
// Handle different event types
|
||||
if (update.event_type === 'machine_state_changed') {
|
||||
console.log(`Machine ${update.data.machine_name} is now ${update.data.state}`);
|
||||
} else if (update.event_type === 'recording_started') {
|
||||
console.log(`Recording started: ${update.data.filename}`);
|
||||
}
|
||||
};
|
||||
```
|
||||
|
||||
### Integration Examples
|
||||
|
||||
#### Python Integration
|
||||
```python
|
||||
import requests
|
||||
import json
|
||||
|
||||
# System status check
|
||||
response = requests.get('http://localhost:8000/system/status')
|
||||
status = response.json()
|
||||
print(f"System running: {status['system_started']}")
|
||||
|
||||
# Start recording
|
||||
recording_data = {"camera_name": "camera1"}
|
||||
response = requests.post(
|
||||
'http://localhost:8000/cameras/camera1/start-recording',
|
||||
headers={'Content-Type': 'application/json'},
|
||||
data=json.dumps(recording_data)
|
||||
)
|
||||
result = response.json()
|
||||
print(f"Recording started: {result['success']}")
|
||||
```
|
||||
|
||||
#### JavaScript/React Integration
|
||||
```javascript
|
||||
// React hook for system status
|
||||
import { useState, useEffect } from 'react';
|
||||
|
||||
function useSystemStatus() {
|
||||
const [status, setStatus] = useState(null);
|
||||
|
||||
useEffect(() => {
|
||||
const fetchStatus = async () => {
|
||||
try {
|
||||
const response = await fetch('http://localhost:8000/system/status');
|
||||
const data = await response.json();
|
||||
setStatus(data);
|
||||
} catch (error) {
|
||||
console.error('Failed to fetch status:', error);
|
||||
}
|
||||
};
|
||||
|
||||
fetchStatus();
|
||||
const interval = setInterval(fetchStatus, 5000); // Update every 5 seconds
|
||||
|
||||
return () => clearInterval(interval);
|
||||
}, []);
|
||||
|
||||
return status;
|
||||
}
|
||||
|
||||
// Usage in component
|
||||
function Dashboard() {
|
||||
const systemStatus = useSystemStatus();
|
||||
|
||||
return (
|
||||
<div>
|
||||
<h1>USDA Vision System</h1>
|
||||
{systemStatus && (
|
||||
<div>
|
||||
<p>Status: {systemStatus.system_started ? 'Running' : 'Stopped'}</p>
|
||||
<p>MQTT: {systemStatus.mqtt_connected ? 'Connected' : 'Disconnected'}</p>
|
||||
<p>Active Recordings: {systemStatus.active_recordings}</p>
|
||||
</div>
|
||||
)}
|
||||
</div>
|
||||
);
|
||||
}
|
||||
```
|
||||
|
||||
#### Supabase Integration
|
||||
```javascript
|
||||
// Store recording metadata in Supabase
|
||||
import { createClient } from '@supabase/supabase-js';
|
||||
|
||||
const supabase = createClient(SUPABASE_URL, SUPABASE_ANON_KEY);
|
||||
|
||||
// Function to sync recording data
|
||||
async function syncRecordingData() {
|
||||
try {
|
||||
// Get recordings from vision system
|
||||
const response = await fetch('http://localhost:8000/storage/files', {
|
||||
method: 'POST',
|
||||
headers: { 'Content-Type': 'application/json' },
|
||||
body: JSON.stringify({ limit: 100 })
|
||||
});
|
||||
const { files } = await response.json();
|
||||
|
||||
// Store in Supabase
|
||||
for (const file of files) {
|
||||
await supabase.from('recordings').upsert({
|
||||
filename: file.filename,
|
||||
camera_name: file.camera_name,
|
||||
start_time: file.start_time,
|
||||
duration_seconds: file.duration_seconds,
|
||||
file_size_bytes: file.file_size_bytes
|
||||
});
|
||||
}
|
||||
} catch (error) {
|
||||
console.error('Sync failed:', error);
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
## 📁 File Organization
|
||||
|
||||
The system organizes recordings in a structured format:
|
||||
|
||||
```
|
||||
storage/
|
||||
├── camera1/
|
||||
│ ├── camera1_recording_20250725_213000.avi
|
||||
│ ├── camera1_recording_20250725_214500.avi
|
||||
│ └── camera1_manual_20250725_220000.avi
|
||||
├── camera2/
|
||||
│ ├── camera2_recording_20250725_213005.avi
|
||||
│ └── camera2_recording_20250725_214505.avi
|
||||
└── file_index.json
|
||||
```
|
||||
|
||||
### Filename Convention
|
||||
- **Format**: `{camera_name}_{type}_{YYYYMMDD_HHMMSS}.avi`
|
||||
- **Timezone**: Atlanta local time (EST/EDT)
|
||||
- **Examples**:
|
||||
- `camera1_recording_20250725_213000.avi` - Automatic recording
|
||||
- `camera1_manual_20250725_220000.avi` - Manual recording
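
A small parser for this convention can help a dashboard group recordings by camera and type. This is a sketch that assumes filenames always follow the pattern above:

```typescript
interface RecordingFileInfo {
  cameraName: string;
  recordingType: string; // e.g. "recording" (automatic) or "manual"
  timestamp: Date;       // Atlanta local time per the convention above
}

// Parses names like "camera1_recording_20250725_213000.avi".
function parseRecordingFilename(filename: string): RecordingFileInfo | null {
  const match = filename.match(/^(.+)_([a-z]+)_(\d{8})_(\d{6})\.avi$/);
  if (!match) return null;
  const [, cameraName, recordingType, date, time] = match;
  return {
    cameraName,
    recordingType,
    timestamp: new Date(
      Number(date.slice(0, 4)),
      Number(date.slice(4, 6)) - 1,
      Number(date.slice(6, 8)),
      Number(time.slice(0, 2)),
      Number(time.slice(2, 4)),
      Number(time.slice(4, 6))
    ),
  };
}
```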
|
||||
|
||||
## 🔍 Monitoring and Logging
|
||||
|
||||
### Log Files
|
||||
- **Main Log**: `usda_vision_system.log` (rotated automatically)
|
||||
- **Console Output**: Colored, real-time status updates
|
||||
- **Component Logs**: Separate log levels for different components
|
||||
|
||||
### Log Levels
|
||||
```bash
|
||||
# Debug mode (verbose)
|
||||
python main.py --log-level DEBUG
|
||||
|
||||
# Info mode (default)
|
||||
python main.py --log-level INFO
|
||||
|
||||
# Warning mode (errors and warnings only)
|
||||
python main.py --log-level WARNING
|
||||
```
|
||||
|
||||
### Performance Monitoring
|
||||
The system tracks:
|
||||
- Startup times
|
||||
- Recording session metrics
|
||||
- MQTT message processing rates
|
||||
- Camera status check intervals
|
||||
- API response times
|
||||
|
||||
### Health Checks
|
||||
```bash
|
||||
# API health check
|
||||
curl http://localhost:8000/health
|
||||
|
||||
# System status
|
||||
curl http://localhost:8000/system/status
|
||||
|
||||
# Time synchronization
|
||||
python check_time.py
|
||||
```
|
||||
|
||||
## 🚨 Troubleshooting
|
||||
|
||||
### Common Issues and Solutions
|
||||
|
||||
#### 1. Camera Not Found
|
||||
**Problem**: `Camera discovery failed` or `No cameras found`
|
||||
|
||||
**Solutions**:
|
||||
```bash
|
||||
# Check camera connections
|
||||
ping 192.168.1.165 # Replace with your camera IP
|
||||
|
||||
# Verify camera SDK library
|
||||
ls -la "camera_sdk/"
|
||||
# Should contain mvsdk.py and related files
|
||||
|
||||
# Test camera discovery manually
|
||||
python -c "
|
||||
import sys; sys.path.append('./camera_sdk')
|
||||
import mvsdk
|
||||
devices = mvsdk.CameraEnumerateDevice()
|
||||
print(f'Found {len(devices)} cameras')
|
||||
for i, dev in enumerate(devices):
|
||||
print(f'Camera {i}: {dev.GetFriendlyName()}')
|
||||
"
|
||||
|
||||
# Check camera permissions
|
||||
sudo chmod 666 /dev/video* # If using USB cameras
|
||||
```

#### 2. MQTT Connection Failed
**Problem**: `MQTT connection failed` or `MQTT disconnected`

**Solutions**:
```bash
# Test MQTT broker connectivity
ping 192.168.1.110         # Replace with your broker IP
telnet 192.168.1.110 1883  # Test port connectivity

# Test MQTT manually
mosquitto_sub -h 192.168.1.110 -t "vision/+/state" -v

# Check credentials in config.json
{
  "mqtt": {
    "broker_host": "192.168.1.110",
    "broker_port": 1883,
    "username": "your_username",  # Add if required
    "password": "your_password"   # Add if required
  }
}

# Check firewall
sudo ufw status
sudo ufw allow 1883  # Allow MQTT port
```

#### 3. Recording Fails
**Problem**: `Failed to start recording` or `Camera initialization failed`

**Solutions**:
```bash
# Check storage permissions
ls -la storage/
chmod 755 storage/
chmod 755 storage/camera*/

# Check available disk space
df -h storage/

# Test camera initialization
python -c "
import sys; sys.path.append('./camera_sdk')
import mvsdk
devices = mvsdk.CameraEnumerateDevice()
if devices:
    try:
        hCamera = mvsdk.CameraInit(devices[0], -1, -1)
        print('Camera initialized successfully')
        mvsdk.CameraUnInit(hCamera)
    except Exception as e:
        print(f'Camera init failed: {e}')
"

# Check if camera is busy
lsof | grep video  # Check what's using cameras
```

#### 4. API Server Won't Start
**Problem**: `Failed to start API server` or `Port already in use`

**Solutions**:
```bash
# Check if port 8000 is in use
netstat -tlnp | grep 8000
lsof -i :8000

# Kill process using port 8000
sudo kill -9 $(lsof -t -i:8000)

# Use different port in config.json
{
  "system": {
    "api_port": 8001  # Change port
  }
}

# Check firewall
sudo ufw allow 8000
```

#### 5. Time Synchronization Issues
**Problem**: `Time is NOT synchronized` or time drift warnings

**Solutions**:
```bash
# Check time sync status
timedatectl status

# Force time sync
sudo systemctl restart systemd-timesyncd
sudo timedatectl set-ntp true

# Manual time sync
sudo ntpdate -s time.nist.gov

# Check timezone
timedatectl list-timezones | grep New_York
sudo timedatectl set-timezone America/New_York

# Verify with system
python check_time.py
```

#### 6. Storage Issues
**Problem**: `Permission denied` or `No space left on device`

**Solutions**:
```bash
# Check disk space
df -h
du -sh storage/

# Fix permissions
sudo chown -R $USER:$USER storage/
chmod -R 755 storage/

# Clean up old files
python -c "
from usda_vision_system.storage.manager import StorageManager
from usda_vision_system.core.config import Config
from usda_vision_system.core.state_manager import StateManager
config = Config()
state_manager = StateManager()
storage = StorageManager(config, state_manager)
result = storage.cleanup_old_files(7)  # Clean files older than 7 days
print(f'Cleaned {result[\"files_removed\"]} files')
"
```

### Debug Mode

Enable debug mode for detailed troubleshooting:
```bash
# Start with debug logging
python main.py --log-level DEBUG

# Check specific component logs
tail -f usda_vision_system.log | grep "camera"
tail -f usda_vision_system.log | grep "mqtt"
tail -f usda_vision_system.log | grep "ERROR"
```

### System Health Check

Run comprehensive system diagnostics:
```bash
# Full system test
python test_system.py

# Individual component tests
python test_timezone.py
python check_time.py

# API health check
curl http://localhost:8000/health
curl http://localhost:8000/system/status
```

### Log Analysis

Common log patterns to look for:
```bash
# MQTT connection issues
grep "MQTT" usda_vision_system.log | grep -E "(ERROR|WARNING)"

# Camera problems
grep "camera" usda_vision_system.log | grep -E "(ERROR|failed)"

# Recording issues
grep "recording" usda_vision_system.log | grep -E "(ERROR|failed)"

# Time sync problems
grep -E "(time|sync)" usda_vision_system.log | grep -E "(ERROR|WARNING)"
```

### Getting Help

If you encounter issues not covered here:

1. **Check Logs**: Always start with `usda_vision_system.log`
2. **Run Tests**: Use `python test_system.py` to identify problems
3. **Check Configuration**: Verify `config.json` settings
4. **Test Components**: Use individual test scripts
5. **Check Dependencies**: Ensure all required packages are installed

### Performance Optimization

For better performance:
```bash
# Poll cameras less frequently (in config.json)
{
  "system": {
    "camera_check_interval_seconds": 5  # Increase from 2 to 5
  }
}

# Optimize recording settings
{
  "cameras": [
    {
      "target_fps": 2.0,   # Reduce FPS for smaller files
      "exposure_ms": 2.0   # Adjust exposure as needed
    }
  ]
}

# Reduce log verbosity
{
  "system": {
    "log_level": "INFO"  # Reduce from DEBUG to INFO
  }
}
```

## 🤝 Contributing

### Development Setup
```bash
# Clone repository
git clone https://github.com/your-username/USDA-Vision-Cameras.git
cd USDA-Vision-Cameras

# Install development dependencies
uv sync --dev

# Run tests
python test_system.py
python test_timezone.py
```

### Project Structure
```
usda_vision_system/
├── core/      # Core functionality (config, state, events, logging)
├── mqtt/      # MQTT client and message handlers
├── camera/    # Camera management, monitoring, recording
├── storage/   # File management and organization
├── api/       # FastAPI server and WebSocket support
└── main.py    # Application coordinator
```

### Adding Features
1. **New Camera Types**: Extend `camera/recorder.py`
2. **New MQTT Topics**: Update `config.json` and `mqtt/handlers.py`
3. **New API Endpoints**: Add to `api/server.py`
4. **New Events**: Define in `core/events.py`

## 📄 License

This project is developed for USDA research purposes.

## 🆘 Support

For technical support:
1. Check the troubleshooting section above
2. Review logs in `usda_vision_system.log`
3. Run system diagnostics with `python test_system.py`
4. Check API health at `http://localhost:8000/health`

---

**System Status**: ✅ **READY FOR PRODUCTION**
**Time Sync**: ✅ **ATLANTA, GEORGIA (EDT/EST)**
**API Server**: ✅ **http://localhost:8000**
**Documentation**: ✅ **COMPLETE**
@@ -16,7 +16,7 @@ export interface ApiConfig {
 }

 export const defaultApiConfig: ApiConfig = {
-  baseUrl: 'http://localhost:8000',
+  baseUrl: 'http://vision:8000', // Production default, change to 'http://localhost:8000' for development
   timeout: 10000,
   refreshInterval: 30000,
 };
@@ -204,17 +204,17 @@ export interface CameraApiClient {
   // System endpoints
   getHealth(): Promise<HealthResponse>;
   getSystemStatus(): Promise<SystemStatusResponse>;

   // Camera endpoints
   getCameras(): Promise<CameraListResponse>;
   getCameraStatus(cameraName: string): Promise<CameraInfo>;
   testCameraConnection(cameraName: string): Promise<{ success: boolean; message: string }>;

   // Streaming endpoints
   startStream(cameraName: string): Promise<StreamStartResponse>;
   stopStream(cameraName: string): Promise<StreamStopResponse>;
   getStreamUrl(cameraName: string): string;

   // Recording endpoints
   startRecording(cameraName: string, options?: StartRecordingRequest): Promise<StartRecordingResponse>;
   stopRecording(cameraName: string): Promise<StopRecordingResponse>;
@@ -291,14 +291,14 @@ export interface CameraContextValue {
   streamingState: StreamingState;
   recordingState: RecordingState;
   apiClient: CameraApiClient;

   // Actions
   startStream: (cameraName: string) => Promise<CameraActionResult>;
   stopStream: (cameraName: string) => Promise<CameraActionResult>;
   startRecording: (cameraName: string, options?: StartRecordingRequest) => Promise<CameraActionResult>;
   stopRecording: (cameraName: string) => Promise<CameraActionResult>;
   refreshCameras: () => Promise<void>;

   // State
   loading: boolean;
   error: string | null;
@@ -178,7 +178,7 @@
 </div>

 <script>
-  const API_BASE = 'http://localhost:8000';
+  const API_BASE = 'http://vision:8000';
   let cameras = {};

   // Initialize the page
175
API Documentations/docs/API_CHANGES_SUMMARY.md
Normal file
@@ -0,0 +1,175 @@
# API Changes Summary: Camera Settings and Filename Handling

## Overview
Enhanced the `POST /cameras/{camera_name}/start-recording` API endpoint to accept optional camera settings (shutter speed/exposure, gain, and fps) and ensure all filenames have datetime prefixes.

## Changes Made

### 1. API Models (`usda_vision_system/api/models.py`)
- **Enhanced `StartRecordingRequest`** to include optional parameters:
  - `exposure_ms: Optional[float]` - Exposure time in milliseconds
  - `gain: Optional[float]` - Camera gain value
  - `fps: Optional[float]` - Target frames per second

### 2. Camera Recorder (`usda_vision_system/camera/recorder.py`)
- **Added `update_camera_settings()` method** to dynamically update camera settings:
  - Updates exposure time using `mvsdk.CameraSetExposureTime()`
  - Updates gain using `mvsdk.CameraSetAnalogGain()`
  - Updates target FPS in camera configuration
  - Logs all setting changes
  - Returns boolean indicating success/failure
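
The conversions involved are documented in the Implementation Notes below (milliseconds to microseconds for exposure, gain multiplied by 100 for camera units). The following is only a minimal sketch of what such a method could look like; the class name, constructor, and logger here are illustrative assumptions, not the exact backend code.

```python
import logging
import mvsdk  # GigE camera SDK bundled in camera_sdk/

logger = logging.getLogger(__name__)

class CameraRecorderSketch:
    """Illustrative stand-in for the real recorder class."""

    def __init__(self, hCamera, camera_config):
        self.hCamera = hCamera              # handle returned by mvsdk.CameraInit()
        self.camera_config = camera_config  # object with a target_fps attribute

    def update_camera_settings(self, exposure_ms=None, gain=None, fps=None) -> bool:
        """Apply optional per-recording settings; returns True on success."""
        try:
            if exposure_ms is not None:
                mvsdk.CameraSetExposureTime(self.hCamera, exposure_ms * 1000)  # ms -> microseconds
            if gain is not None:
                mvsdk.CameraSetAnalogGain(self.hCamera, int(gain * 100))  # value * 100 -> camera units
            if fps is not None:
                self.camera_config.target_fps = fps  # picked up by the recording loop
            return True
        except Exception as exc:
            logger.error(f"Failed to update camera settings: {exc}")
            return False
```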

### 3. Camera Manager (`usda_vision_system/camera/manager.py`)
- **Enhanced `manual_start_recording()` method** to accept new parameters:
  - Added optional `exposure_ms`, `gain`, and `fps` parameters
  - Calls `update_camera_settings()` if any settings are provided
- **Automatic datetime prefix**: Always prepends timestamp to filename
  - If custom filename provided: `{timestamp}_{custom_filename}`
  - If no filename provided: `{camera_name}_manual_{timestamp}.avi`

### 4. API Server (`usda_vision_system/api/server.py`)
- **Updated start-recording endpoint** to:
  - Pass new camera settings to camera manager
  - Handle filename response with datetime prefix
  - Maintain backward compatibility with existing requests

### 5. API Tests (`api-tests.http`)
- **Added comprehensive test examples**:
  - Basic recording (existing functionality)
  - Recording with camera settings
  - Recording with settings only (no filename)
  - Different parameter combinations

## Usage Examples

### Basic Recording (unchanged)
```http
POST http://localhost:8000/cameras/camera1/start-recording
Content-Type: application/json

{
  "camera_name": "camera1",
  "filename": "test.avi"
}
```
**Result**: File saved as `20241223_143022_test.avi`

### Recording with Camera Settings
```http
POST http://localhost:8000/cameras/camera1/start-recording
Content-Type: application/json

{
  "camera_name": "camera1",
  "filename": "high_quality.avi",
  "exposure_ms": 2.0,
  "gain": 4.0,
  "fps": 5.0
}
```
**Result**:
- Camera settings updated before recording
- File saved as `20241223_143022_high_quality.avi`

### Maximum FPS Recording
```http
POST http://localhost:8000/cameras/camera1/start-recording
Content-Type: application/json

{
  "camera_name": "camera1",
  "filename": "max_speed.avi",
  "exposure_ms": 0.1,
  "gain": 1.0,
  "fps": 0
}
```
**Result**:
- Camera captures at maximum possible speed (no delay between frames)
- Video file saved with 30 FPS metadata for proper playback
- Actual capture rate depends on camera hardware and exposure settings

### Settings Only (no filename)
```http
POST http://localhost:8000/cameras/camera1/start-recording
Content-Type: application/json

{
  "camera_name": "camera1",
  "exposure_ms": 1.5,
  "gain": 3.0,
  "fps": 7.0
}
```
**Result**:
- Camera settings updated
- File saved as `camera1_manual_20241223_143022.avi`

## Key Features

### 1. **Backward Compatibility**
- All existing API calls continue to work unchanged
- New parameters are optional
- Default behavior preserved when no settings provided

### 2. **Automatic Datetime Prefix**
- **ALL filenames now have datetime prefix** regardless of what's sent
- Format: `YYYYMMDD_HHMMSS_` (Atlanta timezone)
- Ensures unique filenames and chronological ordering
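
A minimal sketch of the prefixing rule described above, assuming the standard-library `zoneinfo` module is used to pin timestamps to the Atlanta timezone. The backend helper is named `format_filename_timestamp()` (see Implementation Notes); its exact signature here is an assumption.

```python
from datetime import datetime
from zoneinfo import ZoneInfo

ATLANTA_TZ = ZoneInfo("America/New_York")

def format_filename_timestamp() -> str:
    """Return a YYYYMMDD_HHMMSS timestamp in Atlanta local time."""
    return datetime.now(ATLANTA_TZ).strftime("%Y%m%d_%H%M%S")

def build_recording_filename(camera_name: str, filename: str | None) -> str:
    """Apply the datetime-prefix rule used by manual_start_recording()."""
    ts = format_filename_timestamp()
    if filename:
        return f"{ts}_{filename}"            # e.g. 20241223_143022_test.avi
    return f"{camera_name}_manual_{ts}.avi"  # e.g. camera1_manual_20241223_143022.avi
```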

### 3. **Dynamic Camera Settings**
- Settings can be changed per recording without restarting the system
- Based on the proven implementation from `old tests/camera_video_recorder.py`
- Proper error handling and logging

### 4. **Maximum FPS Capture**
- **`fps: 0`** = Capture at maximum possible speed (no delay between frames)
- **`fps > 0`** = Capture at specified frame rate with controlled timing
- **`fps` omitted** = Uses camera config default (usually 3.0 fps)
- Video files saved with 30 FPS metadata when fps=0 for proper playback
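
The pacing rule above can be summarized in a few lines. This is a simplified sketch of a recording loop, not the actual `recorder.py` code: `grab_frame` and `write_frame` are placeholder names, and only the timing logic reflects the documented behavior (no sleep when `fps == 0`, fixed frame interval otherwise).

```python
import time

def recording_loop(grab_frame, write_frame, fps: float, stop_event) -> None:
    """Capture frames until stop_event is set, pacing grabs when fps > 0."""
    frame_interval = 1.0 / fps if fps > 0 else 0.0
    while not stop_event.is_set():
        started = time.monotonic()
        write_frame(grab_frame())
        if frame_interval:
            # Sleep off whatever time is left in this frame slot
            remaining = frame_interval - (time.monotonic() - started)
            if remaining > 0:
                time.sleep(remaining)
        # fps == 0: no delay, capture as fast as the hardware allows
```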

### 5. **Parameter Validation**
- Uses Pydantic models for automatic validation
- Optional parameters with proper type checking
- Descriptive field documentation

## Testing

Run the test script to verify functionality:
```bash
# Start the system first
python main.py

# In another terminal, run tests
python test_api_changes.py
```

The test script verifies:
- Basic recording functionality
- Camera settings application
- Filename datetime prefix handling
- API response accuracy

## Implementation Notes

### Camera Settings Mapping
- **Exposure**: Converted from milliseconds to microseconds for SDK
- **Gain**: Converted to camera units (multiplied by 100)
- **FPS**: Stored in camera config, used by recording loop

### Error Handling
- Settings update failures are logged but don't prevent recording
- Invalid camera names return appropriate HTTP errors
- Camera initialization failures are handled gracefully

### Filename Generation
- Uses `format_filename_timestamp()` from timezone utilities
- Ensures Atlanta timezone consistency
- Handles both custom and auto-generated filenames

## Similar to Old Implementation
The camera settings functionality mirrors the proven approach in `old tests/camera_video_recorder.py`:
- Same parameter names and ranges
- Same SDK function calls
- Same conversion factors
- Proven to work with the camera hardware
627
API Documentations/docs/API_DOCUMENTATION.md
Normal file
@@ -0,0 +1,627 @@
# 🚀 USDA Vision Camera System - Complete API Documentation

This document provides comprehensive documentation for all API endpoints in the USDA Vision Camera System, including recent enhancements and new features.

## 📋 Table of Contents

- [🔧 System Status & Health](#-system-status--health)
- [📷 Camera Management](#-camera-management)
- [🎥 Recording Control](#-recording-control)
- [🤖 Auto-Recording Management](#-auto-recording-management)
- [🎛️ Camera Configuration](#️-camera-configuration)
- [📡 MQTT & Machine Status](#-mqtt--machine-status)
- [💾 Storage & File Management](#-storage--file-management)
- [🔄 Camera Recovery & Diagnostics](#-camera-recovery--diagnostics)
- [📺 Live Streaming](#-live-streaming)
- [🌐 WebSocket Real-time Updates](#-websocket-real-time-updates)

## 🔧 System Status & Health

### Get System Status
```http
GET /system/status
```
**Response**: `SystemStatusResponse`
```json
{
  "system_started": true,
  "mqtt_connected": true,
  "last_mqtt_message": "2024-01-15T10:30:00Z",
  "machines": {
    "vibratory_conveyor": {
      "name": "vibratory_conveyor",
      "state": "ON",
      "last_updated": "2024-01-15T10:30:00Z"
    }
  },
  "cameras": {
    "camera1": {
      "name": "camera1",
      "status": "ACTIVE",
      "is_recording": false,
      "auto_recording_enabled": true
    }
  },
  "active_recordings": 0,
  "total_recordings": 15,
  "uptime_seconds": 3600.5
}
```

### Health Check
```http
GET /health
```
**Response**: Simple health status
```json
{
  "status": "healthy",
  "timestamp": "2024-01-15T10:30:00Z"
}
```

## 📷 Camera Management

### Get All Cameras
```http
GET /cameras
```
**Response**: `Dict[str, CameraStatusResponse]`

### Get Specific Camera Status
```http
GET /cameras/{camera_name}/status
```
**Response**: `CameraStatusResponse`
```json
{
  "name": "camera1",
  "status": "ACTIVE",
  "is_recording": false,
  "last_checked": "2024-01-15T10:30:00Z",
  "last_error": null,
  "device_info": {
    "model": "GigE Camera",
    "serial": "12345"
  },
  "current_recording_file": null,
  "recording_start_time": null,
  "auto_recording_enabled": true,
  "auto_recording_active": false,
  "auto_recording_failure_count": 0,
  "auto_recording_last_attempt": null,
  "auto_recording_last_error": null
}
```

## 🎥 Recording Control

### Start Recording
```http
POST /cameras/{camera_name}/start-recording
Content-Type: application/json

{
  "filename": "test_recording.avi",
  "exposure_ms": 2.0,
  "gain": 4.0,
  "fps": 5.0
}
```

**Request Model**: `StartRecordingRequest`
- `filename` (optional): Custom filename (datetime prefix will be added automatically)
- `exposure_ms` (optional): Exposure time in milliseconds
- `gain` (optional): Camera gain value
- `fps` (optional): Target frames per second

**Response**: `StartRecordingResponse`
```json
{
  "success": true,
  "message": "Recording started for camera1",
  "filename": "20240115_103000_test_recording.avi"
}
```

**Key Features**:
- ✅ **Automatic datetime prefix**: All filenames get `YYYYMMDD_HHMMSS_` prefix
- ✅ **Dynamic camera settings**: Adjust exposure, gain, and FPS per recording
- ✅ **Backward compatibility**: All existing API calls work unchanged

### Stop Recording
```http
POST /cameras/{camera_name}/stop-recording
```
**Response**: `StopRecordingResponse`
```json
{
  "success": true,
  "message": "Recording stopped for camera1",
  "duration_seconds": 45.2
}
```

## 🤖 Auto-Recording Management

### Enable Auto-Recording for Camera
```http
POST /cameras/{camera_name}/auto-recording/enable
```
**Response**: `AutoRecordingConfigResponse`
```json
{
  "success": true,
  "message": "Auto-recording enabled for camera1",
  "camera_name": "camera1",
  "enabled": true
}
```

### Disable Auto-Recording for Camera
```http
POST /cameras/{camera_name}/auto-recording/disable
```
**Response**: `AutoRecordingConfigResponse`

### Get Auto-Recording Status
```http
GET /auto-recording/status
```
**Response**: `AutoRecordingStatusResponse`
```json
{
  "running": true,
  "auto_recording_enabled": true,
  "retry_queue": {},
  "enabled_cameras": ["camera1", "camera2"]
}
```

**Auto-Recording Features**:
- 🤖 **MQTT-triggered recording**: Automatically starts/stops based on machine state
- 🔄 **Retry logic**: Failed recordings are retried with configurable delays
- 📊 **Per-camera control**: Enable/disable auto-recording individually
- 📈 **Status tracking**: Monitor failure counts and last attempts
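
For dashboards or scripts that are not browser-based, the same three endpoints can be exercised with a few lines of Python. This sketch uses the `requests` library against the documented routes; the base URL is the development default and should be adjusted for your deployment.

```python
import requests

BASE_URL = "http://localhost:8000"  # adjust for your deployment

def set_auto_recording(camera_name: str, enabled: bool) -> dict:
    """Enable or disable auto-recording for one camera via the REST API."""
    action = "enable" if enabled else "disable"
    resp = requests.post(f"{BASE_URL}/cameras/{camera_name}/auto-recording/{action}", timeout=10)
    resp.raise_for_status()
    return resp.json()  # AutoRecordingConfigResponse

def get_auto_recording_status() -> dict:
    """Fetch the system-wide auto-recording status."""
    resp = requests.get(f"{BASE_URL}/auto-recording/status", timeout=10)
    resp.raise_for_status()
    return resp.json()  # AutoRecordingStatusResponse

if __name__ == "__main__":
    print(set_auto_recording("camera1", True))
    print(get_auto_recording_status())
```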

## 🎛️ Camera Configuration

### Get Camera Configuration
```http
GET /cameras/{camera_name}/config
```
**Response**: `CameraConfigResponse`
```json
{
  "name": "camera1",
  "machine_topic": "vibratory_conveyor",
  "storage_path": "/storage/camera1",
  "enabled": true,
  "exposure_ms": 1.0,
  "gain": 3.5,
  "target_fps": 3.0,
  "auto_start_recording_enabled": true,
  "sharpness": 120,
  "contrast": 110,
  "saturation": 100,
  "gamma": 100,
  "noise_filter_enabled": true,
  "denoise_3d_enabled": false,
  "auto_white_balance": true,
  "color_temperature_preset": 0,
  "anti_flicker_enabled": true,
  "light_frequency": 1,
  "bit_depth": 8,
  "hdr_enabled": false,
  "hdr_gain_mode": 0
}
```

### Update Camera Configuration
```http
PUT /cameras/{camera_name}/config
Content-Type: application/json

{
  "exposure_ms": 2.0,
  "gain": 4.0,
  "target_fps": 5.0,
  "sharpness": 130
}
```

### Apply Configuration (Restart Required)
```http
POST /cameras/{camera_name}/apply-config
```

**Configuration Categories**:
- ✅ **Real-time**: `exposure_ms`, `gain`, `target_fps`, `sharpness`, `contrast`, etc.
- ⚠️ **Restart required**: `noise_filter_enabled`, `denoise_3d_enabled`, `bit_depth`

For detailed configuration options, see [Camera Configuration API Guide](api/CAMERA_CONFIG_API.md).

## 📡 MQTT & Machine Status

### Get All Machines
```http
GET /machines
```
**Response**: `Dict[str, MachineStatusResponse]`

### Get MQTT Status
```http
GET /mqtt/status
```
**Response**: `MQTTStatusResponse`
```json
{
  "connected": true,
  "broker_host": "192.168.1.110",
  "broker_port": 1883,
  "subscribed_topics": ["vibratory_conveyor", "blower_separator"],
  "last_message_time": "2024-01-15T10:30:00Z",
  "message_count": 1250,
  "error_count": 2,
  "uptime_seconds": 3600.5
}
```

### Get MQTT Events History
```http
GET /mqtt/events?limit=10
```
**Response**: `MQTTEventsHistoryResponse`
```json
{
  "events": [
    {
      "machine_name": "vibratory_conveyor",
      "topic": "vibratory_conveyor",
      "payload": "ON",
      "normalized_state": "ON",
      "timestamp": "2024-01-15T10:30:00Z",
      "message_number": 1250
    }
  ],
  "total_events": 1250,
  "last_updated": "2024-01-15T10:30:00Z"
}
```

## 💾 Storage & File Management

### Get Storage Statistics
```http
GET /storage/stats
```
**Response**: `StorageStatsResponse`
```json
{
  "base_path": "/storage",
  "total_files": 150,
  "total_size_bytes": 5368709120,
  "cameras": {
    "camera1": {
      "file_count": 75,
      "total_size_bytes": 2684354560
    },
    "camera2": {
      "file_count": 75,
      "total_size_bytes": 2684354560
    }
  },
  "disk_usage": {
    "total_bytes": 107374182400,
    "used_bytes": 53687091200,
    "free_bytes": 53687091200,
    "usage_percent": 50.0
  }
}
```

### Get File List
```http
POST /storage/files
Content-Type: application/json

{
  "camera_name": "camera1",
  "start_date": "2024-01-15",
  "end_date": "2024-01-16",
  "limit": 50
}
```
**Response**: `FileListResponse`
```json
{
  "files": [
    {
      "filename": "20240115_103000_test_recording.avi",
      "camera_name": "camera1",
      "size_bytes": 52428800,
      "created_time": "2024-01-15T10:30:00Z",
      "duration_seconds": 30.5
    }
  ],
  "total_count": 1
}
```

### Cleanup Old Files
```http
POST /storage/cleanup
Content-Type: application/json

{
  "max_age_days": 30
}
```
**Response**: `CleanupResponse`
```json
{
  "files_removed": 25,
  "bytes_freed": 1073741824,
  "errors": []
}
```

## 🔄 Camera Recovery & Diagnostics

### Test Camera Connection
```http
POST /cameras/{camera_name}/test-connection
```
**Response**: `CameraTestResponse`

### Reconnect Camera
```http
POST /cameras/{camera_name}/reconnect
```
**Response**: `CameraRecoveryResponse`

### Restart Camera Grab Process
```http
POST /cameras/{camera_name}/restart-grab
```
**Response**: `CameraRecoveryResponse`

### Reset Camera Timestamp
```http
POST /cameras/{camera_name}/reset-timestamp
```
**Response**: `CameraRecoveryResponse`

### Full Camera Reset
```http
POST /cameras/{camera_name}/full-reset
```
**Response**: `CameraRecoveryResponse`

### Reinitialize Camera
```http
POST /cameras/{camera_name}/reinitialize
```
**Response**: `CameraRecoveryResponse`

**Recovery Response Example**:
```json
{
  "success": true,
  "message": "Camera camera1 reconnected successfully",
  "camera_name": "camera1",
  "operation": "reconnect",
  "timestamp": "2024-01-15T10:30:00Z"
}
```

## 📺 Live Streaming

### Get Live MJPEG Stream
```http
GET /cameras/{camera_name}/stream
```
**Response**: MJPEG video stream (multipart/x-mixed-replace)

### Start Camera Stream
```http
POST /cameras/{camera_name}/start-stream
```

### Stop Camera Stream
```http
POST /cameras/{camera_name}/stop-stream
```

**Streaming Features**:
- 📺 **MJPEG format**: Compatible with web browsers and React apps
- 🔄 **Concurrent operation**: Stream while recording simultaneously
- ⚡ **Low latency**: Real-time preview for monitoring

For detailed streaming integration, see [Streaming Guide](guides/STREAMING_GUIDE.md).

## 🌐 WebSocket Real-time Updates

### Connect to WebSocket
```javascript
const ws = new WebSocket('ws://localhost:8000/ws');

ws.onmessage = (event) => {
  const update = JSON.parse(event.data);
  console.log('Real-time update:', update);
};
```

**WebSocket Message Types**:
- `system_status`: System status changes
- `camera_status`: Camera status updates
- `recording_started`: Recording start events
- `recording_stopped`: Recording stop events
- `mqtt_message`: MQTT message received
- `auto_recording_event`: Auto-recording status changes

**Example WebSocket Message**:
```json
{
  "type": "recording_started",
  "data": {
    "camera_name": "camera1",
    "filename": "20240115_103000_auto_recording.avi",
    "timestamp": "2024-01-15T10:30:00Z"
  },
  "timestamp": "2024-01-15T10:30:00Z"
}
```
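
A non-browser client can consume the same feed. The sketch below uses the third-party `websockets` package and simply dispatches on the documented `type` field; it is illustrative only and assumes the default development host and port.

```python
import asyncio
import json

import websockets  # pip install websockets

async def listen(url: str = "ws://localhost:8000/ws") -> None:
    """Print recording events from the real-time WebSocket feed."""
    async with websockets.connect(url) as ws:
        async for raw in ws:
            update = json.loads(raw)
            if update.get("type") in ("recording_started", "recording_stopped"):
                data = update.get("data", {})
                print(update["type"], data.get("camera_name"), data.get("filename"))

if __name__ == "__main__":
    asyncio.run(listen())
```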

## 🚀 Quick Start Examples

### Basic System Monitoring
```bash
# Check system health
curl http://localhost:8000/health

# Get overall system status
curl http://localhost:8000/system/status

# Get all camera statuses
curl http://localhost:8000/cameras
```

### Manual Recording Control
```bash
# Start recording with default settings
curl -X POST http://localhost:8000/cameras/camera1/start-recording \
  -H "Content-Type: application/json" \
  -d '{"filename": "manual_test.avi"}'

# Start recording with custom camera settings
curl -X POST http://localhost:8000/cameras/camera1/start-recording \
  -H "Content-Type: application/json" \
  -d '{
    "filename": "high_quality.avi",
    "exposure_ms": 2.0,
    "gain": 4.0,
    "fps": 5.0
  }'

# Stop recording
curl -X POST http://localhost:8000/cameras/camera1/stop-recording
```

### Auto-Recording Management
```bash
# Enable auto-recording for camera1
curl -X POST http://localhost:8000/cameras/camera1/auto-recording/enable

# Check auto-recording status
curl http://localhost:8000/auto-recording/status

# Disable auto-recording for camera1
curl -X POST http://localhost:8000/cameras/camera1/auto-recording/disable
```

### Camera Configuration
```bash
# Get current camera configuration
curl http://localhost:8000/cameras/camera1/config

# Update camera settings (real-time)
curl -X PUT http://localhost:8000/cameras/camera1/config \
  -H "Content-Type: application/json" \
  -d '{
    "exposure_ms": 1.5,
    "gain": 3.0,
    "sharpness": 130,
    "contrast": 120
  }'
```

## 📈 Recent API Changes & Enhancements

### ✨ New in Latest Version

#### 1. Enhanced Recording API
- **Dynamic camera settings**: Set exposure, gain, and FPS per recording
- **Automatic datetime prefixes**: All filenames get timestamp prefixes
- **Backward compatibility**: Existing API calls work unchanged

#### 2. Auto-Recording Feature
- **Per-camera control**: Enable/disable auto-recording individually
- **MQTT integration**: Automatic recording based on machine states
- **Retry logic**: Failed recordings are automatically retried
- **Status tracking**: Monitor auto-recording attempts and failures

#### 3. Advanced Camera Configuration
- **Real-time settings**: Update exposure, gain, image quality without restart
- **Image enhancement**: Sharpness, contrast, saturation, gamma controls
- **Noise reduction**: Configurable noise filtering and 3D denoising
- **HDR support**: High Dynamic Range imaging capabilities

#### 4. Live Streaming
- **MJPEG streaming**: Real-time camera preview
- **Concurrent operation**: Stream while recording simultaneously
- **Web-compatible**: Direct integration with React/HTML video elements

#### 5. Enhanced Monitoring
- **MQTT event history**: Track machine state changes over time
- **Storage statistics**: Monitor disk usage and file counts
- **WebSocket updates**: Real-time system status notifications

### 🔄 Migration Notes

#### From Previous Versions
1. **Recording API**: All existing calls work, but now return filenames with datetime prefixes
2. **Configuration**: New camera settings are optional and backward compatible
3. **Auto-recording**: New feature, requires enabling in `config.json` and per camera

#### Configuration Updates
```json
{
  "cameras": [
    {
      "name": "camera1",
      "auto_start_recording_enabled": true,  // NEW: Enable auto-recording
      "sharpness": 120,                      // NEW: Image quality settings
      "contrast": 110,
      "saturation": 100,
      "gamma": 100,
      "noise_filter_enabled": true,
      "hdr_enabled": false
    }
  ],
  "system": {
    "auto_recording_enabled": true  // NEW: Global auto-recording toggle
  }
}
```

## 🔗 Related Documentation

- [📷 Camera Configuration API Guide](api/CAMERA_CONFIG_API.md) - Detailed camera settings
- [🤖 Auto-Recording Feature Guide](features/AUTO_RECORDING_FEATURE_GUIDE.md) - React integration
- [📺 Streaming Guide](guides/STREAMING_GUIDE.md) - Live video streaming
- [🔧 Camera Recovery Guide](guides/CAMERA_RECOVERY_GUIDE.md) - Troubleshooting
- [📡 MQTT Logging Guide](guides/MQTT_LOGGING_GUIDE.md) - MQTT configuration

## 📞 Support & Integration

### API Base URL
- **Development**: `http://localhost:8000`
- **Production**: Configure in `config.json` under `system.api_host` and `system.api_port`

### Error Handling
All endpoints return standard HTTP status codes:
- `200`: Success
- `404`: Resource not found (camera, file, etc.)
- `500`: Internal server error
- `503`: Service unavailable (camera manager, MQTT, etc.)
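
A small client-side helper can turn those status codes into readable errors. This is a sketch using `requests`; the base URL matches the development default above and the hint messages are illustrative.

```python
import requests

STATUS_HINTS = {
    404: "Resource not found (camera, file, etc.)",
    500: "Internal server error",
    503: "Service unavailable (camera manager, MQTT, etc.)",
}

def call_api(method: str, path: str, base_url: str = "http://localhost:8000", **kwargs) -> dict:
    """Issue a request and raise a descriptive error for non-200 responses."""
    resp = requests.request(method, f"{base_url}{path}", timeout=10, **kwargs)
    if resp.status_code != 200:
        hint = STATUS_HINTS.get(resp.status_code, "Unexpected error")
        raise RuntimeError(f"{method} {path} -> {resp.status_code}: {hint}")
    return resp.json()

# Example: call_api("POST", "/cameras/camera1/start-recording", json={"filename": "test.avi"})
```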

### Rate Limiting
- No rate limiting is currently implemented
- WebSocket connections are limited to a reasonable number of concurrent connections

### CORS Support
- CORS is enabled for web dashboard integration
- Configure allowed origins in the API server settings
195
API Documentations/docs/API_QUICK_REFERENCE.md
Normal file
@@ -0,0 +1,195 @@
# 🚀 USDA Vision Camera System - API Quick Reference

Quick reference for the most commonly used API endpoints. For complete documentation, see [API_DOCUMENTATION.md](API_DOCUMENTATION.md).

## 🔧 System Status

```bash
# Health check
curl http://localhost:8000/health

# System overview
curl http://localhost:8000/system/status

# All cameras
curl http://localhost:8000/cameras

# All machines
curl http://localhost:8000/machines
```

## 🎥 Recording Control

### Start Recording (Basic)
```bash
curl -X POST http://localhost:8000/cameras/camera1/start-recording \
  -H "Content-Type: application/json" \
  -d '{"filename": "test.avi"}'
```

### Start Recording (With Settings)
```bash
curl -X POST http://localhost:8000/cameras/camera1/start-recording \
  -H "Content-Type: application/json" \
  -d '{
    "filename": "high_quality.avi",
    "exposure_ms": 2.0,
    "gain": 4.0,
    "fps": 5.0
  }'
```

### Stop Recording
```bash
curl -X POST http://localhost:8000/cameras/camera1/stop-recording
```

## 🤖 Auto-Recording

```bash
# Enable auto-recording
curl -X POST http://localhost:8000/cameras/camera1/auto-recording/enable

# Disable auto-recording
curl -X POST http://localhost:8000/cameras/camera1/auto-recording/disable

# Check auto-recording status
curl http://localhost:8000/auto-recording/status
```

## 🎛️ Camera Configuration

```bash
# Get camera config
curl http://localhost:8000/cameras/camera1/config

# Update camera settings
curl -X PUT http://localhost:8000/cameras/camera1/config \
  -H "Content-Type: application/json" \
  -d '{
    "exposure_ms": 1.5,
    "gain": 3.0,
    "sharpness": 130
  }'
```

## 📺 Live Streaming

```bash
# Start streaming
curl -X POST http://localhost:8000/cameras/camera1/start-stream

# Get MJPEG stream (use in browser/video element)
# http://localhost:8000/cameras/camera1/stream

# Stop streaming
curl -X POST http://localhost:8000/cameras/camera1/stop-stream
```

## 🔄 Camera Recovery

```bash
# Test connection
curl -X POST http://localhost:8000/cameras/camera1/test-connection

# Reconnect camera
curl -X POST http://localhost:8000/cameras/camera1/reconnect

# Full reset
curl -X POST http://localhost:8000/cameras/camera1/full-reset
```

## 💾 Storage Management

```bash
# Storage statistics
curl http://localhost:8000/storage/stats

# List files
curl -X POST http://localhost:8000/storage/files \
  -H "Content-Type: application/json" \
  -d '{"camera_name": "camera1", "limit": 10}'

# Cleanup old files
curl -X POST http://localhost:8000/storage/cleanup \
  -H "Content-Type: application/json" \
  -d '{"max_age_days": 30}'
```

## 📡 MQTT Monitoring

```bash
# MQTT status
curl http://localhost:8000/mqtt/status

# Recent MQTT events
curl http://localhost:8000/mqtt/events?limit=10
```

## 🌐 WebSocket Connection

```javascript
// Connect to real-time updates
const ws = new WebSocket('ws://localhost:8000/ws');

ws.onmessage = (event) => {
  const update = JSON.parse(event.data);
  console.log('Update:', update);
};
```

## 📊 Response Examples

### System Status Response
```json
{
  "system_started": true,
  "mqtt_connected": true,
  "cameras": {
    "camera1": {
      "name": "camera1",
      "status": "ACTIVE",
      "is_recording": false,
      "auto_recording_enabled": true
    }
  },
  "active_recordings": 0,
  "total_recordings": 15
}
```

### Recording Start Response
```json
{
  "success": true,
  "message": "Recording started for camera1",
  "filename": "20240115_103000_test.avi"
}
```

### Camera Status Response
```json
{
  "name": "camera1",
  "status": "ACTIVE",
  "is_recording": false,
  "auto_recording_enabled": true,
  "auto_recording_active": false,
  "auto_recording_failure_count": 0
}
```

## 🔗 Related Documentation

- [📚 Complete API Documentation](API_DOCUMENTATION.md)
- [🎛️ Camera Configuration Guide](api/CAMERA_CONFIG_API.md)
- [🤖 Auto-Recording Feature Guide](features/AUTO_RECORDING_FEATURE_GUIDE.md)
- [📺 Streaming Guide](guides/STREAMING_GUIDE.md)

## 💡 Tips

- All filenames automatically get datetime prefixes: `YYYYMMDD_HHMMSS_`
- Camera settings can be updated in real-time during recording
- Auto-recording is controlled per camera and globally
- WebSocket provides real-time updates for dashboard integration
- CORS is enabled for web application integration
212
API Documentations/docs/PROJECT_COMPLETE.md
Normal file
@@ -0,0 +1,212 @@
# 🎉 USDA Vision Camera System - PROJECT COMPLETE!

## ✅ Final Status: READY FOR PRODUCTION

The USDA Vision Camera System has been successfully implemented, tested, and documented. All requirements have been met and the system is production-ready.

## 📋 Completed Requirements

### ✅ Core Functionality
- **MQTT Integration**: Dual topic listening for machine states
- **Automatic Recording**: Camera recording triggered by machine on/off states
- **GigE Camera Support**: Full integration with camera SDK library
- **Multi-threading**: Concurrent MQTT + camera monitoring + recording
- **File Management**: Timestamp-based naming in organized directories

### ✅ Advanced Features
- **REST API**: Complete FastAPI server with all endpoints
- **WebSocket Support**: Real-time updates for dashboard integration
- **Time Synchronization**: Atlanta, Georgia timezone with NTP sync
- **Storage Management**: File indexing, cleanup, and statistics
- **Comprehensive Logging**: Rotating logs with error tracking
- **Configuration System**: JSON-based configuration management

### ✅ Documentation & Testing
- **Complete README**: Installation, usage, API docs, troubleshooting
- **Test Suite**: Comprehensive system testing (`test_system.py`)
- **Time Verification**: Timezone and sync testing (`check_time.py`)
- **Startup Scripts**: Easy deployment with `start_system.sh`
- **Clean Repository**: Organized structure with proper .gitignore

## 🏗️ Final Project Structure

```
USDA-Vision-Cameras/
├── README.md              # Complete documentation
├── main.py                # System entry point
├── config.json            # System configuration
├── requirements.txt       # Python dependencies
├── pyproject.toml         # UV package configuration
├── .gitignore             # Git ignore rules
├── start_system.sh        # Startup script
├── setup_timezone.sh      # Time sync setup
├── test_system.py         # System test suite
├── check_time.py          # Time verification
├── test_timezone.py       # Timezone testing
├── usda_vision_system/    # Main application
│   ├── core/              # Core functionality
│   ├── mqtt/              # MQTT integration
│   ├── camera/            # Camera management
│   ├── storage/           # File management
│   ├── api/               # REST API server
│   └── main.py            # Application coordinator
├── camera_sdk/            # GigE camera SDK library
├── demos/                 # Demo and example code
│   ├── cv_grab*.py        # Camera SDK usage examples
│   └── mqtt_*.py          # MQTT demo scripts
├── storage/               # Recording storage
│   ├── camera1/           # Camera 1 recordings
│   └── camera2/           # Camera 2 recordings
├── tests/                 # Test files and legacy tests
├── notebooks/             # Jupyter notebooks
└── docs/                  # Documentation files
```

## 🚀 How to Deploy

### 1. Clone and Setup
```bash
git clone https://github.com/your-username/USDA-Vision-Cameras.git
cd USDA-Vision-Cameras
uv sync
```

### 2. Configure System
```bash
# Edit config.json for your environment
# Set MQTT broker, camera settings, storage paths
```

### 3. Setup Time Sync
```bash
./setup_timezone.sh
```

### 4. Test System
```bash
python test_system.py
```

### 5. Start System
```bash
./start_system.sh
```

## 🌐 API Integration

### Dashboard Integration
```javascript
// React component example
const systemStatus = await fetch('http://localhost:8000/system/status');
const cameras = await fetch('http://localhost:8000/cameras');

// WebSocket for real-time updates
const ws = new WebSocket('ws://localhost:8000/ws');
ws.onmessage = (event) => {
  const update = JSON.parse(event.data);
  // Handle real-time system updates
};
```

### Manual Control
```bash
# Start recording manually
curl -X POST http://localhost:8000/cameras/camera1/start-recording

# Stop recording manually
curl -X POST http://localhost:8000/cameras/camera1/stop-recording

# Get system status
curl http://localhost:8000/system/status
```

## 📊 System Capabilities

### Discovered Hardware
- **2 GigE Cameras**: Blower-Yield-Cam, Cracker-Cam
- **Network Ready**: Cameras accessible at 192.168.1.165, 192.168.1.167
- **MQTT Ready**: Configured for broker at 192.168.1.110

### Recording Features
- **Automatic Start/Stop**: Based on MQTT machine states
- **Timezone Aware**: Atlanta time timestamps (EST/EDT)
- **Organized Storage**: Separate directories per camera
- **File Naming**: `camera1_recording_20250725_213000.avi`
- **Manual Control**: API endpoints for manual recording

### Monitoring Features
- **Real-time Status**: Camera and machine state monitoring
- **Health Checks**: Automatic system health verification
- **Performance Tracking**: Recording metrics and system stats
- **Error Handling**: Comprehensive error tracking and recovery

## 🔧 Maintenance

### Regular Tasks
- **Log Monitoring**: Check `usda_vision_system.log`
- **Storage Cleanup**: Automatic cleanup of old recordings
- **Time Sync**: Automatic NTP synchronization
- **Health Checks**: Built-in system monitoring

### Troubleshooting
- **Test Suite**: `python test_system.py`
- **Time Check**: `python check_time.py`
- **API Health**: `curl http://localhost:8000/health`
- **Debug Mode**: `python main.py --log-level DEBUG`

## 🎯 Production Readiness

### ✅ All Tests Passing
- System initialization: ✅
- Camera discovery: ✅ (2 cameras found)
- MQTT configuration: ✅
- Storage setup: ✅
- Time synchronization: ✅
- API endpoints: ✅

### ✅ Documentation Complete
- Installation guide: ✅
- Configuration reference: ✅
- API documentation: ✅
- Troubleshooting guide: ✅
- Integration examples: ✅

### ✅ Production Features
- Error handling: ✅
- Logging system: ✅
- Time synchronization: ✅
- Storage management: ✅
- API security: ✅
- Performance monitoring: ✅

## 🚀 Next Steps

The system is now ready for:

1. **Production Deployment**: Deploy on target hardware
2. **Dashboard Integration**: Connect to React + Supabase dashboard
3. **MQTT Configuration**: Connect to production MQTT broker
4. **Camera Calibration**: Fine-tune camera settings for production
5. **Monitoring Setup**: Configure production monitoring and alerts

## 📞 Support

For ongoing support:
- **Documentation**: Complete README.md with troubleshooting
- **Test Suite**: Comprehensive diagnostic tools
- **Logging**: Detailed system logs for debugging
- **API Health**: Built-in health check endpoints

---

**🎊 PROJECT STATUS: COMPLETE AND PRODUCTION-READY! 🎊**

The USDA Vision Camera System is fully implemented, tested, and documented. All original requirements have been met, and the system is ready for production deployment with your React dashboard integration.

**Key Achievements:**
- ✅ Dual MQTT topic monitoring
- ✅ Automatic camera recording
- ✅ Atlanta timezone synchronization
- ✅ Complete REST API
- ✅ Comprehensive documentation
- ✅ Production-ready deployment
65
API Documentations/docs/README.md
Normal file
@@ -0,0 +1,65 @@
# USDA Vision Camera System - Documentation

This directory contains detailed documentation for the USDA Vision Camera System.

## Documentation Files

### 🚀 [API_DOCUMENTATION.md](API_DOCUMENTATION.md) **⭐ NEW**
**Complete API reference documentation** covering all endpoints, features, and recent enhancements:
- System status and health monitoring
- Camera management and configuration
- Recording control with dynamic settings
- Auto-recording management
- MQTT and machine status
- Storage and file management
- Camera recovery and diagnostics
- Live streaming capabilities
- WebSocket real-time updates
- Quick start examples and migration notes

### ⚡ [API_QUICK_REFERENCE.md](API_QUICK_REFERENCE.md) **⭐ NEW**
**Quick reference card** for the most commonly used API endpoints with curl examples and response formats.

### 📋 [PROJECT_COMPLETE.md](PROJECT_COMPLETE.md)
Complete project overview and final status documentation. Contains:
- Project completion status
- Final system architecture
- Deployment instructions
- Production readiness checklist

### 🔧 [API_CHANGES_SUMMARY.md](API_CHANGES_SUMMARY.md)
Summary of API changes and enhancements made to the system.

### 📷 [CAMERA_RECOVERY_GUIDE.md](CAMERA_RECOVERY_GUIDE.md)
Guide for camera recovery procedures and troubleshooting camera-related issues.

### 📡 [MQTT_LOGGING_GUIDE.md](MQTT_LOGGING_GUIDE.md)
Comprehensive guide for MQTT logging configuration and troubleshooting.

## Main Documentation

The main system documentation is located in the root directory:
- **[../README.md](../README.md)** - Primary system documentation with installation, configuration, and usage instructions

## Additional Resources

### Demo Code
- **[../demos/](../demos/)** - Demo scripts and camera SDK examples

### Test Files
- **[../tests/](../tests/)** - Test scripts and legacy test files

### Jupyter Notebooks
- **[../notebooks/](../notebooks/)** - Interactive notebooks for system exploration and testing

## Quick Links

- [System Installation](../README.md#installation)
- [Configuration Guide](../README.md#configuration)
- [API Documentation](../README.md#api-reference)
- [Troubleshooting](../README.md#troubleshooting)
- [Camera SDK Examples](../demos/camera_sdk_examples/)

## Support

For technical support and questions, refer to the main [README.md](../README.md) troubleshooting section or check the system logs.
425
API Documentations/docs/api/CAMERA_CONFIG_API.md
Normal file
@@ -0,0 +1,425 @@
# 🎛️ Camera Configuration API Guide

This guide explains how to configure camera settings via API endpoints, including all the advanced settings from your config.json.

> **Note**: This document is part of the comprehensive [USDA Vision Camera System API Documentation](../API_DOCUMENTATION.md). For complete API reference, see the main documentation.

## 📋 Configuration Categories

### ✅ **Real-time Configurable (No Restart Required)**
These settings can be changed while the camera is active:

- **Basic**: `exposure_ms`, `gain`, `target_fps`
- **Image Quality**: `sharpness`, `contrast`, `saturation`, `gamma`
- **Color**: `auto_white_balance`, `color_temperature_preset`
- **Advanced**: `anti_flicker_enabled`, `light_frequency`
- **HDR**: `hdr_enabled`, `hdr_gain_mode`

### ⚠️ **Restart Required**
These settings require camera restart to take effect:

- **Noise Reduction**: `noise_filter_enabled`, `denoise_3d_enabled`
- **System**: `machine_topic`, `storage_path`, `enabled`, `bit_depth`

## 🔌 API Endpoints

### 1. Get Camera Configuration
```http
GET /cameras/{camera_name}/config
```

**Response:**
```json
{
  "name": "camera1",
  "machine_topic": "vibratory_conveyor",
  "storage_path": "/storage/camera1",
  "enabled": true,
  "exposure_ms": 1.0,
  "gain": 3.5,
  "target_fps": 0,
  "sharpness": 120,
  "contrast": 110,
  "saturation": 100,
  "gamma": 100,
  "noise_filter_enabled": true,
  "denoise_3d_enabled": false,
  "auto_white_balance": true,
  "color_temperature_preset": 0,
  "anti_flicker_enabled": true,
  "light_frequency": 1,
  "bit_depth": 8,
  "hdr_enabled": false,
  "hdr_gain_mode": 0
}
```

### 2. Update Camera Configuration
```http
PUT /cameras/{camera_name}/config
Content-Type: application/json
```

**Request Body (all fields optional):**
```json
{
  "exposure_ms": 2.0,
  "gain": 4.0,
  "target_fps": 10.0,
  "sharpness": 150,
  "contrast": 120,
  "saturation": 110,
  "gamma": 90,
  "noise_filter_enabled": true,
  "denoise_3d_enabled": false,
  "auto_white_balance": false,
  "color_temperature_preset": 1,
  "anti_flicker_enabled": true,
  "light_frequency": 1,
  "hdr_enabled": false,
  "hdr_gain_mode": 0
}
```

**Response:**
```json
{
  "success": true,
  "message": "Camera camera1 configuration updated",
  "updated_settings": ["exposure_ms", "gain", "sharpness"]
}
```

### 3. Apply Configuration (Restart Camera)
```http
POST /cameras/{camera_name}/apply-config
```

**Response:**
```json
{
  "success": true,
  "message": "Configuration applied to camera camera1"
}
```
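
Putting the two endpoints together: update the fields you need, then call apply-config when a restart-required setting (such as `noise_filter_enabled` or `bit_depth`) was touched. The following is a sketch using `requests`; the set of restart-required fields is taken from the categories listed above, and the combined helper itself is an assumption about how a client might want to use the API.

```python
import requests

BASE_URL = "http://localhost:8000"
RESTART_REQUIRED = {"noise_filter_enabled", "denoise_3d_enabled", "bit_depth",
                    "machine_topic", "storage_path", "enabled"}

def update_camera_config(camera_name: str, updates: dict) -> dict:
    """PUT new settings, then POST apply-config if any of them need a restart."""
    resp = requests.put(f"{BASE_URL}/cameras/{camera_name}/config", json=updates, timeout=10)
    resp.raise_for_status()
    result = resp.json()
    if RESTART_REQUIRED & updates.keys():
        requests.post(f"{BASE_URL}/cameras/{camera_name}/apply-config", timeout=30).raise_for_status()
    return result

# Example: update_camera_config("camera1", {"exposure_ms": 2.0, "noise_filter_enabled": False})
```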
|
||||
|
||||
## 📊 Setting Ranges and Descriptions
|
||||
|
||||
### Basic Settings
|
||||
| Setting | Range | Default | Description |
|
||||
|---------|-------|---------|-------------|
|
||||
| `exposure_ms` | 0.1 - 1000.0 | 1.0 | Exposure time in milliseconds |
|
||||
| `gain` | 0.0 - 20.0 | 3.5 | Camera gain multiplier |
|
||||
| `target_fps` | 0.0 - 120.0 | 0 | Target FPS (0 = maximum) |
|
||||
|
||||
### Image Quality Settings
|
||||
| Setting | Range | Default | Description |
|
||||
|---------|-------|---------|-------------|
|
||||
| `sharpness` | 0 - 200 | 100 | Image sharpness (100 = no sharpening) |
|
||||
| `contrast` | 0 - 200 | 100 | Image contrast (100 = normal) |
|
||||
| `saturation` | 0 - 200 | 100 | Color saturation (color cameras only) |
|
||||
| `gamma` | 0 - 300 | 100 | Gamma correction (100 = normal) |
|
||||
|
||||
### Color Settings
|
||||
| Setting | Values | Default | Description |
|
||||
|---------|--------|---------|-------------|
|
||||
| `auto_white_balance` | true/false | true | Automatic white balance |
|
||||
| `color_temperature_preset` | 0-10 | 0 | Color temperature preset (0=auto) |
|
||||
|
||||
### Advanced Settings
|
||||
| Setting | Values | Default | Description |
|
||||
|---------|--------|---------|-------------|
|
||||
| `anti_flicker_enabled` | true/false | true | Reduce artificial lighting flicker |
|
||||
| `light_frequency` | 0/1 | 1 | Light frequency (0=50Hz, 1=60Hz) |
|
||||
| `noise_filter_enabled` | true/false | true | Basic noise filtering |
|
||||
| `denoise_3d_enabled` | true/false | false | Advanced 3D denoising |
|
||||
|
||||
### HDR Settings
|
||||
| Setting | Values | Default | Description |
|
||||
|---------|--------|---------|-------------|
|
||||
| `hdr_enabled` | true/false | false | High Dynamic Range |
|
||||
| `hdr_gain_mode` | 0-3 | 0 | HDR processing mode |
|
||||
|
||||
## 🚀 Usage Examples
|
||||
|
||||
### Example 1: Adjust Exposure and Gain
|
||||
```bash
|
||||
curl -X PUT http://localhost:8000/cameras/camera1/config \
|
||||
-H "Content-Type: application/json" \
|
||||
-d '{
|
||||
"exposure_ms": 1.5,
|
||||
"gain": 4.0
|
||||
}'
|
||||
```
|
||||
|
||||
### Example 2: Improve Image Quality
|
||||
```bash
|
||||
curl -X PUT http://localhost:8000/cameras/camera1/config \
|
||||
-H "Content-Type: application/json" \
|
||||
-d '{
|
||||
"sharpness": 150,
|
||||
"contrast": 120,
|
||||
"gamma": 90
|
||||
}'
|
||||
```
|
||||
|
||||
### Example 3: Configure for Indoor Lighting
|
||||
```bash
|
||||
curl -X PUT http://localhost:8000/cameras/camera1/config \
|
||||
-H "Content-Type: application/json" \
|
||||
-d '{
|
||||
"anti_flicker_enabled": true,
|
||||
"light_frequency": 1,
|
||||
"auto_white_balance": false,
|
||||
"color_temperature_preset": 2
|
||||
}'
|
||||
```
|
||||
|
||||
### Example 4: Enable HDR Mode
|
||||
```bash
|
||||
curl -X PUT http://localhost:8000/cameras/camera1/config \
|
||||
-H "Content-Type: application/json" \
|
||||
-d '{
|
||||
"hdr_enabled": true,
|
||||
"hdr_gain_mode": 1
|
||||
}'
|
||||
```
|
||||
|
||||
## ⚛️ React Integration Examples

### Camera Configuration Component

```jsx
import React, { useState, useEffect } from 'react';

const CameraConfig = ({ cameraName, apiBaseUrl = 'http://localhost:8000' }) => {
  const [config, setConfig] = useState(null);
  const [loading, setLoading] = useState(false);
  const [error, setError] = useState(null);

  // Load current configuration
  useEffect(() => {
    fetchConfig();
  }, [cameraName]);

  const fetchConfig = async () => {
    try {
      const response = await fetch(`${apiBaseUrl}/cameras/${cameraName}/config`);
      if (response.ok) {
        const data = await response.json();
        setConfig(data);
      } else {
        setError('Failed to load configuration');
      }
    } catch (err) {
      setError(`Error: ${err.message}`);
    }
  };

  const updateConfig = async (updates) => {
    setLoading(true);
    try {
      const response = await fetch(`${apiBaseUrl}/cameras/${cameraName}/config`, {
        method: 'PUT',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(updates)
      });

      if (response.ok) {
        const result = await response.json();
        console.log('Updated settings:', result.updated_settings);
        await fetchConfig(); // Reload configuration
      } else {
        const error = await response.json();
        setError(error.detail || 'Update failed');
      }
    } catch (err) {
      setError(`Error: ${err.message}`);
    } finally {
      setLoading(false);
    }
  };

  const handleSliderChange = (setting, value) => {
    updateConfig({ [setting]: value });
  };

  if (!config) return <div>Loading configuration...</div>;

  return (
    <div className="camera-config">
      <h3>Camera Configuration: {cameraName}</h3>

      {/* Basic Settings */}
      <div className="config-section">
        <h4>Basic Settings</h4>

        <div className="setting">
          <label>Exposure (ms): {config.exposure_ms}</label>
          <input
            type="range"
            min="0.1"
            max="10"
            step="0.1"
            value={config.exposure_ms}
            onChange={(e) => handleSliderChange('exposure_ms', parseFloat(e.target.value))}
          />
        </div>

        <div className="setting">
          <label>Gain: {config.gain}</label>
          <input
            type="range"
            min="0"
            max="10"
            step="0.1"
            value={config.gain}
            onChange={(e) => handleSliderChange('gain', parseFloat(e.target.value))}
          />
        </div>

        <div className="setting">
          <label>Target FPS: {config.target_fps}</label>
          <input
            type="range"
            min="0"
            max="30"
            step="1"
            value={config.target_fps}
            onChange={(e) => handleSliderChange('target_fps', parseInt(e.target.value))}
          />
        </div>
      </div>

      {/* Image Quality Settings */}
      <div className="config-section">
        <h4>Image Quality</h4>

        <div className="setting">
          <label>Sharpness: {config.sharpness}</label>
          <input
            type="range"
            min="0"
            max="200"
            value={config.sharpness}
            onChange={(e) => handleSliderChange('sharpness', parseInt(e.target.value))}
          />
        </div>

        <div className="setting">
          <label>Contrast: {config.contrast}</label>
          <input
            type="range"
            min="0"
            max="200"
            value={config.contrast}
            onChange={(e) => handleSliderChange('contrast', parseInt(e.target.value))}
          />
        </div>

        <div className="setting">
          <label>Gamma: {config.gamma}</label>
          <input
            type="range"
            min="0"
            max="300"
            value={config.gamma}
            onChange={(e) => handleSliderChange('gamma', parseInt(e.target.value))}
          />
        </div>
      </div>

      {/* Advanced Settings */}
      <div className="config-section">
        <h4>Advanced Settings</h4>

        <div className="setting">
          <label>
            <input
              type="checkbox"
              checked={config.anti_flicker_enabled}
              onChange={(e) => updateConfig({ anti_flicker_enabled: e.target.checked })}
            />
            Anti-flicker Enabled
          </label>
        </div>

        <div className="setting">
          <label>
            <input
              type="checkbox"
              checked={config.auto_white_balance}
              onChange={(e) => updateConfig({ auto_white_balance: e.target.checked })}
            />
            Auto White Balance
          </label>
        </div>

        <div className="setting">
          <label>
            <input
              type="checkbox"
              checked={config.hdr_enabled}
              onChange={(e) => updateConfig({ hdr_enabled: e.target.checked })}
            />
            HDR Enabled
          </label>
        </div>
      </div>

      {error && (
        <div className="error" style={{ color: 'red', marginTop: '10px' }}>
          {error}
        </div>
      )}

      {loading && <div>Updating configuration...</div>}
    </div>
  );
};

export default CameraConfig;
```

## 🔄 Configuration Workflow

### 1. Real-time Adjustments
For settings that don't require restart:
```bash
# Update settings
curl -X PUT /cameras/camera1/config -d '{"exposure_ms": 2.0}'

# Settings take effect immediately
# Continue recording/streaming without interruption
```

### 2. Settings Requiring Restart
For noise reduction and system settings:
```bash
# Update settings
curl -X PUT /cameras/camera1/config -d '{"noise_filter_enabled": false}'

# Apply configuration (restarts camera)
curl -X POST /cameras/camera1/apply-config

# Camera reinitializes with new settings
```
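
From the React app, this two-step flow can be wrapped in one helper. The sketch below is a minimal illustration under stated assumptions, not part of the existing codebase: the `/cameras/{name}/config` and `/cameras/{name}/apply-config` paths come from this guide, while the function name and error handling are hypothetical.

```typescript
// Hypothetical helper: save restart-required settings, then apply them.
// Endpoint paths are from this guide; everything else is illustrative.
async function applyRestartRequiredSettings(
  apiBaseUrl: string,
  cameraName: string,
  updates: Record<string, unknown>
): Promise<void> {
  // Step 1: save the new settings to the camera configuration
  const putResponse = await fetch(`${apiBaseUrl}/cameras/${cameraName}/config`, {
    method: "PUT",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(updates),
  });
  if (!putResponse.ok) {
    throw new Error(`Config update failed: HTTP ${putResponse.status}`);
  }

  // Step 2: apply the configuration, which restarts the camera
  const applyResponse = await fetch(`${apiBaseUrl}/cameras/${cameraName}/apply-config`, {
    method: "POST",
  });
  if (!applyResponse.ok) {
    throw new Error(`Apply-config failed: HTTP ${applyResponse.status}`);
  }
}

// Example (assumed base URL):
// await applyRestartRequiredSettings("http://localhost:8000", "camera1", { noise_filter_enabled: false });
```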

## 🚨 Important Notes

### Camera State During Updates
- **Real-time settings**: Applied immediately, no interruption
- **Restart-required settings**: Saved to config, applied on next restart
- **Recording**: Continues during real-time updates
- **Streaming**: Continues during real-time updates

### Error Handling
- Invalid ranges return HTTP 422 with validation errors
- Camera not found returns HTTP 404
- SDK errors are logged and return HTTP 500
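
A client can branch on these status codes when surfacing errors to the user. The TypeScript sketch below is one possible approach, assuming the FastAPI-style `detail` field used elsewhere in this guide; the helper name and message wording are assumptions.

```typescript
// Hypothetical error mapper for failed config requests.
// The 422/404/500 codes come from this guide; the messages are illustrative.
async function describeConfigError(response: Response): Promise<string> {
  const body = await response.json().catch(() => ({}));
  switch (response.status) {
    case 422:
      // Validation errors: FastAPI typically returns field errors in `detail`
      return `Invalid setting value(s): ${JSON.stringify(body.detail ?? body)}`;
    case 404:
      return "Camera not found - check the camera name";
    case 500:
      return `Camera SDK error: ${body.detail ?? "see server logs"}`;
    default:
      return `Unexpected error (HTTP ${response.status})`;
  }
}
```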

### Performance Impact
- **Image quality settings**: Minimal performance impact
- **Noise reduction**: May reduce FPS when enabled
- **HDR**: Significant processing overhead when enabled

This comprehensive API allows you to control all camera settings programmatically, making it perfect for integration with React dashboards or automated optimization systems!

262
API Documentations/docs/features/AUTO_RECORDING_FEATURE_GUIDE.md
Normal file
@@ -0,0 +1,262 @@
|
||||
# Auto-Recording Feature Implementation Guide
|
||||
|
||||
## 🎯 Overview for React App Development
|
||||
|
||||
This document provides a comprehensive guide for updating the React application to support the new auto-recording feature that was added to the USDA Vision Camera System.
|
||||
|
||||
> **📚 For complete API reference**: See the [USDA Vision Camera System API Documentation](../API_DOCUMENTATION.md) for detailed endpoint specifications and examples.
|
||||
|
||||
## 📋 What Changed in the Backend
|
||||
|
||||
### New API Endpoints Added
|
||||
|
||||
1. **Enable Auto-Recording**
|
||||
```http
|
||||
POST /cameras/{camera_name}/auto-recording/enable
|
||||
Response: AutoRecordingConfigResponse
|
||||
```
|
||||
|
||||
2. **Disable Auto-Recording**
|
||||
```http
|
||||
POST /cameras/{camera_name}/auto-recording/disable
|
||||
Response: AutoRecordingConfigResponse
|
||||
```
|
||||
|
||||
3. **Get Auto-Recording Status**
|
||||
```http
|
||||
GET /auto-recording/status
|
||||
Response: AutoRecordingStatusResponse
|
||||
```
|
||||
|
||||
### Updated API Responses
|
||||
|
||||
#### CameraStatusResponse (Updated)
|
||||
```typescript
|
||||
interface CameraStatusResponse {
|
||||
name: string;
|
||||
status: string;
|
||||
is_recording: boolean;
|
||||
last_checked: string;
|
||||
last_error?: string;
|
||||
device_info?: any;
|
||||
current_recording_file?: string;
|
||||
recording_start_time?: string;
|
||||
|
||||
// NEW AUTO-RECORDING FIELDS
|
||||
auto_recording_enabled: boolean;
|
||||
auto_recording_active: boolean;
|
||||
auto_recording_failure_count: number;
|
||||
auto_recording_last_attempt?: string;
|
||||
auto_recording_last_error?: string;
|
||||
}
|
||||
```
|
||||
|
||||
#### CameraConfigResponse (Updated)
|
||||
```typescript
|
||||
interface CameraConfigResponse {
|
||||
name: string;
|
||||
machine_topic: string;
|
||||
storage_path: string;
|
||||
enabled: boolean;
|
||||
|
||||
// NEW AUTO-RECORDING CONFIG FIELDS
|
||||
auto_start_recording_enabled: boolean;
|
||||
auto_recording_max_retries: number;
|
||||
auto_recording_retry_delay_seconds: number;
|
||||
|
||||
// ... existing fields (exposure_ms, gain, etc.)
|
||||
}
|
||||
```
|
||||
|
||||
#### New Response Types
|
||||
```typescript
|
||||
interface AutoRecordingConfigResponse {
|
||||
success: boolean;
|
||||
message: string;
|
||||
camera_name: string;
|
||||
enabled: boolean;
|
||||
}
|
||||
|
||||
interface AutoRecordingStatusResponse {
|
||||
running: boolean;
|
||||
auto_recording_enabled: boolean;
|
||||
retry_queue: Record<string, any>;
|
||||
enabled_cameras: string[];
|
||||
}
|
||||
```
|
||||
|
||||
## 🎨 React App UI Requirements
|
||||
|
||||
### 1. Camera Status Display Updates
|
||||
|
||||
**Add to Camera Cards/Components:**
|
||||
- Auto-recording enabled/disabled indicator
|
||||
- Auto-recording active status (when machine is ON and auto-recording)
|
||||
- Failure count display (if > 0)
|
||||
- Last auto-recording error (if any)
|
||||
- Visual distinction between manual and auto-recording
|
||||
|
||||
**Example UI Elements:**
|
||||
```jsx
|
||||
// Auto-recording status badge
|
||||
{camera.auto_recording_enabled && (
|
||||
<Badge variant={camera.auto_recording_active ? "success" : "secondary"}>
|
||||
Auto-Recording {camera.auto_recording_active ? "Active" : "Enabled"}
|
||||
</Badge>
|
||||
)}
|
||||
|
||||
// Failure indicator
|
||||
{camera.auto_recording_failure_count > 0 && (
|
||||
<Alert variant="warning">
|
||||
Auto-recording failures: {camera.auto_recording_failure_count}
|
||||
</Alert>
|
||||
)}
|
||||
```
|
||||
|
||||
### 2. Auto-Recording Controls
|
||||
|
||||
**Add Toggle Controls:**
|
||||
- Enable/Disable auto-recording per camera
|
||||
- Global auto-recording status display
|
||||
- Retry queue monitoring
|
||||
|
||||
**Example Control Component:**
|
||||
```jsx
|
||||
const AutoRecordingToggle = ({ camera, onToggle }) => {
|
||||
const handleToggle = async () => {
|
||||
const endpoint = camera.auto_recording_enabled ? 'disable' : 'enable';
|
||||
await fetch(`/cameras/${camera.name}/auto-recording/${endpoint}`, {
|
||||
method: 'POST'
|
||||
});
|
||||
onToggle();
|
||||
};
|
||||
|
||||
return (
|
||||
<Switch
|
||||
checked={camera.auto_recording_enabled}
|
||||
onChange={handleToggle}
|
||||
label="Auto-Recording"
|
||||
/>
|
||||
);
|
||||
};
|
||||
```
|
||||
|
||||
### 3. Machine State Integration
|
||||
|
||||
**Display Machine Status:**
|
||||
- Show which machine each camera monitors
|
||||
- Display current machine state (ON/OFF)
|
||||
- Show correlation between machine state and recording status
|
||||
|
||||
**Camera-Machine Mapping:**
|
||||
- Camera 1 → Vibratory Conveyor (conveyor/cracker cam)
|
||||
- Camera 2 → Blower Separator (blower separator)
|
||||
|
||||
### 4. Auto-Recording Dashboard
|
||||
|
||||
**Create New Dashboard Section:**
|
||||
- Overall auto-recording system status
|
||||
- List of cameras with auto-recording enabled
|
||||
- Active retry queue display
|
||||
- Recent auto-recording events/logs
|
||||
|
||||
## 🔧 Implementation Steps for React App
|
||||
|
||||
### Step 1: Update TypeScript Interfaces
|
||||
```typescript
|
||||
// Update existing interfaces in your types file
|
||||
// Add new interfaces for auto-recording responses
|
||||
```
|
||||
|
||||
### Step 2: Update API Service Functions
|
||||
```typescript
|
||||
// Add new API calls
|
||||
export const enableAutoRecording = (cameraName: string) =>
|
||||
fetch(`/cameras/${cameraName}/auto-recording/enable`, { method: 'POST' });
|
||||
|
||||
export const disableAutoRecording = (cameraName: string) =>
|
||||
fetch(`/cameras/${cameraName}/auto-recording/disable`, { method: 'POST' });
|
||||
|
||||
export const getAutoRecordingStatus = () =>
|
||||
fetch('/auto-recording/status').then(res => res.json());
|
||||
```
|
||||
|
||||
### Step 3: Update Camera Components
|
||||
- Add auto-recording status indicators
|
||||
- Add enable/disable controls
|
||||
- Update recording status display to distinguish auto vs manual
|
||||
|
||||
### Step 4: Create Auto-Recording Management Panel
|
||||
- System-wide auto-recording status
|
||||
- Per-camera auto-recording controls
|
||||
- Retry queue monitoring
|
||||
- Error reporting and alerts
|
||||
|
||||
### Step 5: Update State Management
|
||||
```typescript
|
||||
// Add auto-recording state to your store/context
|
||||
interface AppState {
|
||||
cameras: CameraStatusResponse[];
|
||||
autoRecordingStatus: AutoRecordingStatusResponse;
|
||||
// ... existing state
|
||||
}
|
||||
```
|
||||
|
||||
## 🎯 Key User Experience Considerations
|
||||
|
||||
### Visual Indicators
|
||||
1. **Recording Status Hierarchy** (see the badge helper sketch after this list):
|
||||
- Manual Recording (highest priority - red/prominent)
|
||||
- Auto-Recording Active (green/secondary)
|
||||
- Auto-Recording Enabled but Inactive (blue/subtle)
|
||||
- Auto-Recording Disabled (gray/muted)
|
||||
|
||||
2. **Machine State Correlation:**
|
||||
- Show machine ON/OFF status next to camera
|
||||
- Indicate when auto-recording should be active
|
||||
- Alert if machine is ON but auto-recording failed
|
||||
|
||||
3. **Error Handling:**
|
||||
- Clear error messages for auto-recording failures
|
||||
- Retry count display
|
||||
- Last attempt timestamp
|
||||
- Quick retry/reset options
|
||||
|
||||
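
One way to encode this hierarchy is a small helper that maps the `CameraStatusResponse` fields from this guide to a badge style. This is a sketch only; the variant names mirror the Badge example earlier in the guide, and treating "recording but not auto-recording-active" as manual recording is an assumption.

```typescript
// Hypothetical mapping from camera status to a display style,
// following the priority order described above.
type BadgeVariant = "danger" | "success" | "info" | "secondary";

interface RecordingIndicator {
  label: string;
  variant: BadgeVariant;
}

function getRecordingIndicator(camera: CameraStatusResponse): RecordingIndicator {
  if (camera.is_recording && !camera.auto_recording_active) {
    return { label: "Manual Recording", variant: "danger" }; // highest priority
  }
  if (camera.auto_recording_active) {
    return { label: "Auto-Recording Active", variant: "success" };
  }
  if (camera.auto_recording_enabled) {
    return { label: "Auto-Recording Enabled", variant: "info" };
  }
  return { label: "Auto-Recording Disabled", variant: "secondary" };
}
```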
### User Controls
|
||||
1. **Quick Actions:**
|
||||
- Toggle auto-recording per camera
|
||||
- Force retry failed auto-recording
|
||||
- Override auto-recording (manual control)
|
||||
|
||||
2. **Configuration:**
|
||||
- Adjust retry settings
|
||||
- Change machine-camera mappings
|
||||
- Set recording parameters for auto-recording
|
||||
|
||||
## 🚨 Important Notes
|
||||
|
||||
### Behavior Rules
|
||||
1. **Manual Override:** Manual recording always takes precedence over auto-recording
|
||||
2. **Non-Blocking:** Auto-recording status checks don't interfere with camera operation
|
||||
3. **Machine Correlation:** Auto-recording only activates when the associated machine turns ON
|
||||
4. **Failure Handling:** Failed auto-recording attempts are retried automatically with exponential backoff
|
||||
|
||||
### API Polling Recommendations
|
||||
- Poll camera status every 2-3 seconds for real-time updates (see the polling hook sketch after this list)
|
||||
- Poll auto-recording status every 5-10 seconds
|
||||
- Use WebSocket connections if available for real-time machine state updates
|
||||
|
||||
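
A minimal polling hook following these intervals might look like the sketch below. It assumes the `GET /cameras` and `GET /auto-recording/status` endpoints and the interfaces defined in this guide; the hook name, the assumption that `/cameras` returns an array, and the state shape are illustrative, and a WebSocket approach would replace this where available.

```tsx
import { useEffect, useState } from "react";

// Hypothetical polling hook using the recommended intervals.
function useAutoRecordingPolling(apiBaseUrl: string) {
  const [cameras, setCameras] = useState<CameraStatusResponse[]>([]);
  const [systemStatus, setSystemStatus] = useState<AutoRecordingStatusResponse | null>(null);

  useEffect(() => {
    // Camera status every 3 seconds
    const cameraTimer = setInterval(async () => {
      const res = await fetch(`${apiBaseUrl}/cameras`);
      if (res.ok) setCameras(await res.json()); // shape assumed to be an array
    }, 3000);

    // Auto-recording system status every 10 seconds
    const statusTimer = setInterval(async () => {
      const res = await fetch(`${apiBaseUrl}/auto-recording/status`);
      if (res.ok) setSystemStatus(await res.json());
    }, 10000);

    return () => {
      clearInterval(cameraTimer);
      clearInterval(statusTimer);
    };
  }, [apiBaseUrl]);

  return { cameras, systemStatus };
}
```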
## 📱 Mobile Considerations
|
||||
- Auto-recording controls should be easily accessible on mobile
|
||||
- Status indicators should be clear and readable on small screens
|
||||
- Consider collapsible sections for detailed auto-recording information
|
||||
|
||||
## 🔍 Testing Checklist
|
||||
- [ ] Auto-recording toggle works for each camera
|
||||
- [ ] Status updates reflect machine state changes
|
||||
- [ ] Error states are clearly displayed
|
||||
- [ ] Manual recording overrides auto-recording
|
||||
- [ ] Retry mechanism is visible to users
|
||||
- [ ] Mobile interface is functional
|
||||
|
||||
This guide provides everything needed to update the React app to fully support the new auto-recording feature!
|
||||
158
API Documentations/docs/guides/CAMERA_RECOVERY_GUIDE.md
Normal file
@@ -0,0 +1,158 @@
|
||||
# Camera Recovery and Diagnostics Guide
|
||||
|
||||
This guide explains the new camera recovery functionality implemented in the USDA Vision Camera System API.
|
||||
|
||||
## Overview
|
||||
|
||||
The system now includes comprehensive camera recovery capabilities to handle connection issues, initialization failures, and other camera-related problems. These features use the underlying mvsdk (python demo) library functions to perform various recovery operations.
|
||||
|
||||
## Available Recovery Operations
|
||||
|
||||
### 1. Connection Test (`/cameras/{camera_name}/test-connection`)
|
||||
- **Purpose**: Test if the camera connection is working
|
||||
- **SDK Function**: `CameraConnectTest()`
|
||||
- **Use Case**: Diagnose connection issues
|
||||
- **HTTP Method**: POST
|
||||
- **Response**: `CameraTestResponse`
|
||||
|
||||
### 2. Reconnect (`/cameras/{camera_name}/reconnect`)
|
||||
- **Purpose**: Soft reconnection to the camera
|
||||
- **SDK Function**: `CameraReConnect()`
|
||||
- **Use Case**: Most common fix for connection issues
|
||||
- **HTTP Method**: POST
|
||||
- **Response**: `CameraRecoveryResponse`
|
||||
|
||||
### 3. Restart Grab (`/cameras/{camera_name}/restart-grab`)
|
||||
- **Purpose**: Restart the camera grab process
|
||||
- **SDK Function**: `CameraRestartGrab()`
|
||||
- **Use Case**: Fix issues with image capture
|
||||
- **HTTP Method**: POST
|
||||
- **Response**: `CameraRecoveryResponse`
|
||||
|
||||
### 4. Reset Timestamp (`/cameras/{camera_name}/reset-timestamp`)
|
||||
- **Purpose**: Reset camera timestamp
|
||||
- **SDK Function**: `CameraRstTimeStamp()`
|
||||
- **Use Case**: Fix timing-related issues
|
||||
- **HTTP Method**: POST
|
||||
- **Response**: `CameraRecoveryResponse`
|
||||
|
||||
### 5. Full Reset (`/cameras/{camera_name}/full-reset`)
|
||||
- **Purpose**: Complete camera reset (uninitialize and reinitialize)
|
||||
- **SDK Functions**: `CameraUnInit()` + `CameraInit()`
|
||||
- **Use Case**: Hard reset for persistent issues
|
||||
- **HTTP Method**: POST
|
||||
- **Response**: `CameraRecoveryResponse`
|
||||
|
||||
### 6. Reinitialize (`/cameras/{camera_name}/reinitialize`)
|
||||
- **Purpose**: Reinitialize cameras that failed initial setup
|
||||
- **SDK Functions**: Complete recorder recreation
|
||||
- **Use Case**: Cameras that never initialized properly
|
||||
- **HTTP Method**: POST
|
||||
- **Response**: `CameraRecoveryResponse`
|
||||
|
||||
## Recommended Troubleshooting Workflow
|
||||
|
||||
When a camera has issues, follow this order (a client-side sketch that automates the escalation appears after the list):
|
||||
|
||||
1. **Test Connection** - Diagnose the problem
|
||||
```http
|
||||
POST http://localhost:8000/cameras/camera1/test-connection
|
||||
```
|
||||
|
||||
2. **Try Reconnect** - Most common fix
|
||||
```http
|
||||
POST http://localhost:8000/cameras/camera1/reconnect
|
||||
```
|
||||
|
||||
3. **Restart Grab** - If reconnect doesn't work
|
||||
```http
|
||||
POST http://localhost:8000/cameras/camera1/restart-grab
|
||||
```
|
||||
|
||||
4. **Full Reset** - For persistent issues
|
||||
```http
|
||||
POST http://localhost:8000/cameras/camera1/full-reset
|
||||
```
|
||||
|
||||
5. **Reinitialize** - For cameras that never worked
|
||||
```http
|
||||
POST http://localhost:8000/cameras/camera1/reinitialize
|
||||
```
|
||||
|
||||
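
If you want to drive this escalation from a client, one possible sketch (not part of the system) is shown below. The endpoint paths and the `success` field come from this guide; the stop conditions, ordering helper, and function name are assumptions.

```typescript
// Hypothetical escalation helper: diagnose first, then try recovery
// operations in the recommended order until one reports success.
async function recoverCamera(apiBaseUrl: string, cameraName: string): Promise<string | null> {
  // 1. Diagnose: if the connection test passes, no recovery is needed.
  const test = await fetch(`${apiBaseUrl}/cameras/${cameraName}/test-connection`, { method: "POST" });
  if (test.ok && (await test.json()).success) return "test-connection";

  // 2-5. Escalate through the recovery operations.
  for (const step of ["reconnect", "restart-grab", "full-reset", "reinitialize"]) {
    const res = await fetch(`${apiBaseUrl}/cameras/${cameraName}/${step}`, { method: "POST" });
    if (res.ok && (await res.json()).success) return step;
  }
  return null; // nothing worked - escalate to an operator
}
```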
## Response Format
|
||||
|
||||
All recovery operations return structured responses:
|
||||
|
||||
### CameraTestResponse
|
||||
```json
|
||||
{
|
||||
"success": true,
|
||||
"message": "Camera camera1 connection test passed",
|
||||
"camera_name": "camera1",
|
||||
"timestamp": "2024-01-01T12:00:00"
|
||||
}
|
||||
```
|
||||
|
||||
### CameraRecoveryResponse
|
||||
```json
|
||||
{
|
||||
"success": true,
|
||||
"message": "Camera camera1 reconnected successfully",
|
||||
"camera_name": "camera1",
|
||||
"operation": "reconnect",
|
||||
"timestamp": "2024-01-01T12:00:00"
|
||||
}
|
||||
```
|
||||
|
||||
## Implementation Details
|
||||
|
||||
### CameraRecorder Methods
|
||||
- `test_connection()`: Tests camera connection
|
||||
- `reconnect()`: Performs soft reconnection
|
||||
- `restart_grab()`: Restarts grab process
|
||||
- `reset_timestamp()`: Resets timestamp
|
||||
- `full_reset()`: Complete reset with cleanup and reinitialization
|
||||
|
||||
### CameraManager Methods
|
||||
- `test_camera_connection(camera_name)`: Test specific camera
|
||||
- `reconnect_camera(camera_name)`: Reconnect specific camera
|
||||
- `restart_camera_grab(camera_name)`: Restart grab for specific camera
|
||||
- `reset_camera_timestamp(camera_name)`: Reset timestamp for specific camera
|
||||
- `full_reset_camera(camera_name)`: Full reset for specific camera
|
||||
- `reinitialize_failed_camera(camera_name)`: Reinitialize failed camera
|
||||
|
||||
### State Management
|
||||
All recovery operations automatically update the camera status in the state manager:
|
||||
- Success: Status set to "connected"
|
||||
- Failure: Status set to appropriate error state with error message
|
||||
|
||||
## Error Handling
|
||||
|
||||
The system includes comprehensive error handling:
|
||||
- SDK exceptions are caught and logged
|
||||
- State manager is updated with error information
|
||||
- Proper HTTP status codes are returned
|
||||
- Detailed error messages are provided
|
||||
|
||||
## Testing
|
||||
|
||||
Use the provided test files:
|
||||
- `api-tests.http`: Manual API testing with VS Code REST Client
|
||||
- `test_camera_recovery_api.py`: Automated testing script
|
||||
|
||||
## Safety Features
|
||||
|
||||
- Recording is automatically stopped before recovery operations
|
||||
- Camera resources are properly cleaned up
|
||||
- Thread-safe operations with proper locking
|
||||
- Graceful error handling prevents system crashes
|
||||
|
||||
## Common Use Cases
|
||||
|
||||
1. **Camera Lost Connection**: Use reconnect
|
||||
2. **Camera Won't Capture**: Use restart-grab
|
||||
3. **Camera Initialization Failed**: Use reinitialize
|
||||
4. **Persistent Issues**: Use full-reset
|
||||
5. **Timing Problems**: Use reset-timestamp
|
||||
|
||||
This recovery system provides robust tools to handle most camera-related issues without requiring system restart or manual intervention.
|
||||
187
API Documentations/docs/guides/MQTT_LOGGING_GUIDE.md
Normal file
@@ -0,0 +1,187 @@
|
||||
# MQTT Console Logging & API Guide
|
||||
|
||||
## 🎯 Overview
|
||||
|
||||
Your USDA Vision Camera System now has **enhanced MQTT console logging** and **comprehensive API endpoints** for monitoring machine status via MQTT.
|
||||
|
||||
## ✨ What's New
|
||||
|
||||
### 1. **Enhanced Console Logging**
|
||||
- **Colorful emoji-based console output** for all MQTT events
|
||||
- **Real-time visibility** of MQTT connections, subscriptions, and messages
|
||||
- **Clear status indicators** for debugging and monitoring
|
||||
|
||||
### 2. **New MQTT Status API Endpoint**
|
||||
- **GET /mqtt/status** - Detailed MQTT client statistics
|
||||
- **Message counts, error tracking, uptime monitoring**
|
||||
- **Real-time connection status and broker information**
|
||||
|
||||
### 3. **Existing Machine Status APIs** (already available)
|
||||
- **GET /machines** - All machine states from MQTT
|
||||
- **GET /system/status** - Overall system status including MQTT
|
||||
|
||||
## 🖥️ Console Logging Examples
|
||||
|
||||
When you run the system, you'll see:
|
||||
|
||||
```bash
|
||||
🔗 MQTT CONNECTED: 192.168.1.110:1883
|
||||
📋 MQTT SUBSCRIBED: vibratory_conveyor → vision/vibratory_conveyor/state
|
||||
📋 MQTT SUBSCRIBED: blower_separator → vision/blower_separator/state
|
||||
📡 MQTT MESSAGE: vibratory_conveyor → on
|
||||
📡 MQTT MESSAGE: blower_separator → off
|
||||
⚠️ MQTT DISCONNECTED: Unexpected disconnection (code: 1)
|
||||
🔗 MQTT CONNECTED: 192.168.1.110:1883
|
||||
```
|
||||
|
||||
## 🌐 API Endpoints
|
||||
|
||||
### MQTT Status
|
||||
```http
|
||||
GET http://localhost:8000/mqtt/status
|
||||
```
|
||||
|
||||
**Response:**
|
||||
```json
|
||||
{
|
||||
"connected": true,
|
||||
"broker_host": "192.168.1.110",
|
||||
"broker_port": 1883,
|
||||
"subscribed_topics": [
|
||||
"vision/vibratory_conveyor/state",
|
||||
"vision/blower_separator/state"
|
||||
],
|
||||
"last_message_time": "2025-07-28T12:00:00",
|
||||
"message_count": 42,
|
||||
"error_count": 0,
|
||||
"uptime_seconds": 3600.5
|
||||
}
|
||||
```
|
||||
|
||||
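
A dashboard can poll this endpoint and raise a warning when the broker connection drops or errors accumulate. The sketch below is illustrative only; the field names come from the response above, while the thresholds and function name are assumptions.

```typescript
// Hypothetical health check built on GET /mqtt/status.
interface MqttStatus {
  connected: boolean;
  message_count: number;
  error_count: number;
  last_message_time: string | null;
}

async function checkMqttHealth(apiBaseUrl: string): Promise<string[]> {
  const res = await fetch(`${apiBaseUrl}/mqtt/status`);
  if (!res.ok) return [`MQTT status endpoint unavailable (HTTP ${res.status})`];

  const status: MqttStatus = await res.json();
  const warnings: string[] = [];
  if (!status.connected) warnings.push("MQTT broker disconnected");
  if (status.error_count > 0) warnings.push(`${status.error_count} MQTT errors reported`);
  return warnings; // empty array means healthy
}
```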
### Machine Status
|
||||
```http
|
||||
GET http://localhost:8000/machines
|
||||
```
|
||||
|
||||
**Response:**
|
||||
```json
|
||||
{
|
||||
"vibratory_conveyor": {
|
||||
"name": "vibratory_conveyor",
|
||||
"state": "on",
|
||||
"last_updated": "2025-07-28T12:00:00",
|
||||
"last_message": "on",
|
||||
"mqtt_topic": "vision/vibratory_conveyor/state"
|
||||
},
|
||||
"blower_separator": {
|
||||
"name": "blower_separator",
|
||||
"state": "off",
|
||||
"last_updated": "2025-07-28T12:00:00",
|
||||
"last_message": "off",
|
||||
"mqtt_topic": "vision/blower_separator/state"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### System Status
|
||||
```http
|
||||
GET http://localhost:8000/system/status
|
||||
```
|
||||
|
||||
**Response:**
|
||||
```json
|
||||
{
|
||||
"system_started": true,
|
||||
"mqtt_connected": true,
|
||||
"last_mqtt_message": "2025-07-28T12:00:00",
|
||||
"machines": { ... },
|
||||
"cameras": { ... },
|
||||
"active_recordings": 0,
|
||||
"total_recordings": 5,
|
||||
"uptime_seconds": 3600.5
|
||||
}
|
||||
```
|
||||
|
||||
## 🚀 How to Use
|
||||
|
||||
### 1. **Start the Full System**
|
||||
```bash
|
||||
python main.py
|
||||
```
|
||||
You'll see enhanced console logging for all MQTT events.
|
||||
|
||||
### 2. **Test MQTT Demo (MQTT only)**
|
||||
```bash
|
||||
python demo_mqtt_console.py
|
||||
```
|
||||
Shows just the MQTT client with enhanced logging.
|
||||
|
||||
### 3. **Test API Endpoints**
|
||||
```bash
|
||||
python test_mqtt_logging.py
|
||||
```
|
||||
Tests all the API endpoints and shows expected responses.
|
||||
|
||||
### 4. **Query APIs Directly**
|
||||
```bash
|
||||
# Check MQTT status
|
||||
curl http://localhost:8000/mqtt/status
|
||||
|
||||
# Check machine states
|
||||
curl http://localhost:8000/machines
|
||||
|
||||
# Check overall system status
|
||||
curl http://localhost:8000/system/status
|
||||
```
|
||||
|
||||
## 🔧 Configuration
|
||||
|
||||
The MQTT settings are in `config.json`:
|
||||
|
||||
```json
|
||||
{
|
||||
"mqtt": {
|
||||
"broker_host": "192.168.1.110",
|
||||
"broker_port": 1883,
|
||||
"username": null,
|
||||
"password": null,
|
||||
"topics": {
|
||||
"vibratory_conveyor": "vision/vibratory_conveyor/state",
|
||||
"blower_separator": "vision/blower_separator/state"
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
## 🎨 Console Output Features
|
||||
|
||||
- **🔗 Connection Events**: Green for successful connections
|
||||
- **📋 Subscriptions**: Blue for topic subscriptions
|
||||
- **📡 Messages**: Real-time message display with machine name and payload
|
||||
- **⚠️ Warnings**: Yellow for unexpected disconnections
|
||||
- **❌ Errors**: Red for connection failures and errors
|
||||
- **❓ Unknown Topics**: Purple for unrecognized MQTT topics
|
||||
|
||||
## 📊 Monitoring & Debugging
|
||||
|
||||
### Real-time Monitoring
|
||||
- **Console**: Watch live MQTT events as they happen
|
||||
- **API**: Query `/mqtt/status` for statistics and health
|
||||
- **Logs**: Check `usda_vision_system.log` for detailed logs
|
||||
|
||||
### Troubleshooting
|
||||
1. **No MQTT messages?** Check broker connectivity and topic configuration
|
||||
2. **Connection issues?** Verify broker host/port in config.json
|
||||
3. **API not responding?** Ensure the system is running with `python main.py`
|
||||
|
||||
## 🎯 Use Cases
|
||||
|
||||
1. **Development**: See MQTT messages in real-time while developing
|
||||
2. **Debugging**: Identify connection issues and message patterns
|
||||
3. **Monitoring**: Use APIs to build dashboards or monitoring tools
|
||||
4. **Integration**: Query machine states from external applications
|
||||
5. **Maintenance**: Track MQTT statistics and error rates
|
||||
|
||||
---
|
||||
|
||||
**🎉 Your MQTT monitoring is now fully enhanced with both console logging and comprehensive APIs!**
|
||||
240
API Documentations/docs/guides/STREAMING_GUIDE.md
Normal file
@@ -0,0 +1,240 @@
|
||||
# 🎥 USDA Vision Camera Live Streaming Guide
|
||||
|
||||
This guide explains how to use the new live preview streaming functionality that allows you to view camera feeds in real-time without blocking recording operations.
|
||||
|
||||
## 🌟 Key Features
|
||||
|
||||
- **Non-blocking streaming**: Live preview doesn't interfere with recording
|
||||
- **Separate camera connections**: Streaming uses independent camera instances
|
||||
- **MJPEG streaming**: Standard web-compatible video streaming
|
||||
- **Multiple concurrent viewers**: Multiple browsers can view the same stream
|
||||
- **REST API control**: Start/stop streaming via API endpoints
|
||||
- **Web interface**: Ready-to-use HTML interface for live preview
|
||||
|
||||
## 🏗️ Architecture
|
||||
|
||||
The streaming system creates separate camera connections for preview that are independent from recording:
|
||||
|
||||
```
|
||||
Camera Hardware
|
||||
├── Recording Connection (CameraRecorder)
|
||||
│ ├── Used for video file recording
|
||||
│ ├── Triggered by MQTT machine states
|
||||
│ └── High quality, full FPS
|
||||
└── Streaming Connection (CameraStreamer)
|
||||
├── Used for live preview
|
||||
├── Controlled via API endpoints
|
||||
└── Optimized for web viewing (lower FPS, JPEG compression)
|
||||
```
|
||||
|
||||
## 🚀 Quick Start
|
||||
|
||||
### 1. Start the System
|
||||
```bash
|
||||
python main.py
|
||||
```
|
||||
|
||||
### 2. Open the Web Interface
|
||||
Open `camera_preview.html` in your browser and click "Start Stream" for any camera.
|
||||
|
||||
### 3. API Usage
|
||||
```bash
|
||||
# Start streaming for camera1
|
||||
curl -X POST http://localhost:8000/cameras/camera1/start-stream
|
||||
|
||||
# View live stream (open in browser)
|
||||
http://localhost:8000/cameras/camera1/stream
|
||||
|
||||
# Stop streaming
|
||||
curl -X POST http://localhost:8000/cameras/camera1/stop-stream
|
||||
```
|
||||
|
||||
## 📡 API Endpoints
|
||||
|
||||
### Start Streaming
|
||||
```http
|
||||
POST /cameras/{camera_name}/start-stream
|
||||
```
|
||||
**Response:**
|
||||
```json
|
||||
{
|
||||
"success": true,
|
||||
"message": "Started streaming for camera camera1"
|
||||
}
|
||||
```
|
||||
|
||||
### Stop Streaming
|
||||
```http
|
||||
POST /cameras/{camera_name}/stop-stream
|
||||
```
|
||||
**Response:**
|
||||
```json
|
||||
{
|
||||
"success": true,
|
||||
"message": "Stopped streaming for camera camera1"
|
||||
}
|
||||
```
|
||||
|
||||
### Live Stream (MJPEG)
|
||||
```http
|
||||
GET /cameras/{camera_name}/stream
|
||||
```
|
||||
**Response:** Multipart MJPEG stream
|
||||
**Content-Type:** `multipart/x-mixed-replace; boundary=frame`
|
||||
|
||||
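
Because this is standard MJPEG, a React client can render it with a plain `<img>` element pointed at the stream URL once the start endpoint has been called. The sketch below is a minimal illustration; the component and prop names are assumptions, while the endpoints come from this guide.

```tsx
import { useState } from "react";

// Hypothetical live-preview component using the start/stop-stream endpoints and MJPEG URL above.
const LivePreview = ({ cameraName, apiBaseUrl = "http://localhost:8000" }: { cameraName: string; apiBaseUrl?: string }) => {
  const [streaming, setStreaming] = useState(false);

  const start = async () => {
    const res = await fetch(`${apiBaseUrl}/cameras/${cameraName}/start-stream`, { method: "POST" });
    if (res.ok) setStreaming(true);
  };

  const stop = async () => {
    await fetch(`${apiBaseUrl}/cameras/${cameraName}/stop-stream`, { method: "POST" });
    setStreaming(false);
  };

  return (
    <div>
      <button onClick={streaming ? stop : start}>{streaming ? "Stop Stream" : "Start Stream"}</button>
      {/* The browser renders the multipart MJPEG response directly in the img element */}
      {streaming && <img src={`${apiBaseUrl}/cameras/${cameraName}/stream`} alt={`${cameraName} live preview`} />}
    </div>
  );
};
```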
## 🌐 Web Interface Usage
|
||||
|
||||
The included `camera_preview.html` provides a complete web interface:
|
||||
|
||||
1. **Camera Grid**: Shows all configured cameras
|
||||
2. **Stream Controls**: Start/Stop/Refresh buttons for each camera
|
||||
3. **Live Preview**: Real-time video feed display
|
||||
4. **Status Information**: System and camera status
|
||||
5. **Responsive Design**: Works on desktop and mobile
|
||||
|
||||
### Features:
|
||||
- ✅ Real-time camera status
|
||||
- ✅ One-click stream start/stop
|
||||
- ✅ Automatic stream refresh
|
||||
- ✅ System health monitoring
|
||||
- ✅ Error handling and status messages
|
||||
|
||||
## 🔧 Technical Details
|
||||
|
||||
### Camera Streamer Configuration
|
||||
- **Preview FPS**: 10 FPS (configurable)
|
||||
- **JPEG Quality**: 70% (configurable)
|
||||
- **Frame Buffer**: 5 frames (prevents memory buildup)
|
||||
- **Timeout**: 200ms per frame capture
|
||||
|
||||
### Memory Management
|
||||
- Automatic frame buffer cleanup
|
||||
- Queue-based frame management
|
||||
- Proper camera resource cleanup on stop
|
||||
|
||||
### Thread Safety
|
||||
- Thread-safe streaming operations
|
||||
- Independent from recording threads
|
||||
- Proper synchronization with locks
|
||||
|
||||
## 🧪 Testing
|
||||
|
||||
### Run the Test Script
|
||||
```bash
|
||||
python test_streaming.py
|
||||
```
|
||||
|
||||
This will test:
|
||||
- ✅ API endpoint functionality
|
||||
- ✅ Stream start/stop operations
|
||||
- ✅ Concurrent recording and streaming
|
||||
- ✅ Error handling
|
||||
|
||||
### Manual Testing
|
||||
1. Start the system: `python main.py`
|
||||
2. Open `camera_preview.html` in browser
|
||||
3. Start streaming for a camera
|
||||
4. Trigger recording via MQTT or manual API
|
||||
5. Verify both work simultaneously
|
||||
|
||||
## 🔄 Concurrent Operations
|
||||
|
||||
The system supports these concurrent operations:
|
||||
|
||||
| Operation | Recording | Streaming | Notes |
|-----------|-----------|-----------|-------|
| Recording Only | ✅ | ❌ | Normal operation |
| Streaming Only | ❌ | ✅ | Preview without recording |
| Both Concurrent | ✅ | ✅ | **Independent connections** |
|
||||
### Example: Concurrent Usage
|
||||
```bash
|
||||
# Start streaming
|
||||
curl -X POST http://localhost:8000/cameras/camera1/start-stream
|
||||
|
||||
# Start recording (while streaming continues)
|
||||
curl -X POST http://localhost:8000/cameras/camera1/start-recording \
|
||||
-H "Content-Type: application/json" \
|
||||
-d '{"filename": "test_recording.avi"}'
|
||||
|
||||
# Both operations run independently!
|
||||
```
|
||||
|
||||
## 🛠️ Configuration
|
||||
|
||||
### Stream Settings (in CameraStreamer)
|
||||
```python
|
||||
self.preview_fps = 10.0 # Lower FPS for preview
|
||||
self.preview_quality = 70 # JPEG quality (1-100)
|
||||
self._frame_queue.maxsize = 5 # Frame buffer size
|
||||
```
|
||||
|
||||
### Camera Settings
|
||||
The streamer uses the same camera configuration as recording:
|
||||
- Exposure time from `camera_config.exposure_ms`
|
||||
- Gain from `camera_config.gain`
|
||||
- Optimized trigger mode for continuous streaming
|
||||
|
||||
## 🚨 Important Notes
|
||||
|
||||
### Camera Access Patterns
|
||||
- **Recording**: Blocks camera during active recording
|
||||
- **Streaming**: Uses separate connection, doesn't block
|
||||
- **Health Checks**: Brief, non-blocking camera tests
|
||||
- **Multiple Streams**: Multiple browsers can view same stream
|
||||
|
||||
### Performance Considerations
|
||||
- Streaming uses additional CPU/memory resources
|
||||
- Lower preview FPS reduces system load
|
||||
- JPEG compression reduces bandwidth usage
|
||||
- Frame queue prevents memory buildup
|
||||
|
||||
### Error Handling
|
||||
- Automatic camera resource cleanup
|
||||
- Graceful handling of camera disconnections
|
||||
- Stream auto-restart capabilities
|
||||
- Detailed error logging
|
||||
|
||||
## 🔍 Troubleshooting
|
||||
|
||||
### Stream Not Starting
|
||||
1. Check camera availability: `GET /cameras`
|
||||
2. Verify camera not in error state
|
||||
3. Check system logs for camera initialization errors
|
||||
4. Try camera reconnection: `POST /cameras/{name}/reconnect`
|
||||
|
||||
### Poor Stream Quality
|
||||
1. Adjust `preview_quality` setting (higher = better quality)
|
||||
2. Increase `preview_fps` for smoother video
|
||||
3. Check network bandwidth
|
||||
4. Verify camera exposure/gain settings
|
||||
|
||||
### Browser Issues
|
||||
1. Try different browser (Chrome/Firefox recommended)
|
||||
2. Check browser console for JavaScript errors
|
||||
3. Verify CORS settings in API server
|
||||
4. Clear browser cache and refresh
|
||||
|
||||
## 📈 Future Enhancements
|
||||
|
||||
Potential improvements for the streaming system:
|
||||
|
||||
- 🔄 WebRTC support for lower latency
|
||||
- 📱 Mobile app integration
|
||||
- 🎛️ Real-time camera setting adjustments
|
||||
- 📊 Stream analytics and monitoring
|
||||
- 🔐 Authentication and access control
|
||||
- 🌐 Multi-camera synchronized viewing
|
||||
|
||||
## 📞 Support
|
||||
|
||||
For issues with streaming functionality:
|
||||
|
||||
1. Check the system logs: `usda_vision_system.log`
|
||||
2. Run the test script: `python test_streaming.py`
|
||||
3. Verify API health: `http://localhost:8000/health`
|
||||
4. Check camera status: `http://localhost:8000/cameras`
|
||||
|
||||
---
|
||||
|
||||
**✅ Live streaming is now ready for production use!**
|
||||
146
API Documentations/docs/legacy/01README.md
Normal file
@@ -0,0 +1,146 @@
|
||||
# GigE Camera Image Capture
|
||||
|
||||
This project provides simple Python scripts to connect to a GigE camera and capture images using the provided SDK.
|
||||
|
||||
## Files Overview
|
||||
|
||||
### Demo Files (provided with camera)
|
||||
- `python demo/mvsdk.py` - Main SDK wrapper library
|
||||
- `python demo/grab.py` - Basic image capture example
|
||||
- `python demo/cv_grab.py` - OpenCV-based continuous capture
|
||||
- `python demo/cv_grab_callback.py` - Callback-based capture
|
||||
- `python demo/readme.txt` - Original demo documentation
|
||||
|
||||
### Custom Scripts
|
||||
- `camera_capture.py` - Standalone script to capture 10 images with 200ms intervals
|
||||
- `test.ipynb` - Jupyter notebook with the same functionality
|
||||
- `images/` - Directory where captured images are saved
|
||||
|
||||
## Features
|
||||
|
||||
- **Automatic camera detection** - Finds and connects to available GigE cameras
|
||||
- **Configurable capture** - Currently set to capture 10 images with 200ms intervals
|
||||
- **Both mono and color support** - Automatically detects camera type
|
||||
- **Timestamped filenames** - Images saved with date/time stamps
|
||||
- **Error handling** - Robust error handling for camera operations
|
||||
- **Cross-platform** - Works on Windows and Linux (with appropriate image flipping)
|
||||
|
||||
## Requirements
|
||||
|
||||
- Python 3.x
|
||||
- OpenCV (`cv2`)
|
||||
- NumPy
|
||||
- Matplotlib (for Jupyter notebook display)
|
||||
- GigE camera SDK (MVSDK) - included in `python demo/` directory
|
||||
|
||||
## Usage
|
||||
|
||||
### Option 1: Standalone Script
|
||||
|
||||
Run the standalone Python script:
|
||||
|
||||
```bash
|
||||
python camera_capture.py
|
||||
```
|
||||
|
||||
This will:
|
||||
1. Initialize the camera SDK
|
||||
2. Detect available cameras
|
||||
3. Connect to the first camera found
|
||||
4. Configure camera settings (manual exposure, continuous mode)
|
||||
5. Capture 10 images with 200ms intervals
|
||||
6. Save images to the `images/` directory
|
||||
7. Clean up and close the camera
|
||||
|
||||
### Option 2: Jupyter Notebook
|
||||
|
||||
Open and run the `test.ipynb` notebook:
|
||||
|
||||
```bash
|
||||
jupyter notebook test.ipynb
|
||||
```
|
||||
|
||||
The notebook provides the same functionality but with:
|
||||
- Step-by-step execution
|
||||
- Detailed explanations
|
||||
- Visual display of the last captured image
|
||||
- Better error reporting
|
||||
|
||||
## Camera Configuration
|
||||
|
||||
The scripts are configured with the following default settings:
|
||||
|
||||
- **Trigger Mode**: Continuous capture (mode 0)
|
||||
- **Exposure**: Manual, 30ms
|
||||
- **Output Format**:
|
||||
- Monochrome cameras: MONO8
|
||||
- Color cameras: BGR8
|
||||
- **Image Processing**: Automatic ISP processing from RAW to RGB/MONO
|
||||
|
||||
## Output
|
||||
|
||||
Images are saved in the `images/` directory with the following naming convention:
|
||||
```
|
||||
image_XX_YYYYMMDD_HHMMSS_mmm.jpg
|
||||
```
|
||||
|
||||
Where:
|
||||
- `XX` = Image number (01-10)
|
||||
- `YYYYMMDD_HHMMSS_mmm` = Timestamp with milliseconds
|
||||
|
||||
Example: `image_01_20250722_140530_123.jpg`
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
### Common Issues
|
||||
|
||||
1. **"No camera was found!"**
|
||||
- Check camera connection (Ethernet cable)
|
||||
- Verify camera power
|
||||
- Check network settings (camera and PC should be on same subnet)
|
||||
- Ensure camera drivers are installed
|
||||
|
||||
2. **"CameraInit Failed"**
|
||||
- Camera might be in use by another application
|
||||
- Check camera permissions
|
||||
- Try restarting the camera or PC
|
||||
|
||||
3. **"Failed to capture image"**
|
||||
- Check camera settings
|
||||
- Verify sufficient lighting
|
||||
- Check exposure settings
|
||||
|
||||
4. **Images appear upside down**
|
||||
- This is handled automatically on Windows
|
||||
- Linux users may need to adjust the flip settings
|
||||
|
||||
### Network Configuration
|
||||
|
||||
For GigE cameras, ensure:
|
||||
- Camera and PC are on the same network segment
|
||||
- PC network adapter supports Jumbo frames (recommended)
|
||||
- Firewall allows camera communication
|
||||
- Sufficient network bandwidth
|
||||
|
||||
## Customization
|
||||
|
||||
You can modify the scripts to:
|
||||
|
||||
- **Change capture count**: Modify the range in the capture loop
|
||||
- **Adjust timing**: Change the `time.sleep(0.2)` value
|
||||
- **Modify exposure**: Change the exposure time parameter
|
||||
- **Change output format**: Modify file format and quality settings
|
||||
- **Add image processing**: Insert processing steps before saving
|
||||
|
||||
## SDK Reference
|
||||
|
||||
The camera SDK (`mvsdk.py`) provides extensive functionality:
|
||||
|
||||
- Camera enumeration and initialization
|
||||
- Image capture and processing
|
||||
- Parameter configuration (exposure, gain, etc.)
|
||||
- Trigger modes and timing
|
||||
- Image format conversion
|
||||
- Error handling
|
||||
|
||||
Refer to the original SDK documentation for advanced features.
|
||||
184
API Documentations/docs/legacy/IMPLEMENTATION_SUMMARY.md
Normal file
@@ -0,0 +1,184 @@
|
||||
# USDA Vision Camera System - Implementation Summary
|
||||
|
||||
## 🎉 Project Completed Successfully!
|
||||
|
||||
The USDA Vision Camera System has been fully implemented and tested. All components are working correctly and the system is ready for deployment.
|
||||
|
||||
## ✅ What Was Built
|
||||
|
||||
### Core Architecture
|
||||
- **Modular Design**: Clean separation of concerns across multiple modules
|
||||
- **Multi-threading**: Concurrent MQTT listening, camera monitoring, and recording
|
||||
- **Event-driven**: Thread-safe communication between components
|
||||
- **Configuration-driven**: JSON-based configuration system
|
||||
|
||||
### Key Components
|
||||
|
||||
1. **MQTT Integration** (`usda_vision_system/mqtt/`)
|
||||
- Listens to two machine topics: `vision/vibratory_conveyor/state` and `vision/blower_separator/state`
|
||||
- Thread-safe message handling with automatic reconnection
|
||||
- State normalization (on/off/error)
|
||||
|
||||
2. **Camera Management** (`usda_vision_system/camera/`)
|
||||
- Automatic GigE camera discovery using python demo library
|
||||
- Periodic status monitoring (every 2 seconds)
|
||||
- Camera initialization and configuration management
|
||||
- **Discovered Cameras**:
|
||||
- Blower-Yield-Cam (192.168.1.165)
|
||||
- Cracker-Cam (192.168.1.167)
|
||||
|
||||
3. **Video Recording** (`usda_vision_system/camera/recorder.py`)
|
||||
- Automatic recording start/stop based on machine states
|
||||
- Timestamp-based file naming: `camera1_recording_20250726_143022.avi`
|
||||
- Configurable FPS, exposure, and gain settings
|
||||
- Thread-safe recording with proper cleanup
|
||||
|
||||
4. **Storage Management** (`usda_vision_system/storage/`)
|
||||
- Organized file storage under `./storage/camera1/` and `./storage/camera2/`
|
||||
- File indexing and metadata tracking
|
||||
- Automatic cleanup of old files
|
||||
- Storage statistics and integrity checking
|
||||
|
||||
5. **REST API Server** (`usda_vision_system/api/`)
|
||||
- FastAPI server on port 8000
|
||||
- Real-time WebSocket updates
|
||||
- Manual recording control endpoints
|
||||
- System status and monitoring endpoints
|
||||
|
||||
6. **Comprehensive Logging** (`usda_vision_system/core/logging_config.py`)
|
||||
- Colored console output
|
||||
- Rotating log files
|
||||
- Component-specific log levels
|
||||
- Performance monitoring and error tracking
|
||||
|
||||
## 🚀 How to Use
|
||||
|
||||
### Quick Start
|
||||
```bash
|
||||
# Run system tests
|
||||
python test_system.py
|
||||
|
||||
# Start the system
|
||||
python main.py
|
||||
|
||||
# Or use the startup script
|
||||
./start_system.sh
|
||||
```
|
||||
|
||||
### Configuration
|
||||
Edit `config.json` to customize:
|
||||
- MQTT broker settings
|
||||
- Camera configurations
|
||||
- Storage paths
|
||||
- System parameters
|
||||
|
||||
### API Access
|
||||
- System status: `http://localhost:8000/system/status`
|
||||
- Camera status: `http://localhost:8000/cameras`
|
||||
- Manual recording: `POST http://localhost:8000/cameras/camera1/start-recording`
|
||||
- Real-time updates: WebSocket at `ws://localhost:8000/ws`
|
||||
|
||||
## 📊 Test Results
|
||||
|
||||
All system tests passed successfully:
|
||||
- ✅ Module imports
|
||||
- ✅ Configuration loading
|
||||
- ✅ Camera discovery (found 2 cameras)
|
||||
- ✅ Storage setup
|
||||
- ✅ MQTT configuration
|
||||
- ✅ System initialization
|
||||
- ✅ API endpoints
|
||||
|
||||
## 🔧 System Behavior
|
||||
|
||||
### Automatic Recording Flow
|
||||
1. **Machine turns ON** → MQTT message received → Recording starts automatically
|
||||
2. **Machine turns OFF** → MQTT message received → Recording stops and saves file
|
||||
3. **Files saved** with timestamp: `camera1_recording_YYYYMMDD_HHMMSS.avi`
|
||||
|
||||
### Manual Control
|
||||
- Start/stop recording via API calls
|
||||
- Monitor system status in real-time
|
||||
- Check camera availability on demand
|
||||
|
||||
### Dashboard Integration
|
||||
The system is designed to integrate with your React + Vite + Tailwind + Supabase dashboard:
|
||||
- REST API for status queries
|
||||
- WebSocket for real-time updates
|
||||
- JSON responses for easy frontend consumption
|
||||
|
||||
## 📁 Project Structure
|
||||
|
||||
```
|
||||
usda_vision_system/
|
||||
├── core/ # Configuration, state management, events, logging
|
||||
├── mqtt/ # MQTT client and message handlers
|
||||
├── camera/ # Camera management, monitoring, recording
|
||||
├── storage/ # File organization and management
|
||||
├── api/ # FastAPI server and WebSocket support
|
||||
└── main.py # Application coordinator
|
||||
|
||||
Supporting Files:
|
||||
├── main.py # Entry point script
|
||||
├── config.json # System configuration
|
||||
├── test_system.py # Test suite
|
||||
├── start_system.sh # Startup script
|
||||
└── README_SYSTEM.md # Comprehensive documentation
|
||||
```
|
||||
|
||||
## 🎯 Key Features Delivered
|
||||
|
||||
- ✅ **Dual MQTT topic listening** for two machines
|
||||
- ✅ **Automatic camera recording** triggered by machine states
|
||||
- ✅ **GigE camera support** using python demo library
|
||||
- ✅ **Thread-safe multi-tasking** (MQTT + camera monitoring + recording)
|
||||
- ✅ **Timestamp-based file naming** in organized directories
|
||||
- ✅ **2-second camera status monitoring** with on-demand checks
|
||||
- ✅ **REST API and WebSocket** for dashboard integration
|
||||
- ✅ **Comprehensive logging** with error tracking
|
||||
- ✅ **Configuration management** via JSON
|
||||
- ✅ **Storage management** with cleanup capabilities
|
||||
- ✅ **Graceful startup/shutdown** with signal handling
|
||||
|
||||
## 🔮 Ready for Dashboard Integration
|
||||
|
||||
The system provides everything needed for your React dashboard:
|
||||
|
||||
```javascript
|
||||
// Example API usage
|
||||
const systemStatus = await fetch('http://localhost:8000/system/status');
|
||||
const cameras = await fetch('http://localhost:8000/cameras');
|
||||
|
||||
// WebSocket for real-time updates
|
||||
const ws = new WebSocket('ws://localhost:8000/ws');
|
||||
ws.onmessage = (event) => {
|
||||
const update = JSON.parse(event.data);
|
||||
// Handle real-time system updates
|
||||
};
|
||||
|
||||
// Manual recording control
|
||||
await fetch('http://localhost:8000/cameras/camera1/start-recording', {
|
||||
method: 'POST',
|
||||
headers: { 'Content-Type': 'application/json' },
|
||||
body: JSON.stringify({ camera_name: 'camera1' })
|
||||
});
|
||||
```
|
||||
|
||||
## 🎊 Next Steps
|
||||
|
||||
The system is production-ready! You can now:
|
||||
|
||||
1. **Deploy** the system on your target hardware
|
||||
2. **Integrate** with your existing React dashboard
|
||||
3. **Configure** MQTT topics and camera settings as needed
|
||||
4. **Monitor** system performance through logs and API endpoints
|
||||
5. **Extend** functionality as requirements evolve
|
||||
|
||||
The modular architecture makes it easy to add new features, cameras, or MQTT topics in the future.
|
||||
|
||||
---
|
||||
|
||||
**System Status**: ✅ **FULLY OPERATIONAL**
|
||||
**Test Results**: ✅ **ALL TESTS PASSING**
|
||||
**Cameras Detected**: ✅ **2 GIGE CAMERAS READY**
|
||||
**Ready for Production**: ✅ **YES**
|
||||
1
API Documentations/docs/legacy/README.md
Normal file
@@ -0,0 +1 @@
|
||||
# USDA-Vision-Cameras
|
||||
249
API Documentations/docs/legacy/README_SYSTEM.md
Normal file
@@ -0,0 +1,249 @@
|
||||
# USDA Vision Camera System
|
||||
|
||||
A comprehensive system for monitoring machines via MQTT and automatically recording video from GigE cameras when machines are active.
|
||||
|
||||
## Overview
|
||||
|
||||
This system integrates MQTT machine monitoring with automated video recording from GigE cameras. When a machine turns on (detected via MQTT), the system automatically starts recording from the associated camera. When the machine turns off, recording stops and the video is saved with a timestamp.
|
||||
|
||||
## Features
|
||||
|
||||
- **MQTT Integration**: Listens to multiple machine state topics
|
||||
- **Automatic Recording**: Starts/stops recording based on machine states
|
||||
- **GigE Camera Support**: Uses the python demo library (mvsdk) for camera control
|
||||
- **Multi-threading**: Concurrent MQTT listening, camera monitoring, and recording
|
||||
- **REST API**: FastAPI server for dashboard integration
|
||||
- **WebSocket Support**: Real-time status updates
|
||||
- **Storage Management**: Organized file storage with cleanup capabilities
|
||||
- **Comprehensive Logging**: Detailed logging with rotation and error tracking
|
||||
- **Configuration Management**: JSON-based configuration system
|
||||
|
||||
## Architecture
|
||||
|
||||
```
|
||||
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
|
||||
│ MQTT Broker │ │ GigE Camera │ │ Dashboard │
|
||||
│ │ │ │ │ (React) │
|
||||
└─────────┬───────┘ └─────────┬───────┘ └─────────┬───────┘
|
||||
│ │ │
|
||||
│ Machine States │ Video Streams │ API Calls
|
||||
│ │ │
|
||||
┌─────────▼──────────────────────▼──────────────────────▼───────┐
|
||||
│ USDA Vision Camera System │
|
||||
├───────────────────────────────────────────────────────────────┤
|
||||
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
|
||||
│ │ MQTT Client │ │ Camera │ │ API Server │ │
|
||||
│ │ │ │ Manager │ │ │ │
|
||||
│ └─────────────┘ └─────────────┘ └─────────────┘ │
|
||||
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
|
||||
│ │ State │ │ Storage │ │ Event │ │
|
||||
│ │ Manager │ │ Manager │ │ System │ │
|
||||
│ └─────────────┘ └─────────────┘ └─────────────┘ │
|
||||
└───────────────────────────────────────────────────────────────┘
|
||||
```
|
||||
|
||||
## Installation
|
||||
|
||||
1. **Prerequisites**:
|
||||
- Python 3.11+
|
||||
- GigE cameras with python demo library
|
||||
- MQTT broker (e.g., Mosquitto)
|
||||
- uv package manager (recommended)
|
||||
|
||||
2. **Install Dependencies**:
|
||||
```bash
|
||||
uv sync
|
||||
```
|
||||
|
||||
3. **Setup Storage Directory**:
|
||||
```bash
|
||||
sudo mkdir -p /storage
|
||||
sudo chown $USER:$USER /storage
|
||||
```
|
||||
|
||||
## Configuration
|
||||
|
||||
Edit `config.json` to configure your system:
|
||||
|
||||
```json
|
||||
{
|
||||
"mqtt": {
|
||||
"broker_host": "192.168.1.110",
|
||||
"broker_port": 1883,
|
||||
"topics": {
|
||||
"vibratory_conveyor": "vision/vibratory_conveyor/state",
|
||||
"blower_separator": "vision/blower_separator/state"
|
||||
}
|
||||
},
|
||||
"cameras": [
|
||||
{
|
||||
"name": "camera1",
|
||||
"machine_topic": "vibratory_conveyor",
|
||||
"storage_path": "/storage/camera1",
|
||||
"exposure_ms": 1.0,
|
||||
"gain": 3.5,
|
||||
"target_fps": 3.0,
|
||||
"enabled": true
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
## Usage
|
||||
|
||||
### Basic Usage
|
||||
|
||||
1. **Start the System**:
|
||||
```bash
|
||||
python main.py
|
||||
```
|
||||
|
||||
2. **With Custom Config**:
|
||||
```bash
|
||||
python main.py --config my_config.json
|
||||
```
|
||||
|
||||
3. **Debug Mode**:
|
||||
```bash
|
||||
python main.py --log-level DEBUG
|
||||
```
|
||||
|
||||
### API Endpoints
|
||||
|
||||
The system provides a REST API on port 8000:
|
||||
|
||||
- `GET /system/status` - Overall system status
|
||||
- `GET /cameras` - All camera statuses
|
||||
- `GET /machines` - All machine states
|
||||
- `POST /cameras/{name}/start-recording` - Manual recording start
|
||||
- `POST /cameras/{name}/stop-recording` - Manual recording stop
|
||||
- `GET /storage/stats` - Storage statistics
|
||||
- `WebSocket /ws` - Real-time updates
|
||||
|
||||
### Dashboard Integration
|
||||
|
||||
The system is designed to integrate with your existing React + Vite + Tailwind + Supabase dashboard:
|
||||
|
||||
1. **API Integration**: Use the REST endpoints to display system status
|
||||
2. **WebSocket**: Connect to `/ws` for real-time updates
|
||||
3. **Supabase Storage**: Store recording metadata and system logs
|
||||
|
||||
## File Organization
|
||||
|
||||
```
|
||||
/storage/
|
||||
├── camera1/
|
||||
│ ├── camera1_recording_20250726_143022.avi
|
||||
│ └── camera1_recording_20250726_143155.avi
|
||||
├── camera2/
|
||||
│ ├── camera2_recording_20250726_143025.avi
|
||||
│ └── camera2_recording_20250726_143158.avi
|
||||
└── file_index.json
|
||||
```
|
||||
|
||||
## Monitoring and Logging
|
||||
|
||||
### Log Files
|
||||
|
||||
- `usda_vision_system.log` - Main system log (rotated)
|
||||
- Console output with colored formatting
|
||||
- Component-specific log levels
|
||||
|
||||
### Performance Monitoring
|
||||
|
||||
The system includes built-in performance monitoring:
|
||||
- Startup times
|
||||
- Recording session metrics
|
||||
- MQTT message processing rates
|
||||
- Camera status check intervals
|
||||
|
||||
### Error Tracking
|
||||
|
||||
Comprehensive error tracking with:
|
||||
- Error counts per component
|
||||
- Detailed error context
|
||||
- Automatic recovery attempts
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
### Common Issues
|
||||
|
||||
1. **Camera Not Found**:
|
||||
- Check camera connections
|
||||
- Verify python demo library installation
|
||||
- Run camera discovery: Check logs for enumeration results
|
||||
|
||||
2. **MQTT Connection Failed**:
|
||||
- Verify broker IP and port
|
||||
- Check network connectivity
|
||||
- Verify credentials if authentication is enabled
|
||||
|
||||
3. **Recording Fails**:
|
||||
- Check storage permissions
|
||||
- Verify available disk space
|
||||
- Check camera initialization logs
|
||||
|
||||
4. **API Server Won't Start**:
|
||||
- Check if port 8000 is available
|
||||
- Verify FastAPI dependencies
|
||||
- Check firewall settings
|
||||
|
||||
### Debug Commands
|
||||
|
||||
```bash
|
||||
# Check system status
|
||||
curl http://localhost:8000/system/status
|
||||
|
||||
# Check camera status
|
||||
curl http://localhost:8000/cameras
|
||||
|
||||
# Manual recording start
|
||||
curl -X POST http://localhost:8000/cameras/camera1/start-recording \
|
||||
-H "Content-Type: application/json" \
|
||||
-d '{"camera_name": "camera1"}'
|
||||
```
|
||||
|
||||
## Development
|
||||
|
||||
### Project Structure
|
||||
|
||||
```
|
||||
usda_vision_system/
|
||||
├── core/ # Core functionality
|
||||
├── mqtt/ # MQTT client and handlers
|
||||
├── camera/ # Camera management and recording
|
||||
├── storage/ # File management
|
||||
├── api/ # FastAPI server
|
||||
└── main.py # Application coordinator
|
||||
```
|
||||
|
||||
### Adding New Features
|
||||
|
||||
1. **New Camera Type**: Extend `camera/recorder.py`
|
||||
2. **New MQTT Topics**: Update `config.json` and `mqtt/handlers.py`
|
||||
3. **New API Endpoints**: Add to `api/server.py`
|
||||
4. **New Events**: Define in `core/events.py`
|
||||
|
||||
### Testing

```bash
# Run basic system test
python -c "from usda_vision_system import USDAVisionSystem; s = USDAVisionSystem(); print('OK')"

# Test MQTT connection
python -c "from usda_vision_system.mqtt.client import MQTTClient; # ... test code"

# Test camera discovery
python -c "import sys; sys.path.append('python demo'); import mvsdk; print(len(mvsdk.CameraEnumerateDevice()))"
```
## License

This project is developed for USDA research purposes.

## Support

For issues and questions:
1. Check the logs in `usda_vision_system.log`
2. Review the troubleshooting section
3. Check API status at `http://localhost:8000/health`
190
API Documentations/docs/legacy/TIMEZONE_SETUP_SUMMARY.md
Normal file
@@ -0,0 +1,190 @@
# Time Synchronization Setup - Atlanta, Georgia

## ✅ Time Synchronization Complete!

The USDA Vision Camera System has been configured for proper time synchronization with Atlanta, Georgia (Eastern Time Zone).

## 🕐 What Was Implemented

### System-Level Time Configuration
- **Timezone**: Set to `America/New_York` (Eastern Time)
- **Current Status**: Eastern Daylight Time (EDT, UTC-4)
- **NTP Sync**: Configured with multiple reliable time servers
- **Hardware Clock**: Synchronized with system time

### Application-Level Timezone Support
- **Timezone-Aware Timestamps**: All recordings use Atlanta time
- **Automatic DST Handling**: Switches between EST/EDT automatically
- **Time Sync Monitoring**: Built-in time synchronization checking
- **Consistent Formatting**: Standardized timestamp formats throughout

## 🔧 Key Features

### 1. Automatic Time Synchronization
```bash
# NTP servers configured:
- time.nist.gov (NIST atomic clock)
- pool.ntp.org (NTP pool)
- time.google.com (Google time)
- time.cloudflare.com (Cloudflare time)
```

### 2. Timezone-Aware Recording Filenames
```
Example: camera1_recording_20250725_213241.avi
Format:  {camera}_{type}_{YYYYMMDD_HHMMSS}.avi
Time:    Atlanta local time (EDT/EST)
```
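A minimal sketch of how such a filename can be generated with the standard library, assuming Python 3.9+ (`zoneinfo`); the helper actually used by the system is not shown in this document.

```python
# Sketch only: build an Atlanta-local recording filename (assumes Python 3.9+).
from datetime import datetime
from zoneinfo import ZoneInfo

ATLANTA = ZoneInfo("America/New_York")


def recording_filename(camera_name: str, recording_type: str = "recording") -> str:
    """Return e.g. 'camera1_recording_20250725_213241.avi' in Atlanta local time."""
    stamp = datetime.now(ATLANTA).strftime("%Y%m%d_%H%M%S")
    return f"{camera_name}_{recording_type}_{stamp}.avi"


print(recording_filename("camera1"))
```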
### 3. Time Verification Tools
- **Startup Check**: Automatic time sync verification on system start
- **Manual Check**: `python check_time.py` for on-demand verification
- **API Integration**: Time sync status available via REST API

### 4. Comprehensive Logging
```
=== TIME SYNCHRONIZATION STATUS ===
System time: 2025-07-25 21:32:41 EDT
Timezone: EDT (-0400)
Daylight Saving: Yes
Sync status: synchronized
Time difference: 0.10 seconds
=====================================
```

## 🚀 Usage

### Automatic Operation
The system automatically:
- Uses Atlanta time for all timestamps
- Handles daylight saving time transitions
- Monitors time synchronization status
- Logs time-related events

### Manual Verification
```bash
# Check time synchronization
python check_time.py

# Test timezone functions
python test_timezone.py

# View system time status
timedatectl status
```

### API Endpoints
```bash
# System status includes time info
curl http://localhost:8000/system/status

# Example response includes:
{
  "system_started": true,
  "uptime_seconds": 3600,
  "timestamp": "2025-07-25T21:32:41-04:00"
}
```

## 📊 Current Status

### Time Synchronization
- ✅ **System Timezone**: America/New_York (EDT)
- ✅ **NTP Sync**: Active and synchronized
- ✅ **Time Accuracy**: Within 0.1 seconds of atomic time
- ✅ **DST Support**: Automatic EST/EDT switching

### Application Integration
- ✅ **Recording Timestamps**: Atlanta time zone
- ✅ **Log Timestamps**: Timezone-aware logging
- ✅ **API Responses**: ISO format with timezone
- ✅ **File Naming**: Consistent Atlanta time format

### Monitoring
- ✅ **Startup Verification**: Time sync checked on boot
- ✅ **Continuous Monitoring**: Built-in sync status tracking
- ✅ **Error Detection**: Alerts for time drift issues
- ✅ **Manual Tools**: On-demand verification scripts

## 🔍 Technical Details

### Timezone Configuration
```json
{
  "system": {
    "timezone": "America/New_York"
  }
}
```

### Time Sources
1. **Primary**: NIST atomic clock (time.nist.gov)
2. **Secondary**: NTP pool servers (pool.ntp.org)
3. **Backup**: Google/Cloudflare time servers
4. **Fallback**: Local system clock

### File Naming Convention
```
Pattern:  {camera_name}_recording_{YYYYMMDD_HHMMSS}.avi
Example:  camera1_recording_20250725_213241.avi
Timezone: Always Atlanta local time (EST/EDT)
```

## 🎯 Benefits

### For Operations
- **Consistent Timestamps**: All recordings use Atlanta time
- **Easy Correlation**: Timestamps match local business hours
- **Automatic DST**: No manual timezone adjustments needed
- **Reliable Sync**: Multiple time sources ensure accuracy

### For Analysis
- **Local Time Context**: Recordings timestamped in business timezone
- **Accurate Sequencing**: Precise timing for event correlation
- **Standard Format**: Consistent naming across all recordings
- **Audit Trail**: Complete time synchronization logging

### For Integration
- **Dashboard Ready**: Timezone-aware API responses
- **Database Compatible**: ISO format timestamps with timezone
- **Log Analysis**: Structured time information in logs
- **Monitoring**: Built-in time sync health checks

## 🔧 Maintenance

### Regular Checks
The system automatically:
- Verifies time sync on startup
- Logs time synchronization status
- Monitors for time drift
- Alerts on sync failures

### Manual Maintenance
```bash
# Force time sync
sudo systemctl restart systemd-timesyncd

# Check NTP status
timedatectl show-timesync --all

# Verify timezone
timedatectl status
```

## 📈 Next Steps

The time synchronization is now fully operational. The system will:

1. **Automatically maintain** accurate Atlanta time
2. **Generate timestamped recordings** with local time
3. **Monitor sync status** and alert on issues
4. **Provide timezone-aware** API responses for dashboard integration

All recording files will now have accurate Atlanta timestamps, making it easy to correlate with local business operations and machine schedules.

---

**Time Sync Status**: ✅ **SYNCHRONIZED**
**Timezone**: ✅ **America/New_York (EDT)**
**Accuracy**: ✅ **±0.1 seconds**
**Ready for Production**: ✅ **YES**
191
API Documentations/docs/legacy/VIDEO_RECORDER_README.md
Normal file
@@ -0,0 +1,191 @@
# Camera Video Recorder

A Python script for recording videos from GigE cameras using the provided SDK with custom exposure and gain settings.

## Features

- **List all available cameras** - Automatically detects and displays all connected cameras
- **Custom camera settings** - Set exposure time to 1ms and gain to 3.5x (or custom values)
- **Video recording** - Record videos in AVI format with timestamp filenames
- **Live preview** - Test camera functionality with live preview mode
- **Interactive menu** - User-friendly menu system for all operations
- **Automatic cleanup** - Proper resource management and cleanup

## Requirements

- Python 3.x
- OpenCV (`cv2`)
- NumPy
- Camera SDK (mvsdk) - included in `python demo` directory
- GigE camera connected to the system

## Installation

1. Ensure your GigE camera is connected and properly configured
2. Make sure the `python demo` directory with `mvsdk.py` is present
3. Install required Python packages:
   ```bash
   pip install opencv-python numpy
   ```

## Usage

### Basic Usage

Run the script:
```bash
python camera_video_recorder.py
```

The script will:
1. Display a welcome message and feature overview
2. List all available cameras
3. Let you select a camera (if multiple are available)
4. Allow you to set custom exposure and gain values
5. Present an interactive menu with options

### Menu Options

1. **Start Recording** - Begin video recording with timestamp filename
2. **List Camera Info** - Display detailed camera information
3. **Test Camera (Live Preview)** - View live camera feed without recording
4. **Exit** - Clean up and exit the program

### Default Settings

- **Exposure Time**: 1.0ms (1000 microseconds)
- **Gain**: 3.5x
- **Video Format**: AVI with XVID codec
- **Frame Rate**: 30 FPS
- **Output Directory**: `videos/` (created automatically)
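The video output itself is plain OpenCV. The sketch below opens a writer matching these defaults (XVID, 30 FPS, timestamped filename in `videos/`); the frame size is a placeholder and the frame source is omitted, so this is an illustration rather than a copy of the script's code.

```python
# Sketch of a video writer matching the defaults above (XVID, 30 FPS,
# timestamped filename). Frame size is a placeholder; the real script uses
# the size reported by the camera.
import os
from datetime import datetime

import cv2

os.makedirs("videos", exist_ok=True)
filename = os.path.join("videos", f"camera_recording_{datetime.now():%Y%m%d_%H%M%S}.avi")

fourcc = cv2.VideoWriter_fourcc(*"XVID")
writer = cv2.VideoWriter(filename, fourcc, 30.0, (1920, 1080))  # (width, height)

# frames would come from the camera SDK in the real script:
# writer.write(frame)

writer.release()
print(f"Writer opened and closed: {filename}")
```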
### Recording Controls

- **Start Recording**: Select option 1 from the menu
- **Stop Recording**: Press 'q' in the preview window
- **Video Files**: Saved as `videos/camera_recording_YYYYMMDD_HHMMSS.avi`

## File Structure

```
camera_video_recorder.py      # Main script
python demo/
    mvsdk.py                  # Camera SDK wrapper
    (other demo files)
videos/                       # Output directory (created automatically)
    camera_recording_*.avi    # Recorded video files
```

## Script Features

### CameraVideoRecorder Class

- `list_cameras()` - Enumerate and display available cameras
- `initialize_camera()` - Set up camera with custom exposure and gain
- `start_recording()` - Initialize video writer and begin recording
- `stop_recording()` - Stop recording and save video file
- `record_loop()` - Main recording loop with live preview
- `cleanup()` - Proper resource cleanup

### Key Functions

- **Camera Detection**: Automatically finds all connected GigE cameras
- **Settings Validation**: Checks and clamps exposure/gain values to camera limits
- **Frame Processing**: Handles both monochrome and color cameras
- **Windows Compatibility**: Handles frame flipping for Windows systems
- **Error Handling**: Comprehensive error handling and user feedback

## Example Output

```
Camera Video Recorder
====================
This script allows you to:
- List all available cameras
- Record videos with custom exposure (1ms) and gain (3.5x) settings
- Save videos with timestamps
- Stop recording anytime with 'q' key

Found 1 camera(s):
  0: GigE Camera Model (GigE) - SN: 12345678

Using camera: GigE Camera Model

Camera Settings:
Enter exposure time in ms (default 1.0): 1.0
Enter gain value (default 3.5): 3.5

Initializing camera with:
  - Exposure: 1.0ms
  - Gain: 3.5x

Camera type: Color
Set exposure time: 1000.0μs
Set analog gain: 3.50x (range: 1.00 - 16.00)
Camera started successfully

==================================================
Camera Video Recorder Menu
==================================================
1. Start Recording
2. List Camera Info
3. Test Camera (Live Preview)
4. Exit

Select option (1-4): 1

Started recording to: videos/camera_recording_20241223_143022.avi
Frame size: (1920, 1080), FPS: 30.0
Press 'q' to stop recording...
Recording... Press 'q' in the preview window to stop

Recording stopped!
  Saved: videos/camera_recording_20241223_143022.avi
  Frames recorded: 450
  Duration: 15.2 seconds
  Average FPS: 29.6
```

## Troubleshooting

### Common Issues

1. **"No cameras found!"**
   - Check camera connection
   - Verify camera power
   - Ensure network configuration for GigE cameras

2. **"SDK initialization failed"**
   - Verify `python demo/mvsdk.py` exists
   - Check camera drivers are installed

3. **"Camera initialization failed"**
   - Camera may be in use by another application
   - Try disconnecting and reconnecting the camera

4. **Recording issues**
   - Ensure sufficient disk space
   - Check write permissions in the output directory

### Performance Tips

- Close other applications using the camera
- Ensure adequate system resources (CPU, RAM)
- Use SSD storage for better write performance
- Adjust frame rate if experiencing dropped frames

## Customization

You can modify the script to:
- Change video codec (currently XVID)
- Adjust target frame rate
- Modify output filename format
- Add additional camera settings
- Change preview window size

## Notes

- Videos are saved in the `videos/` directory with timestamp filenames
- The script handles both monochrome and color cameras automatically
- Frame flipping is handled automatically for Windows systems
- All resources are properly cleaned up on exit
@@ -1,10 +1,16 @@
### USDA Vision Camera Streaming API
### Base URL: http://localhost:8000
###
###
### CONFIGURATION:
### - Production: http://vision:8000 (requires hostname setup)
### - Development: http://localhost:8000
### - Custom: Update @baseUrl below to match your setup
###
### This file contains streaming-specific API endpoints for live camera preview
### Use with VS Code REST Client extension or similar tools.

@baseUrl = http://localhost:8000
# Base URL - Update to match your configuration
@baseUrl = http://vision:8000
# Alternative: @baseUrl = http://localhost:8000

### =============================================================================
### STREAMING ENDPOINTS (NEW FUNCTIONALITY)
@@ -298,3 +304,221 @@ Content-Type: application/json
# - JPEG quality set to 70% (configurable)
# - Each stream uses additional CPU/memory
# - Multiple concurrent streams may impact performance

### =============================================================================
### CAMERA CONFIGURATION ENDPOINTS (NEW)
### =============================================================================

### Get camera configuration
GET {{baseUrl}}/cameras/camera1/config

### Expected Response:
# {
#   "name": "camera1",
#   "machine_topic": "vibratory_conveyor",
#   "storage_path": "/storage/camera1",
#   "enabled": true,
#   "exposure_ms": 1.0,
#   "gain": 3.5,
#   "target_fps": 0,
#   "sharpness": 120,
#   "contrast": 110,
#   "saturation": 100,
#   "gamma": 100,
#   "noise_filter_enabled": true,
#   "denoise_3d_enabled": false,
#   "auto_white_balance": true,
#   "color_temperature_preset": 0,
#   "anti_flicker_enabled": true,
#   "light_frequency": 1,
#   "bit_depth": 8,
#   "hdr_enabled": false,
#   "hdr_gain_mode": 0
# }

###

### Update basic camera settings (real-time, no restart required)
PUT {{baseUrl}}/cameras/camera1/config
Content-Type: application/json

{
  "exposure_ms": 2.0,
  "gain": 4.0,
  "target_fps": 10.0
}

###

### Update image quality settings
PUT {{baseUrl}}/cameras/camera1/config
Content-Type: application/json

{
  "sharpness": 150,
  "contrast": 120,
  "saturation": 110,
  "gamma": 90
}

###

### Update advanced settings
PUT {{baseUrl}}/cameras/camera1/config
Content-Type: application/json

{
  "anti_flicker_enabled": true,
  "light_frequency": 1,
  "auto_white_balance": false,
  "color_temperature_preset": 2
}

###

### Enable HDR mode
PUT {{baseUrl}}/cameras/camera1/config
Content-Type: application/json

{
  "hdr_enabled": true,
  "hdr_gain_mode": 1
}

###

### Update noise reduction settings (requires restart)
PUT {{baseUrl}}/cameras/camera1/config
Content-Type: application/json

{
  "noise_filter_enabled": false,
  "denoise_3d_enabled": true
}

###

### Apply configuration (restart camera with new settings)
POST {{baseUrl}}/cameras/camera1/apply-config

### Expected Response:
# {
#   "success": true,
#   "message": "Configuration applied to camera camera1"
# }

###

### Get camera2 configuration
GET {{baseUrl}}/cameras/camera2/config

###

### Update camera2 for outdoor lighting
PUT {{baseUrl}}/cameras/camera2/config
Content-Type: application/json

{
  "exposure_ms": 0.5,
  "gain": 2.0,
  "sharpness": 130,
  "contrast": 115,
  "anti_flicker_enabled": true,
  "light_frequency": 1
}
### =============================================================================
### CONFIGURATION TESTING SCENARIOS
### =============================================================================

### Scenario 1: Low light optimization
PUT {{baseUrl}}/cameras/camera1/config
Content-Type: application/json

{
  "exposure_ms": 5.0,
  "gain": 8.0,
  "noise_filter_enabled": true,
  "denoise_3d_enabled": true
}

###

### Scenario 2: High speed capture
PUT {{baseUrl}}/cameras/camera1/config
Content-Type: application/json

{
  "exposure_ms": 0.2,
  "gain": 1.0,
  "target_fps": 30.0,
  "sharpness": 180
}

###

### Scenario 3: Color accuracy for food inspection
PUT {{baseUrl}}/cameras/camera1/config
Content-Type: application/json

{
  "auto_white_balance": false,
  "color_temperature_preset": 1,
  "saturation": 120,
  "contrast": 105,
  "gamma": 95
}

###

### Scenario 4: HDR for high contrast scenes
PUT {{baseUrl}}/cameras/camera1/config
Content-Type: application/json

{
  "hdr_enabled": true,
  "hdr_gain_mode": 2,
  "exposure_ms": 1.0,
  "gain": 3.0
}

### =============================================================================
### ERROR TESTING FOR CONFIGURATION
### =============================================================================

### Test invalid camera name
GET {{baseUrl}}/cameras/invalid_camera/config

###

### Test invalid exposure range
PUT {{baseUrl}}/cameras/camera1/config
Content-Type: application/json

{
  "exposure_ms": 2000.0
}

### Expected: HTTP 422 validation error

###

### Test invalid gain range
PUT {{baseUrl}}/cameras/camera1/config
Content-Type: application/json

{
  "gain": 50.0
}

### Expected: HTTP 422 validation error

###

### Test empty configuration update
PUT {{baseUrl}}/cameras/camera1/config
Content-Type: application/json

{}

### Expected: HTTP 400 "No configuration updates provided"
80
API Documentations/test_frame_conversion.py
Normal file
@@ -0,0 +1,80 @@
#!/usr/bin/env python3
"""
Test script to verify the frame conversion fix works correctly.
"""

import sys
import os
import numpy as np

# Add the current directory to Python path
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))

# Add camera SDK to path
sys.path.append(os.path.join(os.path.dirname(__file__), "camera_sdk"))

try:
    import mvsdk
    print("✅ mvsdk imported successfully")
except ImportError as e:
    print(f"❌ Failed to import mvsdk: {e}")
    sys.exit(1)


def test_frame_conversion():
    """Test the frame conversion logic"""
    print("🧪 Testing frame conversion logic...")

    # Simulate frame data
    width, height = 640, 480
    frame_size = width * height * 3  # RGB

    # Create mock frame data
    mock_frame_data = np.random.randint(0, 255, frame_size, dtype=np.uint8)

    # Create a mock frame buffer (simulate memory address)
    frame_buffer = mock_frame_data.ctypes.data

    # Create mock FrameHead
    class MockFrameHead:
        def __init__(self):
            self.iWidth = width
            self.iHeight = height
            self.uBytes = frame_size

    frame_head = MockFrameHead()

    try:
        # Test the conversion logic (similar to what's in streamer.py)
        frame_data_buffer = (mvsdk.c_ubyte * frame_head.uBytes).from_address(frame_buffer)
        frame_data = np.frombuffer(frame_data_buffer, dtype=np.uint8)
        frame = frame_data.reshape((frame_head.iHeight, frame_head.iWidth, 3))

        print("✅ Frame conversion successful!")
        print(f"   Frame shape: {frame.shape}")
        print(f"   Frame dtype: {frame.dtype}")
        print(f"   Frame size: {frame.nbytes} bytes")

        return True

    except Exception as e:
        print(f"❌ Frame conversion failed: {e}")
        return False


def main():
    print("🔧 Frame Conversion Test")
    print("=" * 40)

    success = test_frame_conversion()

    if success:
        print("\n✅ Frame conversion fix is working correctly!")
        print("📋 The streaming issue should be resolved after system restart.")
    else:
        print("\n❌ Frame conversion fix needs more work.")

    print("\n💡 To apply the fix:")
    print("1. Restart the USDA vision system")
    print("2. Test streaming again")


if __name__ == "__main__":
    main()
@@ -149,7 +149,8 @@ GET http://localhost:8000/cameras
#     "serial_number": "ABC123"
#   },
#   "current_recording_file": null,
#   "recording_start_time": null
#   "recording_start_time": null,
#   "auto_record_on_machine_start": false
#   }
# }
162
docs/AUTO_RECORDING_SETUP.md
Normal file
@@ -0,0 +1,162 @@
# 🤖 Auto-Recording Setup Guide

This guide explains how to set up and test the automatic recording functionality that triggers camera recording when machines turn on/off via MQTT.

## 📋 Overview

The auto-recording feature allows cameras to automatically start recording when their associated machine turns on and stop recording when the machine turns off. This is based on MQTT messages received from the machines.

## 🔧 Setup Steps

### 1. Configure Camera Auto-Recording

1. **Access Vision System**: Navigate to the Vision System page in the dashboard
2. **Open Camera Configuration**: Click "Configure Camera" on any camera (admin access required)
3. **Enable Auto-Recording**: In the "Auto-Recording" section, check the box "Automatically start recording when machine turns on"
4. **Save Configuration**: Click "Save Changes" to apply the setting

### 2. Machine-Camera Mapping

The system uses the `machine_topic` field in camera configuration to determine which MQTT topic to monitor:

- **Camera 1** (`camera1`) → monitors `blower_separator`
- **Camera 2** (`camera2`) → monitors `vibratory_conveyor`

### 3. Start Auto-Recording Manager

1. **Navigate to Vision System**: Go to the Vision System page
2. **Find Auto-Recording Section**: Look for the "Auto-Recording" panel (admin only)
3. **Start Monitoring**: Click the "Start" button to begin monitoring MQTT events
4. **Monitor Status**: The panel will show the current state of all cameras and their auto-recording status

## 🧪 Testing the Functionality

### Test Scenario 1: Manual MQTT Message Simulation

If you have access to the MQTT broker, you can test by sending messages:

```bash
# Turn on the vibratory conveyor (should start recording on camera2)
mosquitto_pub -h 192.168.1.110 -t "vision/vibratory_conveyor/state" -m "on"

# Turn off the vibratory conveyor (should stop recording on camera2)
mosquitto_pub -h 192.168.1.110 -t "vision/vibratory_conveyor/state" -m "off"

# Turn on the blower separator (should start recording on camera1)
mosquitto_pub -h 192.168.1.110 -t "vision/blower_separator/state" -m "on"

# Turn off the blower separator (should stop recording on camera1)
mosquitto_pub -h 192.168.1.110 -t "vision/blower_separator/state" -m "off"
```

### Test Scenario 2: Physical Machine Operation

1. **Enable Auto-Recording**: Ensure auto-recording is enabled for the desired cameras
2. **Start Auto-Recording Manager**: Make sure the auto-recording manager is running
3. **Operate Machine**: Turn on the physical machine (conveyor or blower)
4. **Verify Recording**: Check that the camera starts recording automatically
5. **Stop Machine**: Turn off the machine
6. **Verify Stop**: Check that recording stops automatically

## 📊 Monitoring and Verification

### Auto-Recording Status Panel

The Vision System page includes an "Auto-Recording" status panel that shows:

- **Manager Status**: Whether the auto-recording manager is active
- **Camera States**: For each camera:
  - Machine state (ON/OFF)
  - Recording status (YES/NO)
  - Auto-record enabled status
  - Last state change timestamp

### MQTT Events Panel

Monitor the MQTT Events section to see:

- Recent machine state changes
- MQTT message timestamps
- Message payloads

### Recording Files

Check the storage section for automatically created recording files:

- Files will be named with pattern: `auto_{machine_name}_{timestamp}.avi`
- Example: `auto_vibratory_conveyor_2025-07-29T10-30-45-123Z.avi`

## 🔍 Troubleshooting

### Auto-Recording Not Starting

1. **Check Configuration**: Verify auto-recording is enabled in camera config
2. **Check Manager Status**: Ensure auto-recording manager is running
3. **Check MQTT Connection**: Verify MQTT client is connected
4. **Check Machine Topic**: Ensure camera's machine_topic matches MQTT topic
5. **Check Permissions**: Ensure you have admin access

### Recording Not Stopping

1. **Check MQTT Messages**: Verify "off" messages are being received
2. **Check Manager Logs**: Look for error messages in browser console
3. **Manual Stop**: Use manual stop recording if needed

### Performance Issues

1. **Polling Interval**: The manager polls MQTT events every 2 seconds by default
2. **Event Processing**: Only new events since last poll are processed
3. **Error Handling**: Failed operations are logged but don't stop the manager

## 🔧 Configuration Options

### Camera Configuration Fields

```json
{
  "auto_record_on_machine_start": true,   // Enable/disable auto-recording
  "machine_topic": "vibratory_conveyor",  // MQTT topic to monitor
  // ... other camera settings
}
```
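These fields can also be toggled over the REST API without the dashboard. A minimal sketch, assuming the `requests` package and the configuration endpoint listed later in this guide:

```python
# Sketch: enable auto-recording for camera2 via the configuration endpoint
# (assumes the `requests` package; base URL and field names as documented above).
import requests

BASE_URL = "http://localhost:8000"

resp = requests.put(
    f"{BASE_URL}/cameras/camera2/config",
    json={"auto_record_on_machine_start": True},
    timeout=10,
)
resp.raise_for_status()
print(resp.json())
```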
### Auto-Recording Manager Settings

- **Polling Interval**: 2000ms (configurable in code)
- **Event Batch Size**: 50 events per poll
- **Filename Pattern**: `auto_{machine_name}_{timestamp}.avi`

## 📝 API Endpoints

### Camera Configuration

- `GET /cameras/{camera_name}/config` - Get camera configuration
- `PUT /cameras/{camera_name}/config` - Update camera configuration

### Recording Control

- `POST /cameras/{camera_name}/start-recording` - Start recording
- `POST /cameras/{camera_name}/stop-recording` - Stop recording

### MQTT Monitoring

- `GET /mqtt/events?limit=50` - Get recent MQTT events
- `GET /machines` - Get machine states
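Putting these endpoints together, the sketch below mirrors the polling loop described above (poll `/mqtt/events`, map machine topics to cameras, start or stop recording). It is a simplified illustration in Python, not the dashboard's actual `AutoRecordingManager`, and the event schema (`machine`/`state` fields) is an assumption.

```python
# Simplified illustration of the auto-recording polling loop (not the real
# AutoRecordingManager). Assumes /mqtt/events returns objects with
# "machine" and "state" fields; adjust to the actual schema.
import time

import requests

BASE_URL = "http://localhost:8000"
MACHINE_TO_CAMERA = {"blower_separator": "camera1", "vibratory_conveyor": "camera2"}

while True:
    events = requests.get(f"{BASE_URL}/mqtt/events", params={"limit": 50}, timeout=10).json()
    # A real implementation also tracks the last event it has seen so that
    # only new events since the previous poll are processed.
    for event in events:
        camera = MACHINE_TO_CAMERA.get(event.get("machine"))
        if not camera:
            continue
        if event.get("state") == "on":
            requests.post(f"{BASE_URL}/cameras/{camera}/start-recording", timeout=10)
        elif event.get("state") == "off":
            requests.post(f"{BASE_URL}/cameras/{camera}/stop-recording", timeout=10)
    time.sleep(2)  # matches the 2-second polling interval described above
```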
## 🚨 Important Notes

1. **Admin Access Required**: Auto-recording configuration requires admin privileges
2. **Backend Integration**: This frontend implementation requires corresponding backend support
3. **MQTT Dependency**: Functionality depends on a stable MQTT connection
4. **Storage Space**: Monitor storage usage, as auto-recording can generate many files
5. **Network Reliability**: Ensure a stable network connection for MQTT messages

## 🔄 Future Enhancements

Potential improvements for the auto-recording system:

1. **Recording Schedules**: Time-based recording rules
2. **Storage Management**: Automatic cleanup of old recordings
3. **Alert System**: Notifications for recording failures
4. **Advanced Triggers**: Multiple machine dependencies
5. **Recording Profiles**: Different settings per machine state
162
src/components/AutoRecordingStatus.tsx
Normal file
@@ -0,0 +1,162 @@
|
||||
import { memo, useState, useEffect } from 'react'
|
||||
import { visionApi, type AutoRecordingStatusResponse } from '../lib/visionApi'
|
||||
import { useAuth } from '../hooks/useAuth'
|
||||
|
||||
const AutoRecordingStatus = memo(() => {
|
||||
const { isAdmin } = useAuth()
|
||||
const isAdminUser = isAdmin()
|
||||
const [status, setStatus] = useState<AutoRecordingStatusResponse | null>(null)
|
||||
const [loading, setLoading] = useState(false)
|
||||
const [error, setError] = useState<string | null>(null)
|
||||
|
||||
// Fetch auto-recording status
|
||||
const fetchStatus = async () => {
|
||||
try {
|
||||
setLoading(true)
|
||||
setError(null)
|
||||
const statusData = await visionApi.getAutoRecordingStatus()
|
||||
setStatus(statusData)
|
||||
} catch (err) {
|
||||
const errorMessage = err instanceof Error ? err.message : 'Failed to fetch auto-recording status'
|
||||
setError(errorMessage)
|
||||
console.error('Failed to fetch auto-recording status:', err)
|
||||
} finally {
|
||||
setLoading(false)
|
||||
}
|
||||
}
|
||||
|
||||
// Fetch status on mount and set up polling
|
||||
useEffect(() => {
|
||||
if (!isAdminUser) {
|
||||
return
|
||||
}
|
||||
|
||||
fetchStatus()
|
||||
const interval = setInterval(fetchStatus, 10000) // Poll every 10 seconds
|
||||
return () => clearInterval(interval)
|
||||
}, [isAdminUser])
|
||||
|
||||
// Only show to admins
|
||||
if (!isAdminUser) {
|
||||
return null
|
||||
}
|
||||
|
||||
return (
|
||||
<div className="bg-white shadow rounded-lg">
|
||||
<div className="px-4 py-5 sm:px-6">
|
||||
<div className="flex items-center justify-between">
|
||||
<div>
|
||||
<h3 className="text-lg leading-6 font-medium text-gray-900">Auto-Recording System</h3>
|
||||
<p className="mt-1 max-w-2xl text-sm text-gray-500">
|
||||
Server-side automatic recording based on machine state changes
|
||||
</p>
|
||||
</div>
|
||||
<div className="flex items-center space-x-2">
|
||||
<div className={`inline-flex items-center px-2.5 py-0.5 rounded-full text-xs font-medium ${status?.running ? 'bg-green-100 text-green-800' : 'bg-gray-100 text-gray-800'
|
||||
}`}>
|
||||
{status?.running ? 'Running' : 'Stopped'}
|
||||
</div>
|
||||
<button
|
||||
onClick={fetchStatus}
|
||||
disabled={loading}
|
||||
className="bg-indigo-600 text-white px-3 py-1 rounded-md text-sm hover:bg-indigo-700 focus:outline-none focus:ring-2 focus:ring-indigo-500 disabled:opacity-50"
|
||||
>
|
||||
{loading ? 'Refreshing...' : 'Refresh'}
|
||||
</button>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
{error && (
|
||||
<div className="px-4 py-3 border-t border-gray-200">
|
||||
<div className="bg-red-50 border border-red-200 rounded-md p-3">
|
||||
<p className="text-red-800 text-sm">{error}</p>
|
||||
</div>
|
||||
</div>
|
||||
)}
|
||||
|
||||
{status && (
|
||||
<div className="border-t border-gray-200">
|
||||
<div className="px-4 py-5 sm:px-6">
|
||||
<h4 className="text-md font-medium text-gray-900 mb-3">System Status</h4>
|
||||
<div className="grid grid-cols-1 md:grid-cols-2 gap-4">
|
||||
<div className="space-y-2 text-sm">
|
||||
<div className="flex justify-between">
|
||||
<span className="text-gray-500">System Running:</span>
|
||||
<span className={`font-medium ${status.running ? 'text-green-600' : 'text-red-600'
|
||||
}`}>
|
||||
{status.running ? 'YES' : 'NO'}
|
||||
</span>
|
||||
</div>
|
||||
<div className="flex justify-between">
|
||||
<span className="text-gray-500">Auto-Recording Enabled:</span>
|
||||
<span className={`font-medium ${status.auto_recording_enabled ? 'text-green-600' : 'text-gray-600'
|
||||
}`}>
|
||||
{status.auto_recording_enabled ? 'YES' : 'NO'}
|
||||
</span>
|
||||
</div>
|
||||
<div className="flex justify-between">
|
||||
<span className="text-gray-500">Enabled Cameras:</span>
|
||||
<span className="font-medium text-gray-900">
|
||||
{status.enabled_cameras.length}
|
||||
</span>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<div className="space-y-2 text-sm">
|
||||
<div className="flex justify-between">
|
||||
<span className="text-gray-500">Retry Queue:</span>
|
||||
<span className="font-medium text-gray-900">
|
||||
{Object.keys(status.retry_queue).length} items
|
||||
</span>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
{status.enabled_cameras.length > 0 && (
|
||||
<div className="mt-4">
|
||||
<h5 className="text-sm font-medium text-gray-900 mb-2">Enabled Cameras:</h5>
|
||||
<div className="flex flex-wrap gap-2">
|
||||
{status.enabled_cameras.map((camera) => (
|
||||
<span
|
||||
key={camera}
|
||||
className="inline-flex items-center px-2.5 py-0.5 rounded-full text-xs font-medium bg-green-100 text-green-800"
|
||||
>
|
||||
{camera}
|
||||
</span>
|
||||
))}
|
||||
</div>
|
||||
</div>
|
||||
)}
|
||||
|
||||
{Object.keys(status.retry_queue).length > 0 && (
|
||||
<div className="mt-4">
|
||||
<h5 className="text-sm font-medium text-gray-900 mb-2">Retry Queue:</h5>
|
||||
<div className="space-y-1">
|
||||
{Object.entries(status.retry_queue).map(([camera, retryInfo]) => (
|
||||
<div key={camera} className="text-xs text-gray-600 bg-yellow-50 p-2 rounded">
|
||||
<strong>{camera}:</strong> {JSON.stringify(retryInfo)}
|
||||
</div>
|
||||
))}
|
||||
</div>
|
||||
</div>
|
||||
)}
|
||||
</div>
|
||||
</div>
|
||||
)}
|
||||
|
||||
{!status && !loading && !error && (
|
||||
<div className="border-t border-gray-200 px-4 py-5 sm:px-6">
|
||||
<div className="text-center text-gray-500">
|
||||
<p>Auto-recording status not available</p>
|
||||
<p className="text-sm mt-1">Click "Refresh" to fetch the current status</p>
|
||||
</div>
|
||||
</div>
|
||||
)}
|
||||
</div>
|
||||
)
|
||||
})
|
||||
|
||||
AutoRecordingStatus.displayName = 'AutoRecordingStatus'
|
||||
|
||||
export { AutoRecordingStatus }
|
||||
193
src/components/AutoRecordingTest.tsx
Normal file
@@ -0,0 +1,193 @@
|
||||
/**
|
||||
* Auto-Recording Test Component
|
||||
*
|
||||
* This component provides a testing interface for the auto-recording functionality.
|
||||
* It allows admins to simulate MQTT events and verify auto-recording behavior.
|
||||
*/
|
||||
|
||||
import { useState } from 'react'
|
||||
import { visionApi } from '../lib/visionApi'
|
||||
import { useAuth } from '../hooks/useAuth'
|
||||
|
||||
interface TestEvent {
|
||||
machine: string
|
||||
state: 'on' | 'off'
|
||||
timestamp: Date
|
||||
result?: string
|
||||
}
|
||||
|
||||
export function AutoRecordingTest() {
|
||||
const { isAdmin } = useAuth()
|
||||
const [testEvents, setTestEvents] = useState<TestEvent[]>([])
|
||||
const [isLoading, setIsLoading] = useState(false)
|
||||
|
||||
if (!isAdmin()) {
|
||||
return null
|
||||
}
|
||||
|
||||
const simulateEvent = async (machine: string, state: 'on' | 'off') => {
|
||||
setIsLoading(true)
|
||||
|
||||
const event: TestEvent = {
|
||||
machine,
|
||||
state,
|
||||
timestamp: new Date()
|
||||
}
|
||||
|
||||
try {
|
||||
// Map machines to their corresponding cameras
|
||||
const machineToCamera: Record<string, string> = {
|
||||
'blower_separator': 'camera1', // camera1 is for blower separator
|
||||
'vibratory_conveyor': 'camera2' // camera2 is for conveyor
|
||||
}
|
||||
|
||||
const cameraName = machineToCamera[machine]
|
||||
if (!cameraName) {
|
||||
event.result = `❌ Error: No camera mapped for machine ${machine}`
|
||||
setTestEvents(prev => [event, ...prev.slice(0, 9)])
|
||||
setIsLoading(false)
|
||||
return
|
||||
}
|
||||
|
||||
if (state === 'on') {
|
||||
// Simulate starting recording on the correct camera
|
||||
const result = await visionApi.startRecording(cameraName, {
|
||||
filename: `test_auto_${machine}_${Date.now()}.avi`
|
||||
})
|
||||
event.result = result.success ? `✅ Recording started on ${cameraName}: ${result.filename}` : `❌ Failed: ${result.message}`
|
||||
} else {
|
||||
// Simulate stopping recording on the correct camera
|
||||
const result = await visionApi.stopRecording(cameraName)
|
||||
event.result = result.success ? `⏹️ Recording stopped on ${cameraName} (${result.duration_seconds}s)` : `❌ Failed: ${result.message}`
|
||||
}
|
||||
} catch (error) {
|
||||
event.result = `❌ Error: ${error instanceof Error ? error.message : 'Unknown error'}`
|
||||
}
|
||||
|
||||
setTestEvents(prev => [event, ...prev.slice(0, 9)]) // Keep last 10 events
|
||||
setIsLoading(false)
|
||||
}
|
||||
|
||||
const clearEvents = () => {
|
||||
setTestEvents([])
|
||||
}
|
||||
|
||||
return (
|
||||
<div className="bg-white shadow rounded-lg">
|
||||
<div className="px-4 py-5 sm:px-6">
|
||||
<h3 className="text-lg leading-6 font-medium text-gray-900">Auto-Recording Test</h3>
|
||||
<p className="mt-1 max-w-2xl text-sm text-gray-500">
|
||||
Simulate machine state changes to test auto-recording functionality
|
||||
</p>
|
||||
</div>
|
||||
|
||||
<div className="border-t border-gray-200 px-4 py-5 sm:px-6">
|
||||
<div className="space-y-4">
|
||||
{/* Test Controls */}
|
||||
<div>
|
||||
<h4 className="text-md font-medium text-gray-900 mb-3">Simulate Machine Events</h4>
|
||||
<div className="grid grid-cols-2 md:grid-cols-4 gap-3">
|
||||
<button
|
||||
onClick={() => simulateEvent('vibratory_conveyor', 'on')}
|
||||
disabled={isLoading}
|
||||
className="bg-green-600 text-white px-3 py-2 rounded-md text-sm hover:bg-green-700 disabled:opacity-50 disabled:cursor-not-allowed"
|
||||
>
|
||||
Conveyor ON
|
||||
</button>
|
||||
<button
|
||||
onClick={() => simulateEvent('vibratory_conveyor', 'off')}
|
||||
disabled={isLoading}
|
||||
className="bg-red-600 text-white px-3 py-2 rounded-md text-sm hover:bg-red-700 disabled:opacity-50 disabled:cursor-not-allowed"
|
||||
>
|
||||
Conveyor OFF
|
||||
</button>
|
||||
<button
|
||||
onClick={() => simulateEvent('blower_separator', 'on')}
|
||||
disabled={isLoading}
|
||||
className="bg-green-600 text-white px-3 py-2 rounded-md text-sm hover:bg-green-700 disabled:opacity-50 disabled:cursor-not-allowed"
|
||||
>
|
||||
Blower ON
|
||||
</button>
|
||||
<button
|
||||
onClick={() => simulateEvent('blower_separator', 'off')}
|
||||
disabled={isLoading}
|
||||
className="bg-red-600 text-white px-3 py-2 rounded-md text-sm hover:bg-red-700 disabled:opacity-50 disabled:cursor-not-allowed"
|
||||
>
|
||||
Blower OFF
|
||||
</button>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
{/* Clear Button */}
|
||||
{testEvents.length > 0 && (
|
||||
<div className="flex justify-end">
|
||||
<button
|
||||
onClick={clearEvents}
|
||||
className="bg-gray-600 text-white px-3 py-2 rounded-md text-sm hover:bg-gray-700"
|
||||
>
|
||||
Clear Events
|
||||
</button>
|
||||
</div>
|
||||
)}
|
||||
|
||||
{/* Test Results */}
|
||||
{testEvents.length > 0 && (
|
||||
<div>
|
||||
<h4 className="text-md font-medium text-gray-900 mb-3">Test Results</h4>
|
||||
<div className="space-y-2">
|
||||
{testEvents.map((event, index) => (
|
||||
<div key={index} className="border border-gray-200 rounded-lg p-3">
|
||||
<div className="flex items-center justify-between">
|
||||
<div className="flex items-center space-x-3">
|
||||
<span className="text-sm font-medium text-gray-900">
|
||||
{event.machine.replace(/_/g, ' ')}
|
||||
</span>
|
||||
<span className={`inline-flex items-center px-2 py-0.5 rounded text-xs font-medium ${event.state === 'on'
|
||||
? 'bg-green-100 text-green-800'
|
||||
: 'bg-red-100 text-red-800'
|
||||
}`}>
|
||||
{event.state.toUpperCase()}
|
||||
</span>
|
||||
<span className="text-xs text-gray-500">
|
||||
{event.timestamp.toLocaleTimeString()}
|
||||
</span>
|
||||
</div>
|
||||
</div>
|
||||
{event.result && (
|
||||
<div className="mt-2 text-sm text-gray-700">
|
||||
{event.result}
|
||||
</div>
|
||||
)}
|
||||
</div>
|
||||
))}
|
||||
</div>
|
||||
</div>
|
||||
)}
|
||||
|
||||
{/* Instructions */}
|
||||
<div className="bg-blue-50 border border-blue-200 rounded-md p-4">
|
||||
<h4 className="text-sm font-medium text-blue-900 mb-2">Testing Instructions</h4>
|
||||
<ul className="text-sm text-blue-800 space-y-1">
|
||||
<li>1. Ensure auto-recording is enabled for cameras in their configuration</li>
|
||||
<li>2. Start the auto-recording manager in the Vision System page</li>
|
||||
<li>3. Click the buttons above to simulate machine state changes</li>
|
||||
<li>4. Verify that recordings start/stop automatically</li>
|
||||
<li>5. Check the storage section for auto-generated recording files</li>
|
||||
</ul>
|
||||
</div>
|
||||
|
||||
{/* Expected Behavior */}
|
||||
<div className="bg-gray-50 border border-gray-200 rounded-md p-4">
|
||||
<h4 className="text-sm font-medium text-gray-900 mb-2">Expected Behavior</h4>
|
||||
<div className="text-sm text-gray-700 space-y-1">
|
||||
<div><strong>Conveyor ON:</strong> Camera2 should start recording automatically</div>
|
||||
<div><strong>Conveyor OFF:</strong> Camera2 should stop recording automatically</div>
|
||||
<div><strong>Blower ON:</strong> Camera1 should start recording automatically</div>
|
||||
<div><strong>Blower OFF:</strong> Camera1 should stop recording automatically</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
)
|
||||
}
|
||||
587
src/components/CameraConfigModal.tsx
Normal file
@@ -0,0 +1,587 @@
|
||||
import { useState, useEffect } from 'react'
|
||||
import { visionApi, type CameraConfig, type CameraConfigUpdate } from '../lib/visionApi'
|
||||
|
||||
interface CameraConfigModalProps {
|
||||
cameraName: string
|
||||
isOpen: boolean
|
||||
onClose: () => void
|
||||
onSuccess?: (message: string) => void
|
||||
onError?: (error: string) => void
|
||||
}
|
||||
|
||||
export function CameraConfigModal({ cameraName, isOpen, onClose, onSuccess, onError }: CameraConfigModalProps) {
|
||||
const [config, setConfig] = useState<CameraConfig | null>(null)
|
||||
const [loading, setLoading] = useState(false)
|
||||
const [saving, setSaving] = useState(false)
|
||||
const [applying, setApplying] = useState(false)
|
||||
const [error, setError] = useState<string | null>(null)
|
||||
const [hasChanges, setHasChanges] = useState(false)
|
||||
const [originalConfig, setOriginalConfig] = useState<CameraConfig | null>(null)
|
||||
|
||||
useEffect(() => {
|
||||
if (isOpen && cameraName) {
|
||||
loadConfig()
|
||||
}
|
||||
}, [isOpen, cameraName])
|
||||
|
||||
const loadConfig = async () => {
|
||||
try {
|
||||
setLoading(true)
|
||||
setError(null)
|
||||
const configData = await visionApi.getCameraConfig(cameraName)
|
||||
setConfig(configData)
|
||||
setOriginalConfig(configData)
|
||||
setHasChanges(false)
|
||||
} catch (err) {
|
||||
const errorMessage = err instanceof Error ? err.message : 'Failed to load camera configuration'
|
||||
setError(errorMessage)
|
||||
onError?.(errorMessage)
|
||||
} finally {
|
||||
setLoading(false)
|
||||
}
|
||||
}
|
||||
|
||||
const updateSetting = (key: keyof CameraConfigUpdate, value: number | boolean) => {
|
||||
if (!config) return
|
||||
|
||||
const newConfig = { ...config, [key]: value }
|
||||
setConfig(newConfig)
|
||||
|
||||
// Check if there are changes from original
|
||||
const hasChanges = originalConfig && Object.keys(newConfig).some(k => {
|
||||
const configKey = k as keyof CameraConfig
|
||||
return newConfig[configKey] !== originalConfig[configKey]
|
||||
})
|
||||
setHasChanges(!!hasChanges)
|
||||
}
|
||||
|
||||
const saveConfig = async () => {
|
||||
if (!config || !originalConfig) return
|
||||
|
||||
try {
|
||||
setSaving(true)
|
||||
setError(null)
|
||||
|
||||
// Build update object with only changed values
|
||||
const updates: CameraConfigUpdate = {}
|
||||
const configKeys: (keyof CameraConfigUpdate)[] = [
|
||||
'exposure_ms', 'gain', 'target_fps', 'sharpness', 'contrast', 'saturation',
|
||||
'gamma', 'noise_filter_enabled', 'denoise_3d_enabled', 'auto_white_balance',
|
||||
'color_temperature_preset', 'anti_flicker_enabled', 'light_frequency',
|
||||
'hdr_enabled', 'hdr_gain_mode', 'auto_record_on_machine_start',
|
||||
'auto_start_recording_enabled', 'auto_recording_max_retries', 'auto_recording_retry_delay_seconds'
|
||||
]
|
||||
|
||||
configKeys.forEach(key => {
|
||||
if (config[key] !== originalConfig[key]) {
|
||||
updates[key] = config[key] as any
|
||||
}
|
||||
})
|
||||
|
||||
if (Object.keys(updates).length === 0) {
|
||||
onSuccess?.('No changes to save')
|
||||
return
|
||||
}
|
||||
|
||||
const result = await visionApi.updateCameraConfig(cameraName, updates)
|
||||
|
||||
if (result.success) {
|
||||
setOriginalConfig(config)
|
||||
setHasChanges(false)
|
||||
onSuccess?.(`Configuration updated: ${result.updated_settings.join(', ')}`)
|
||||
} else {
|
||||
throw new Error(result.message)
|
||||
}
|
||||
} catch (err) {
|
||||
const errorMessage = err instanceof Error ? err.message : 'Failed to save configuration'
|
||||
setError(errorMessage)
|
||||
onError?.(errorMessage)
|
||||
} finally {
|
||||
setSaving(false)
|
||||
}
|
||||
}
|
||||
|
||||
const applyConfig = async () => {
|
||||
try {
|
||||
setApplying(true)
|
||||
setError(null)
|
||||
|
||||
const result = await visionApi.applyCameraConfig(cameraName)
|
||||
|
||||
if (result.success) {
|
||||
onSuccess?.('Configuration applied successfully. Camera restarted.')
|
||||
} else {
|
||||
throw new Error(result.message)
|
||||
}
|
||||
} catch (err) {
|
||||
const errorMessage = err instanceof Error ? err.message : 'Failed to apply configuration'
|
||||
setError(errorMessage)
|
||||
onError?.(errorMessage)
|
||||
} finally {
|
||||
setApplying(false)
|
||||
}
|
||||
}
|
||||
|
||||
const resetChanges = () => {
|
||||
if (originalConfig) {
|
||||
setConfig(originalConfig)
|
||||
setHasChanges(false)
|
||||
}
|
||||
}
|
||||
|
||||
if (!isOpen) return null
|
||||
|
||||
return (
|
||||
<div className="fixed inset-0 bg-black bg-opacity-50 flex items-center justify-center z-50">
|
||||
<div className="bg-white rounded-lg shadow-xl max-w-4xl w-full mx-4 max-h-[90vh] overflow-hidden">
|
||||
{/* Header */}
|
||||
<div className="px-6 py-4 border-b border-gray-200">
|
||||
<div className="flex items-center justify-between">
|
||||
<h3 className="text-lg font-medium text-gray-900">
|
||||
Camera Configuration - {cameraName}
|
||||
</h3>
|
||||
<button
|
||||
onClick={onClose}
|
||||
className="text-gray-400 hover:text-gray-600"
|
||||
>
|
||||
<svg className="w-6 h-6" fill="none" stroke="currentColor" viewBox="0 0 24 24">
|
||||
<path strokeLinecap="round" strokeLinejoin="round" strokeWidth={2} d="M6 18L18 6M6 6l12 12" />
|
||||
</svg>
|
||||
</button>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
{/* Content */}
|
||||
<div className="px-6 py-4 overflow-y-auto max-h-[calc(90vh-140px)]">
|
||||
{loading && (
|
||||
<div className="flex items-center justify-center py-8">
|
||||
<div className="animate-spin rounded-full h-8 w-8 border-b-2 border-indigo-600"></div>
|
||||
<span className="ml-2 text-gray-600">Loading configuration...</span>
|
||||
</div>
|
||||
)}
|
||||
|
||||
{error && (
|
||||
<div className="mb-4 p-4 bg-red-50 border border-red-200 rounded-md">
|
||||
<p className="text-red-800">{error}</p>
|
||||
</div>
|
||||
)}
|
||||
|
||||
{config && !loading && (
|
||||
<div className="space-y-6">
|
||||
{/* Basic Settings */}
|
||||
<div>
|
||||
<h4 className="text-md font-medium text-gray-900 mb-4">Basic Settings</h4>
|
||||
<div className="grid grid-cols-1 md:grid-cols-2 gap-4">
|
||||
<div>
|
||||
<label className="block text-sm font-medium text-gray-700 mb-2">
|
||||
Exposure (ms): {config.exposure_ms}
|
||||
</label>
|
||||
<input
|
||||
type="range"
|
||||
min="0.1"
|
||||
max="10"
|
||||
step="0.1"
|
||||
value={config.exposure_ms}
|
||||
onChange={(e) => updateSetting('exposure_ms', parseFloat(e.target.value))}
|
||||
className="w-full"
|
||||
/>
|
||||
<div className="flex justify-between text-xs text-gray-500 mt-1">
|
||||
<span>0.1ms</span>
|
||||
<span>10ms</span>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<div>
|
||||
<label className="block text-sm font-medium text-gray-700 mb-2">
|
||||
Gain: {config.gain}
|
||||
</label>
|
||||
<input
|
||||
type="range"
|
||||
min="0"
|
||||
max="10"
|
||||
step="0.1"
|
||||
value={config.gain}
|
||||
onChange={(e) => updateSetting('gain', parseFloat(e.target.value))}
|
||||
className="w-full"
|
||||
/>
|
||||
<div className="flex justify-between text-xs text-gray-500 mt-1">
|
||||
<span>0</span>
|
||||
<span>10</span>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<div>
|
||||
<label className="block text-sm font-medium text-gray-700 mb-2">
|
||||
Target FPS: {config.target_fps} {config.target_fps === 0 ? '(Maximum)' : ''}
|
||||
</label>
|
||||
<input
|
||||
type="range"
|
||||
min="0"
|
||||
max="30"
|
||||
step="1"
|
||||
value={config.target_fps}
|
||||
onChange={(e) => updateSetting('target_fps', parseInt(e.target.value))}
|
||||
className="w-full"
|
||||
/>
|
||||
<div className="flex justify-between text-xs text-gray-500 mt-1">
|
||||
<span>0 (Max)</span>
|
||||
<span>30</span>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
{/* Image Quality Settings */}
|
||||
<div>
|
||||
<h4 className="text-md font-medium text-gray-900 mb-4">Image Quality</h4>
|
||||
<div className="grid grid-cols-1 md:grid-cols-2 gap-4">
|
||||
<div>
|
||||
<label className="block text-sm font-medium text-gray-700 mb-2">
|
||||
Sharpness: {config.sharpness}
|
||||
</label>
|
||||
<input
|
||||
type="range"
|
||||
min="0"
|
||||
max="200"
|
||||
value={config.sharpness}
|
||||
onChange={(e) => updateSetting('sharpness', parseInt(e.target.value))}
|
||||
className="w-full"
|
||||
/>
|
||||
<div className="flex justify-between text-xs text-gray-500 mt-1">
|
||||
<span>0</span>
|
||||
<span>200</span>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<div>
|
||||
<label className="block text-sm font-medium text-gray-700 mb-2">
|
||||
Contrast: {config.contrast}
|
||||
</label>
|
||||
<input
|
||||
type="range"
|
||||
min="0"
|
||||
max="200"
|
||||
value={config.contrast}
|
||||
onChange={(e) => updateSetting('contrast', parseInt(e.target.value))}
|
||||
className="w-full"
|
||||
/>
|
||||
<div className="flex justify-between text-xs text-gray-500 mt-1">
|
||||
<span>0</span>
|
||||
<span>200</span>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<div>
|
||||
<label className="block text-sm font-medium text-gray-700 mb-2">
|
||||
Saturation: {config.saturation}
|
||||
</label>
|
||||
<input
|
||||
type="range"
|
||||
min="0"
|
||||
max="200"
|
||||
value={config.saturation}
|
||||
onChange={(e) => updateSetting('saturation', parseInt(e.target.value))}
|
||||
className="w-full"
|
||||
/>
|
||||
<div className="flex justify-between text-xs text-gray-500 mt-1">
|
||||
<span>0</span>
|
||||
<span>200</span>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<div>
|
||||
<label className="block text-sm font-medium text-gray-700 mb-2">
|
||||
Gamma: {config.gamma}
|
||||
</label>
|
||||
<input
|
||||
type="range"
|
||||
min="0"
|
||||
max="300"
|
||||
value={config.gamma}
|
||||
onChange={(e) => updateSetting('gamma', parseInt(e.target.value))}
|
||||
className="w-full"
|
||||
/>
|
||||
<div className="flex justify-between text-xs text-gray-500 mt-1">
|
||||
<span>0</span>
|
||||
<span>300</span>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
{/* Color Settings */}
|
||||
<div>
|
||||
<h4 className="text-md font-medium text-gray-900 mb-4">Color Settings</h4>
|
||||
<div className="grid grid-cols-1 md:grid-cols-2 gap-4">
|
||||
<div>
|
||||
<label className="flex items-center space-x-2">
|
||||
<input
|
||||
type="checkbox"
|
||||
checked={config.auto_white_balance}
|
||||
onChange={(e) => updateSetting('auto_white_balance', e.target.checked)}
|
||||
className="rounded border-gray-300 text-indigo-600 focus:ring-indigo-500"
|
||||
/>
|
||||
<span className="text-sm font-medium text-gray-700">Auto White Balance</span>
|
||||
</label>
|
||||
</div>
|
||||
|
||||
<div>
|
||||
<label className="block text-sm font-medium text-gray-700 mb-2">
|
||||
Color Temperature Preset: {config.color_temperature_preset} {config.color_temperature_preset === 0 ? '(Auto)' : ''}
|
||||
</label>
|
||||
<input
|
||||
type="range"
|
||||
min="0"
|
||||
max="10"
|
||||
value={config.color_temperature_preset}
|
||||
onChange={(e) => updateSetting('color_temperature_preset', parseInt(e.target.value))}
|
||||
className="w-full"
|
||||
/>
|
||||
<div className="flex justify-between text-xs text-gray-500 mt-1">
|
||||
<span>0 (Auto)</span>
|
||||
<span>10</span>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
{/* Advanced Settings */}
|
||||
<div>
|
||||
<h4 className="text-md font-medium text-gray-900 mb-4">Advanced Settings</h4>
|
||||
<div className="grid grid-cols-1 md:grid-cols-2 gap-4">
|
||||
<div>
|
||||
<label className="flex items-center space-x-2">
|
||||
<input
|
||||
type="checkbox"
|
||||
checked={config.anti_flicker_enabled}
|
||||
onChange={(e) => updateSetting('anti_flicker_enabled', e.target.checked)}
|
||||
className="rounded border-gray-300 text-indigo-600 focus:ring-indigo-500"
|
||||
/>
|
||||
<span className="text-sm font-medium text-gray-700">Anti-flicker Enabled</span>
|
||||
</label>
|
||||
</div>
|
||||
|
||||
<div>
|
||||
<label className="block text-sm font-medium text-gray-700 mb-2">
|
||||
Light Frequency: {config.light_frequency === 0 ? '50Hz' : '60Hz'}
|
||||
</label>
|
||||
<select
|
||||
value={config.light_frequency}
|
||||
onChange={(e) => updateSetting('light_frequency', parseInt(e.target.value))}
|
||||
className="w-full border-gray-300 rounded-md focus:ring-indigo-500 focus:border-indigo-500"
|
||||
>
|
||||
<option value={0}>50Hz</option>
|
||||
<option value={1}>60Hz</option>
|
||||
</select>
|
||||
</div>
|
||||
|
||||
<div>
|
||||
<label className="flex items-center space-x-2">
|
||||
<input
|
||||
type="checkbox"
|
||||
checked={config.noise_filter_enabled}
|
||||
onChange={(e) => updateSetting('noise_filter_enabled', e.target.checked)}
|
||||
className="rounded border-gray-300 text-indigo-600 focus:ring-indigo-500"
|
||||
/>
|
||||
<span className="text-sm font-medium text-gray-700">Noise Filter Enabled</span>
|
||||
</label>
|
||||
<p className="text-xs text-gray-500 mt-1">Requires restart to apply</p>
|
||||
</div>
|
||||
|
||||
<div>
|
||||
<label className="flex items-center space-x-2">
|
||||
<input
|
||||
type="checkbox"
|
||||
checked={config.denoise_3d_enabled}
|
||||
onChange={(e) => updateSetting('denoise_3d_enabled', e.target.checked)}
|
||||
className="rounded border-gray-300 text-indigo-600 focus:ring-indigo-500"
|
||||
/>
|
||||
<span className="text-sm font-medium text-gray-700">3D Denoise Enabled</span>
|
||||
</label>
|
||||
<p className="text-xs text-gray-500 mt-1">Requires restart to apply</p>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
{/* HDR Settings */}
|
||||
<div>
|
||||
<h4 className="text-md font-medium text-gray-900 mb-4">HDR Settings</h4>
|
||||
<div className="grid grid-cols-1 md:grid-cols-2 gap-4">
|
||||
<div>
|
||||
<label className="flex items-center space-x-2">
|
||||
<input
|
||||
type="checkbox"
|
||||
checked={config.hdr_enabled}
|
||||
onChange={(e) => updateSetting('hdr_enabled', e.target.checked)}
|
||||
className="rounded border-gray-300 text-indigo-600 focus:ring-indigo-500"
|
||||
/>
|
||||
<span className="text-sm font-medium text-gray-700">HDR Enabled</span>
|
||||
</label>
|
||||
</div>
|
||||
|
||||
<div>
|
||||
<label className="block text-sm font-medium text-gray-700 mb-2">
|
||||
HDR Gain Mode: {config.hdr_gain_mode}
|
||||
</label>
|
||||
<input
|
||||
type="range"
|
||||
min="0"
|
||||
max="3"
|
||||
value={config.hdr_gain_mode}
|
||||
onChange={(e) => updateSetting('hdr_gain_mode', parseInt(e.target.value))}
|
||||
className="w-full"
|
||||
disabled={!config.hdr_enabled}
|
||||
/>
|
||||
<div className="flex justify-between text-xs text-gray-500 mt-1">
|
||||
<span>0</span>
|
||||
<span>3</span>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
{/* Auto-Recording Settings */}
|
||||
<div>
|
||||
<h4 className="text-md font-medium text-gray-900 mb-4">Auto-Recording Settings</h4>
|
||||
<div className="grid grid-cols-1 md:grid-cols-2 gap-4">
|
||||
<div>
|
||||
<label className="flex items-center space-x-2">
|
||||
<input
|
||||
type="checkbox"
|
||||
checked={config.auto_record_on_machine_start}
|
||||
onChange={(e) => updateSetting('auto_record_on_machine_start', e.target.checked)}
|
||||
className="rounded border-gray-300 text-indigo-600 focus:ring-indigo-500"
|
||||
/>
|
||||
<span className="text-sm font-medium text-gray-700">Auto Record on Machine Start</span>
|
||||
</label>
|
||||
<p className="text-xs text-gray-500 mt-1">Start recording when MQTT machine state changes to ON</p>
|
||||
</div>
|
||||
|
||||
<div>
|
||||
<label className="flex items-center space-x-2">
|
||||
<input
|
||||
type="checkbox"
|
||||
checked={config.auto_start_recording_enabled ?? false}
|
||||
onChange={(e) => updateSetting('auto_start_recording_enabled', e.target.checked)}
|
||||
className="rounded border-gray-300 text-indigo-600 focus:ring-indigo-500"
|
||||
/>
|
||||
<span className="text-sm font-medium text-gray-700">Enhanced Auto Recording</span>
|
||||
</label>
|
||||
<p className="text-xs text-gray-500 mt-1">Advanced auto-recording with retry logic</p>
|
||||
</div>
|
||||
|
||||
<div>
|
||||
<label className="block text-sm font-medium text-gray-700 mb-2">
|
||||
Max Retries: {config.auto_recording_max_retries ?? 3}
|
||||
</label>
|
||||
<input
|
||||
type="range"
|
||||
min="1"
|
||||
max="10"
|
||||
value={config.auto_recording_max_retries ?? 3}
|
||||
onChange={(e) => updateSetting('auto_recording_max_retries', parseInt(e.target.value))}
|
||||
className="w-full"
|
||||
disabled={!config.auto_start_recording_enabled}
|
||||
/>
|
||||
<div className="flex justify-between text-xs text-gray-500 mt-1">
|
||||
<span>1</span>
|
||||
<span>10</span>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<div>
|
||||
<label className="block text-sm font-medium text-gray-700 mb-2">
|
||||
Retry Delay: {config.auto_recording_retry_delay_seconds ?? 5}s
|
||||
</label>
|
||||
<input
|
||||
type="range"
|
||||
min="1"
|
||||
max="30"
|
||||
value={config.auto_recording_retry_delay_seconds ?? 5}
|
||||
onChange={(e) => updateSetting('auto_recording_retry_delay_seconds', parseInt(e.target.value))}
|
||||
className="w-full"
|
||||
disabled={!config.auto_start_recording_enabled}
|
||||
/>
|
||||
<div className="flex justify-between text-xs text-gray-500 mt-1">
|
||||
<span>1s</span>
|
||||
<span>30s</span>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
{/* Information */}
|
||||
<div className="bg-blue-50 border border-blue-200 rounded-md p-4">
|
||||
<div className="flex">
|
||||
<div className="flex-shrink-0">
|
||||
<svg className="h-5 w-5 text-blue-400" viewBox="0 0 20 20" fill="currentColor">
|
||||
<path fillRule="evenodd" d="M18 10a8 8 0 11-16 0 8 8 0 0116 0zm-7-4a1 1 0 11-2 0 1 1 0 012 0zM9 9a1 1 0 000 2v3a1 1 0 001 1h1a1 1 0 100-2v-3a1 1 0 00-1-1H9z" clipRule="evenodd" />
|
||||
</svg>
|
||||
</div>
|
||||
<div className="ml-3">
|
||||
<h3 className="text-sm font-medium text-blue-800">Configuration Notes</h3>
|
||||
<div className="mt-2 text-sm text-blue-700">
|
||||
<ul className="list-disc list-inside space-y-1">
|
||||
<li>Real-time settings (exposure, gain, image quality) apply immediately</li>
|
||||
<li>Noise reduction settings require camera restart to take effect</li>
|
||||
<li>Use "Apply & Restart" to apply settings that require restart</li>
|
||||
<li>HDR mode may impact performance when enabled</li>
|
||||
<li>Auto-recording monitors MQTT machine state changes for automatic recording</li>
|
||||
<li>Enhanced auto-recording provides retry logic for failed recording attempts</li>
|
||||
</ul>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
)}
|
||||
</div>
|
||||
|
||||
{/* Footer */}
|
||||
{config && !loading && (
|
||||
<div className="px-6 py-4 border-t border-gray-200 bg-gray-50">
|
||||
<div className="flex items-center justify-between">
|
||||
<div className="flex items-center space-x-2">
|
||||
{hasChanges && (
|
||||
<span className="text-sm text-orange-600 font-medium">
|
||||
You have unsaved changes
|
||||
</span>
|
||||
)}
|
||||
</div>
|
||||
<div className="flex items-center space-x-3">
|
||||
{hasChanges && (
|
||||
<button
|
||||
onClick={resetChanges}
|
||||
className="px-4 py-2 text-sm font-medium text-gray-700 bg-white border border-gray-300 rounded-md hover:bg-gray-50"
|
||||
>
|
||||
Reset
|
||||
</button>
|
||||
)}
|
||||
<button
|
||||
onClick={saveConfig}
|
||||
disabled={!hasChanges || saving}
|
||||
className="px-4 py-2 text-sm font-medium text-white bg-indigo-600 border border-transparent rounded-md hover:bg-indigo-700 disabled:opacity-50 disabled:cursor-not-allowed"
|
||||
>
|
||||
{saving ? 'Saving...' : 'Save Changes'}
|
||||
</button>
|
||||
<button
|
||||
onClick={applyConfig}
|
||||
disabled={applying}
|
||||
className="px-4 py-2 text-sm font-medium text-white bg-red-600 border border-transparent rounded-md hover:bg-red-700 disabled:opacity-50 disabled:cursor-not-allowed"
|
||||
>
|
||||
{applying ? 'Applying...' : 'Apply & Restart'}
|
||||
</button>
|
||||
<button
|
||||
onClick={onClose}
|
||||
className="px-4 py-2 text-sm font-medium text-gray-700 bg-white border border-gray-300 rounded-md hover:bg-gray-50"
|
||||
>
|
||||
Close
|
||||
</button>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
)}
|
||||
</div>
|
||||
</div>
|
||||
)
|
||||
}
|
||||
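The sliders and checkboxes above all route through an `updateSetting` helper and a `hasChanges` flag defined earlier in CameraConfigModal.tsx (outside this excerpt). As a rough sketch of the draft-tracking state they assume — not the component's actual implementation — the logic could look like this:

```typescript
import { useState } from 'react'
import type { CameraConfig } from '../lib/visionApi'

// Sketch only: keep an editable copy of the camera config and remember whether it changed.
function useConfigDraft(initial: CameraConfig) {
  const [config, setConfig] = useState<CameraConfig>(initial)
  const [hasChanges, setHasChanges] = useState(false)

  // Called by the form controls, e.g. updateSetting('contrast', 120)
  function updateSetting<K extends keyof CameraConfig>(key: K, value: CameraConfig[K]) {
    setConfig(prev => ({ ...prev, [key]: value }))
    setHasChanges(true)
  }

  const resetChanges = () => {
    setConfig(initial)
    setHasChanges(false)
  }

  return { config, hasChanges, updateSetting, resetChanges }
}
```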
194
src/components/CameraPreviewModal.tsx
Normal file
@@ -0,0 +1,194 @@
|
||||
import { useState, useEffect, useRef } from 'react'
|
||||
import { visionApi } from '../lib/visionApi'
|
||||
|
||||
interface CameraPreviewModalProps {
|
||||
cameraName: string
|
||||
isOpen: boolean
|
||||
onClose: () => void
|
||||
onError?: (error: string) => void
|
||||
}
|
||||
|
||||
export function CameraPreviewModal({ cameraName, isOpen, onClose, onError }: CameraPreviewModalProps) {
|
||||
const [loading, setLoading] = useState(false)
|
||||
const [streaming, setStreaming] = useState(false)
|
||||
const [error, setError] = useState<string | null>(null)
|
||||
const imgRef = useRef<HTMLImageElement>(null)
|
||||
const streamUrlRef = useRef<string | null>(null)
|
||||
|
||||
// Start streaming when modal opens
|
||||
useEffect(() => {
|
||||
if (isOpen && cameraName) {
|
||||
startStreaming()
|
||||
}
|
||||
}, [isOpen, cameraName])
|
||||
|
||||
// Stop streaming when modal closes
|
||||
useEffect(() => {
|
||||
if (!isOpen && streaming) {
|
||||
stopStreaming()
|
||||
}
|
||||
}, [isOpen, streaming])
|
||||
|
||||
// Cleanup on unmount
|
||||
useEffect(() => {
|
||||
return () => {
|
||||
if (streaming) {
|
||||
stopStreaming()
|
||||
}
|
||||
}
|
||||
}, [streaming])
|
||||
|
||||
const startStreaming = async () => {
|
||||
try {
|
||||
setLoading(true)
|
||||
setError(null)
|
||||
|
||||
const result = await visionApi.startStream(cameraName)
|
||||
|
||||
if (result.success) {
|
||||
setStreaming(true)
|
||||
const streamUrl = visionApi.getStreamUrl(cameraName)
|
||||
streamUrlRef.current = streamUrl
|
||||
|
||||
// Add timestamp to prevent caching
|
||||
if (imgRef.current) {
|
||||
imgRef.current.src = `${streamUrl}?t=${Date.now()}`
|
||||
}
|
||||
} else {
|
||||
throw new Error(result.message)
|
||||
}
|
||||
} catch (err) {
|
||||
const errorMessage = err instanceof Error ? err.message : 'Failed to start stream'
|
||||
setError(errorMessage)
|
||||
onError?.(errorMessage)
|
||||
} finally {
|
||||
setLoading(false)
|
||||
}
|
||||
}
|
||||
|
||||
const stopStreaming = async () => {
|
||||
try {
|
||||
if (streaming) {
|
||||
await visionApi.stopStream(cameraName)
|
||||
setStreaming(false)
|
||||
streamUrlRef.current = null
|
||||
|
||||
// Clear the image source
|
||||
if (imgRef.current) {
|
||||
imgRef.current.src = ''
|
||||
}
|
||||
}
|
||||
} catch (err) {
|
||||
console.error('Error stopping stream:', err)
|
||||
// Don't show error to user for stop stream failures
|
||||
}
|
||||
}
|
||||
|
||||
const handleClose = () => {
|
||||
stopStreaming()
|
||||
onClose()
|
||||
}
|
||||
|
||||
const handleImageError = () => {
|
||||
setError('Failed to load camera stream')
|
||||
}
|
||||
|
||||
const handleImageLoad = () => {
|
||||
setError(null)
|
||||
}
|
||||
|
||||
if (!isOpen) return null
|
||||
|
||||
return (
|
||||
<div className="fixed inset-0 bg-gray-600 bg-opacity-50 overflow-y-auto h-full w-full z-50">
|
||||
<div className="relative top-20 mx-auto p-5 border w-11/12 max-w-4xl shadow-lg rounded-md bg-white">
|
||||
<div className="mt-3">
|
||||
{/* Header */}
|
||||
<div className="flex items-center justify-between mb-4">
|
||||
<h3 className="text-lg font-medium text-gray-900">
|
||||
Camera Preview: {cameraName}
|
||||
</h3>
|
||||
<button
|
||||
onClick={handleClose}
|
||||
className="text-gray-400 hover:text-gray-600 focus:outline-none"
|
||||
>
|
||||
<svg className="w-6 h-6" fill="none" stroke="currentColor" viewBox="0 0 24 24">
|
||||
<path strokeLinecap="round" strokeLinejoin="round" strokeWidth={2} d="M6 18L18 6M6 6l12 12" />
|
||||
</svg>
|
||||
</button>
|
||||
</div>
|
||||
|
||||
{/* Content */}
|
||||
<div className="mb-4">
|
||||
{loading && (
|
||||
<div className="flex items-center justify-center h-64 bg-gray-100 rounded-lg">
|
||||
<div className="text-center">
|
||||
<div className="animate-spin rounded-full h-12 w-12 border-b-2 border-indigo-600 mx-auto"></div>
|
||||
<p className="mt-4 text-gray-600">Starting camera stream...</p>
|
||||
</div>
|
||||
</div>
|
||||
)}
|
||||
|
||||
{error && (
|
||||
<div className="bg-red-50 border border-red-200 rounded-md p-4">
|
||||
<div className="flex">
|
||||
<div className="flex-shrink-0">
|
||||
<svg className="h-5 w-5 text-red-400" viewBox="0 0 20 20" fill="currentColor">
|
||||
<path fillRule="evenodd" d="M10 18a8 8 0 100-16 8 8 0 000 16zM8.707 7.293a1 1 0 00-1.414 1.414L8.586 10l-1.293 1.293a1 1 0 101.414 1.414L10 11.414l1.293 1.293a1 1 0 001.414-1.414L11.414 10l1.293-1.293a1 1 0 00-1.414-1.414L10 8.586 8.707 7.293z" clipRule="evenodd" />
|
||||
</svg>
|
||||
</div>
|
||||
<div className="ml-3">
|
||||
<h3 className="text-sm font-medium text-red-800">Stream Error</h3>
|
||||
<div className="mt-2 text-sm text-red-700">
|
||||
<p>{error}</p>
|
||||
</div>
|
||||
<div className="mt-4">
|
||||
<button
|
||||
onClick={startStreaming}
|
||||
className="bg-red-600 text-white px-4 py-2 rounded-md hover:bg-red-700 focus:outline-none focus:ring-2 focus:ring-red-500 focus:ring-offset-2"
|
||||
>
|
||||
Retry
|
||||
</button>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
)}
|
||||
|
||||
{streaming && !loading && !error && (
|
||||
<div className="bg-black rounded-lg overflow-hidden">
|
||||
<img
|
||||
ref={imgRef}
|
||||
alt={`Live stream from ${cameraName}`}
|
||||
className="w-full h-auto max-h-96 object-contain"
|
||||
onError={handleImageError}
|
||||
onLoad={handleImageLoad}
|
||||
/>
|
||||
</div>
|
||||
)}
|
||||
</div>
|
||||
|
||||
{/* Footer */}
|
||||
<div className="flex items-center justify-between">
|
||||
<div className="flex items-center space-x-2">
|
||||
{streaming && (
|
||||
<div className="flex items-center text-green-600">
|
||||
<div className="w-2 h-2 bg-green-500 rounded-full mr-2 animate-pulse"></div>
|
||||
<span className="text-sm font-medium">Live Stream Active</span>
|
||||
</div>
|
||||
)}
|
||||
</div>
|
||||
<div className="flex space-x-3">
|
||||
<button
|
||||
onClick={handleClose}
|
||||
className="px-4 py-2 text-sm font-medium text-gray-700 bg-gray-100 border border-gray-300 rounded-md hover:bg-gray-200 focus:outline-none focus:ring-2 focus:ring-gray-500 focus:ring-offset-2"
|
||||
>
|
||||
Close
|
||||
</button>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
)
|
||||
}
|
||||
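For reference, a parent component could open the preview like this; the wrapper name and error handling below are illustrative only, not part of the commit:

```typescript
import { useState } from 'react'
import { CameraPreviewModal } from './components/CameraPreviewModal'

export function PreviewButton({ cameraName }: { cameraName: string }) {
  const [open, setOpen] = useState(false)

  return (
    <>
      <button onClick={() => setOpen(true)}>Preview {cameraName}</button>
      <CameraPreviewModal
        cameraName={cameraName}
        isOpen={open}
        onClose={() => setOpen(false)}
        onError={(message) => console.error('Preview error:', message)}
      />
    </>
  )
}
```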
@@ -13,6 +13,8 @@ import {
|
||||
formatDuration,
|
||||
formatUptime
|
||||
} from '../lib/visionApi'
|
||||
import { useAuth } from '../hooks/useAuth'
|
||||
import { CameraConfigModal } from './CameraConfigModal'
|
||||
|
||||
// Memoized components to prevent unnecessary re-renders
|
||||
const SystemOverview = memo(({ systemStatus }: { systemStatus: SystemStatus }) => (
|
||||
@@ -160,130 +162,207 @@ const StorageOverview = memo(({ storageStats }: { storageStats: StorageStats })
|
||||
</div>
|
||||
))
|
||||
|
||||
const CamerasStatus = memo(({ systemStatus }: { systemStatus: SystemStatus }) => (
|
||||
<div className="bg-white shadow rounded-lg">
|
||||
<div className="px-4 py-5 sm:px-6">
|
||||
<h3 className="text-lg leading-6 font-medium text-gray-900">Cameras</h3>
|
||||
<p className="mt-1 max-w-2xl text-sm text-gray-500">
|
||||
Current status of all cameras in the system
|
||||
</p>
|
||||
</div>
|
||||
<div className="border-t border-gray-200">
|
||||
<div className="grid grid-cols-1 md:grid-cols-2 gap-4 p-6">
|
||||
{Object.entries(systemStatus.cameras).map(([cameraName, camera]) => {
|
||||
const friendlyName = camera.device_info?.friendly_name
|
||||
const hasDeviceInfo = !!camera.device_info
|
||||
const hasSerial = !!camera.device_info?.serial_number
|
||||
const CamerasStatus = memo(({
|
||||
systemStatus,
|
||||
onConfigureCamera,
|
||||
onStartRecording,
|
||||
onStopRecording,
|
||||
onPreviewCamera
|
||||
}: {
|
||||
systemStatus: SystemStatus,
|
||||
onConfigureCamera: (cameraName: string) => void,
|
||||
onStartRecording: (cameraName: string) => Promise<void>,
|
||||
onStopRecording: (cameraName: string) => Promise<void>,
|
||||
onPreviewCamera: (cameraName: string) => void
|
||||
}) => {
|
||||
const { isAdmin } = useAuth()
|
||||
|
||||
// Determine if camera is connected based on status
|
||||
const isConnected = camera.status === 'available' || camera.status === 'connected'
|
||||
const hasError = camera.status === 'error'
|
||||
const statusText = camera.status || 'unknown'
|
||||
return (
|
||||
<div className="bg-white shadow rounded-lg">
|
||||
<div className="px-4 py-5 sm:px-6">
|
||||
<h3 className="text-lg leading-6 font-medium text-gray-900">Cameras</h3>
|
||||
<p className="mt-1 max-w-2xl text-sm text-gray-500">
|
||||
Current status of all cameras in the system
|
||||
</p>
|
||||
</div>
|
||||
<div className="border-t border-gray-200">
|
||||
<div className="grid grid-cols-1 md:grid-cols-2 gap-4 p-6">
|
||||
{Object.entries(systemStatus.cameras).map(([cameraName, camera]) => {
|
||||
const friendlyName = camera.device_info?.friendly_name
|
||||
const hasDeviceInfo = !!camera.device_info
|
||||
const hasSerial = !!camera.device_info?.serial_number
|
||||
|
||||
return (
|
||||
<div key={cameraName} className="border border-gray-200 rounded-lg p-4">
|
||||
<div className="flex items-center justify-between mb-3">
|
||||
<h4 className="text-lg font-medium text-gray-900">
|
||||
{friendlyName || cameraName}
|
||||
{friendlyName && (
|
||||
<span className="text-gray-500 text-sm font-normal ml-2">({cameraName})</span>
|
||||
)}
|
||||
</h4>
|
||||
<div className={`inline-flex items-center px-2.5 py-0.5 rounded-full text-xs font-medium ${isConnected ? 'bg-green-100 text-green-800' :
|
||||
hasError ? 'bg-yellow-100 text-yellow-800' :
|
||||
'bg-red-100 text-red-800'
|
||||
}`}>
|
||||
{isConnected ? 'Connected' : hasError ? 'Error' : 'Disconnected'}
|
||||
</div>
|
||||
</div>
|
||||
// Determine if camera is connected based on status
|
||||
const isConnected = camera.status === 'available' || camera.status === 'connected'
|
||||
const hasError = camera.status === 'error'
|
||||
const statusText = camera.status || 'unknown'
|
||||
|
||||
<div className="space-y-2 text-sm">
|
||||
<div className="flex justify-between">
|
||||
<span className="text-gray-500">Status:</span>
|
||||
<span className={`font-medium ${isConnected ? 'text-green-600' :
|
||||
hasError ? 'text-yellow-600' :
|
||||
'text-red-600'
|
||||
return (
|
||||
<div key={cameraName} className="border border-gray-200 rounded-lg p-4">
|
||||
<div className="flex items-center justify-between mb-3">
|
||||
<h4 className="text-lg font-medium text-gray-900">
|
||||
{friendlyName || cameraName}
|
||||
{friendlyName && (
|
||||
<span className="text-gray-500 text-sm font-normal ml-2">({cameraName})</span>
|
||||
)}
|
||||
</h4>
|
||||
<div className={`inline-flex items-center px-2.5 py-0.5 rounded-full text-xs font-medium ${isConnected ? 'bg-green-100 text-green-800' :
|
||||
hasError ? 'bg-yellow-100 text-yellow-800' :
|
||||
'bg-red-100 text-red-800'
|
||||
}`}>
|
||||
{statusText.charAt(0).toUpperCase() + statusText.slice(1)}
|
||||
</span>
|
||||
{isConnected ? 'Connected' : hasError ? 'Error' : 'Disconnected'}
|
||||
</div>
|
||||
</div>
|
||||
|
||||
{camera.is_recording && (
|
||||
<div className="space-y-2 text-sm">
|
||||
<div className="flex justify-between">
|
||||
<span className="text-gray-500">Recording:</span>
|
||||
<span className="text-red-600 font-medium flex items-center">
|
||||
<div className="w-2 h-2 bg-red-500 rounded-full mr-2 animate-pulse"></div>
|
||||
Active
|
||||
<span className="text-gray-500">Status:</span>
|
||||
<span className={`font-medium ${isConnected ? 'text-green-600' :
|
||||
hasError ? 'text-yellow-600' :
|
||||
'text-red-600'
|
||||
}`}>
|
||||
{statusText.charAt(0).toUpperCase() + statusText.slice(1)}
|
||||
</span>
|
||||
</div>
|
||||
)}
|
||||
|
||||
{hasDeviceInfo && (
|
||||
<>
|
||||
{camera.device_info.model && (
|
||||
<div className="flex justify-between">
|
||||
<span className="text-gray-500">Model:</span>
|
||||
<span className="text-gray-900">{camera.device_info.model}</span>
|
||||
</div>
|
||||
)}
|
||||
{hasSerial && (
|
||||
<div className="flex justify-between">
|
||||
<span className="text-gray-500">Serial:</span>
|
||||
<span className="text-gray-900 font-mono text-xs">{camera.device_info.serial_number}</span>
|
||||
</div>
|
||||
)}
|
||||
{camera.device_info.firmware_version && (
|
||||
<div className="flex justify-between">
|
||||
<span className="text-gray-500">Firmware:</span>
|
||||
<span className="text-gray-900 font-mono text-xs">{camera.device_info.firmware_version}</span>
|
||||
</div>
|
||||
)}
|
||||
</>
|
||||
)}
|
||||
|
||||
{camera.last_frame_time && (
|
||||
<div className="flex justify-between">
|
||||
<span className="text-gray-500">Last Frame:</span>
|
||||
<span className="text-gray-900">{new Date(camera.last_frame_time).toLocaleTimeString()}</span>
|
||||
</div>
|
||||
)}
|
||||
|
||||
{camera.frame_rate && (
|
||||
<div className="flex justify-between">
|
||||
<span className="text-gray-500">Frame Rate:</span>
|
||||
<span className="text-gray-900">{camera.frame_rate.toFixed(1)} fps</span>
|
||||
</div>
|
||||
)}
|
||||
|
||||
{camera.last_checked && (
|
||||
<div className="flex justify-between">
|
||||
<span className="text-gray-500">Last Checked:</span>
|
||||
<span className="text-gray-900">{new Date(camera.last_checked).toLocaleTimeString()}</span>
|
||||
</div>
|
||||
)}
|
||||
|
||||
{camera.current_recording_file && (
|
||||
<div className="flex justify-between">
|
||||
<span className="text-gray-500">Recording File:</span>
|
||||
<span className="text-gray-900 truncate ml-2">{camera.current_recording_file}</span>
|
||||
</div>
|
||||
)}
|
||||
|
||||
{camera.last_error && (
|
||||
<div className="mt-2 p-2 bg-red-50 border border-red-200 rounded">
|
||||
<div className="text-red-800 text-xs">
|
||||
<strong>Error:</strong> {camera.last_error}
|
||||
{camera.is_recording && (
|
||||
<div className="flex justify-between">
|
||||
<span className="text-gray-500">Recording:</span>
|
||||
<span className="text-red-600 font-medium flex items-center">
|
||||
<div className="w-2 h-2 bg-red-500 rounded-full mr-2 animate-pulse"></div>
|
||||
Active
|
||||
</span>
|
||||
</div>
|
||||
)}
|
||||
|
||||
{hasDeviceInfo && (
|
||||
<>
|
||||
{camera.device_info.model && (
|
||||
<div className="flex justify-between">
|
||||
<span className="text-gray-500">Model:</span>
|
||||
<span className="text-gray-900">{camera.device_info.model}</span>
|
||||
</div>
|
||||
)}
|
||||
{hasSerial && (
|
||||
<div className="flex justify-between">
|
||||
<span className="text-gray-500">Serial:</span>
|
||||
<span className="text-gray-900 font-mono text-xs">{camera.device_info.serial_number}</span>
|
||||
</div>
|
||||
)}
|
||||
{camera.device_info.firmware_version && (
|
||||
<div className="flex justify-between">
|
||||
<span className="text-gray-500">Firmware:</span>
|
||||
<span className="text-gray-900 font-mono text-xs">{camera.device_info.firmware_version}</span>
|
||||
</div>
|
||||
)}
|
||||
</>
|
||||
)}
|
||||
|
||||
{camera.last_frame_time && (
|
||||
<div className="flex justify-between">
|
||||
<span className="text-gray-500">Last Frame:</span>
|
||||
<span className="text-gray-900">{new Date(camera.last_frame_time).toLocaleTimeString()}</span>
|
||||
</div>
|
||||
)}
|
||||
|
||||
{camera.frame_rate && (
|
||||
<div className="flex justify-between">
|
||||
<span className="text-gray-500">Frame Rate:</span>
|
||||
<span className="text-gray-900">{camera.frame_rate.toFixed(1)} fps</span>
|
||||
</div>
|
||||
)}
|
||||
|
||||
{camera.last_checked && (
|
||||
<div className="flex justify-between">
|
||||
<span className="text-gray-500">Last Checked:</span>
|
||||
<span className="text-gray-900">{new Date(camera.last_checked).toLocaleTimeString()}</span>
|
||||
</div>
|
||||
)}
|
||||
|
||||
{camera.current_recording_file && (
|
||||
<div className="flex justify-between">
|
||||
<span className="text-gray-500">Recording File:</span>
|
||||
<span className="text-gray-900 truncate ml-2">{camera.current_recording_file}</span>
|
||||
</div>
|
||||
)}
|
||||
|
||||
{camera.last_error && (
|
||||
<div className="mt-2 p-2 bg-red-50 border border-red-200 rounded">
|
||||
<div className="text-red-800 text-xs">
|
||||
<strong>Error:</strong> {camera.last_error}
|
||||
</div>
|
||||
</div>
|
||||
)}
|
||||
|
||||
{/* Camera Control Buttons */}
|
||||
<div className="mt-3 pt-3 border-t border-gray-200 space-y-2">
|
||||
{/* Recording Controls */}
|
||||
<div className="flex space-x-2">
|
||||
{!camera.is_recording ? (
|
||||
<button
|
||||
onClick={() => onStartRecording(cameraName)}
|
||||
disabled={!isConnected}
|
||||
className={`flex-1 px-3 py-2 text-sm font-medium rounded-md focus:outline-none focus:ring-2 focus:ring-offset-2 ${isConnected
|
||||
? 'text-green-600 bg-green-50 border border-green-200 hover:bg-green-100 focus:ring-green-500'
|
||||
: 'text-gray-400 bg-gray-50 border border-gray-200 cursor-not-allowed'
|
||||
}`}
|
||||
>
|
||||
<svg className="w-4 h-4 inline-block mr-2" fill="none" stroke="currentColor" viewBox="0 0 24 24">
|
||||
<path strokeLinecap="round" strokeLinejoin="round" strokeWidth={2} d="M14.828 14.828a4 4 0 01-5.656 0M9 10h1m4 0h1m-6 4h8m-9-4h.01M21 12a9 9 0 11-18 0 9 9 0 0118 0z" />
|
||||
</svg>
|
||||
Start Recording
|
||||
</button>
|
||||
) : (
|
||||
<button
|
||||
onClick={() => onStopRecording(cameraName)}
|
||||
className="flex-1 px-3 py-2 text-sm font-medium text-red-600 bg-red-50 border border-red-200 rounded-md hover:bg-red-100 focus:outline-none focus:ring-2 focus:ring-red-500 focus:ring-offset-2"
|
||||
>
|
||||
<svg className="w-4 h-4 inline-block mr-2" fill="none" stroke="currentColor" viewBox="0 0 24 24">
|
||||
<path strokeLinecap="round" strokeLinejoin="round" strokeWidth={2} d="M21 12a9 9 0 11-18 0 9 9 0 0118 0z" />
|
||||
<path strokeLinecap="round" strokeLinejoin="round" strokeWidth={2} d="M9 9h6v6H9z" />
|
||||
</svg>
|
||||
Stop Recording
|
||||
</button>
|
||||
)}
|
||||
<button
|
||||
onClick={() => onPreviewCamera(cameraName)}
|
||||
disabled={!isConnected}
|
||||
className={`px-3 py-2 text-sm font-medium rounded-md focus:outline-none focus:ring-2 focus:ring-offset-2 ${isConnected
|
||||
? 'text-blue-600 bg-blue-50 border border-blue-200 hover:bg-blue-100 focus:ring-blue-500'
|
||||
: 'text-gray-400 bg-gray-50 border border-gray-200 cursor-not-allowed'
|
||||
}`}
|
||||
>
|
||||
<svg className="w-4 h-4 inline-block mr-2" fill="none" stroke="currentColor" viewBox="0 0 24 24">
|
||||
<path strokeLinecap="round" strokeLinejoin="round" strokeWidth={2} d="M15 12a3 3 0 11-6 0 3 3 0 016 0z" />
|
||||
<path strokeLinecap="round" strokeLinejoin="round" strokeWidth={2} d="M2.458 12C3.732 7.943 7.523 5 12 5c4.478 0 8.268 2.943 9.542 7-1.274 4.057-5.064 7-9.542 7-4.477 0-8.268-2.943-9.542-7z" />
|
||||
</svg>
|
||||
Preview
|
||||
</button>
|
||||
</div>
|
||||
|
||||
{/* Admin Configuration Button */}
|
||||
{isAdmin() && (
|
||||
<button
|
||||
onClick={() => onConfigureCamera(cameraName)}
|
||||
className="w-full px-3 py-2 text-sm font-medium text-indigo-600 bg-indigo-50 border border-indigo-200 rounded-md hover:bg-indigo-100 focus:outline-none focus:ring-2 focus:ring-indigo-500 focus:ring-offset-2"
|
||||
>
|
||||
<svg className="w-4 h-4 inline-block mr-2" fill="none" stroke="currentColor" viewBox="0 0 24 24">
|
||||
<path strokeLinecap="round" strokeLinejoin="round" strokeWidth={2} d="M10.325 4.317c.426-1.756 2.924-1.756 3.35 0a1.724 1.724 0 002.573 1.066c1.543-.94 3.31.826 2.37 2.37a1.724 1.724 0 001.065 2.572c1.756.426 1.756 2.924 0 3.35a1.724 1.724 0 00-1.066 2.573c.94 1.543-.826 3.31-2.37 2.37a1.724 1.724 0 00-2.572 1.065c-.426 1.756-2.924 1.756-3.35 0a1.724 1.724 0 00-2.573-1.066c-1.543.94-3.31-.826-2.37-2.37a1.724 1.724 0 00-1.065-2.572c-1.756-.426-1.756-2.924 0-3.35a1.724 1.724 0 001.066-2.573c-.94-1.543.826-3.31 2.37-2.37.996.608 2.296.07 2.572-1.065z" />
|
||||
<path strokeLinecap="round" strokeLinejoin="round" strokeWidth={2} d="M15 12a3 3 0 11-6 0 3 3 0 016 0z" />
|
||||
</svg>
|
||||
Configure Camera
|
||||
</button>
|
||||
)}
|
||||
</div>
|
||||
)}
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
)
|
||||
})}
|
||||
)
|
||||
})}
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
))
|
||||
)
|
||||
})
|
||||
|
||||
const RecentRecordings = memo(({ recordings, systemStatus }: { recordings: Record<string, RecordingInfo>, systemStatus: SystemStatus | null }) => (
|
||||
<div className="bg-white shadow rounded-lg">
|
||||
@@ -374,6 +453,11 @@ export function VisionSystem() {
|
||||
const [mqttStatus, setMqttStatus] = useState<MqttStatus | null>(null)
|
||||
const [mqttEvents, setMqttEvents] = useState<MqttEvent[]>([])
|
||||
|
||||
// Camera configuration modal state
|
||||
const [configModalOpen, setConfigModalOpen] = useState(false)
|
||||
const [selectedCamera, setSelectedCamera] = useState<string | null>(null)
|
||||
const [notification, setNotification] = useState<{ type: 'success' | 'error', message: string } | null>(null)
|
||||
|
||||
const intervalRef = useRef<NodeJS.Timeout | null>(null)
|
||||
|
||||
const clearAutoRefresh = useCallback(() => {
|
||||
@@ -486,6 +570,22 @@ export function VisionSystem() {
|
||||
}
|
||||
}, [systemStatus])
|
||||
|
||||
// Camera configuration handlers
|
||||
const handleConfigureCamera = (cameraName: string) => {
|
||||
setSelectedCamera(cameraName)
|
||||
setConfigModalOpen(true)
|
||||
}
|
||||
|
||||
const handleConfigSuccess = (message: string) => {
|
||||
setNotification({ type: 'success', message })
|
||||
setTimeout(() => setNotification(null), 5000)
|
||||
}
|
||||
|
||||
const handleConfigError = (message: string) => {
|
||||
setNotification({ type: 'error', message })
|
||||
setTimeout(() => setNotification(null), 5000)
|
||||
}
|
||||
|
||||
const getStatusColor = (status: string, isRecording: boolean = false) => {
|
||||
// If camera is recording, always show red regardless of status
|
||||
if (isRecording) {
|
||||
@@ -641,7 +741,7 @@ export function VisionSystem() {
|
||||
|
||||
|
||||
{/* Cameras Status */}
|
||||
{systemStatus && <CamerasStatus systemStatus={systemStatus} />}
|
||||
{systemStatus && <CamerasStatus systemStatus={systemStatus} onConfigureCamera={handleConfigureCamera} />}
|
||||
|
||||
{/* Machines Status */}
|
||||
{systemStatus && Object.keys(systemStatus.machines).length > 0 && (
|
||||
@@ -697,6 +797,58 @@ export function VisionSystem() {
|
||||
|
||||
{/* Recent Recordings */}
|
||||
{Object.keys(recordings).length > 0 && <RecentRecordings recordings={recordings} systemStatus={systemStatus} />}
|
||||
|
||||
{/* Camera Configuration Modal */}
|
||||
{selectedCamera && (
|
||||
<CameraConfigModal
|
||||
cameraName={selectedCamera}
|
||||
isOpen={configModalOpen}
|
||||
onClose={() => {
|
||||
setConfigModalOpen(false)
|
||||
setSelectedCamera(null)
|
||||
}}
|
||||
onSuccess={handleConfigSuccess}
|
||||
onError={handleConfigError}
|
||||
/>
|
||||
)}
|
||||
|
||||
{/* Notification */}
|
||||
{notification && (
|
||||
<div className={`fixed top-4 right-4 z-50 p-4 rounded-md shadow-lg ${notification.type === 'success'
|
||||
? 'bg-green-50 border border-green-200 text-green-800'
|
||||
: 'bg-red-50 border border-red-200 text-red-800'
|
||||
}`}>
|
||||
<div className="flex items-center">
|
||||
<div className="flex-shrink-0">
|
||||
{notification.type === 'success' ? (
|
||||
<svg className="h-5 w-5 text-green-400" viewBox="0 0 20 20" fill="currentColor">
|
||||
<path fillRule="evenodd" d="M10 18a8 8 0 100-16 8 8 0 000 16zm3.707-9.293a1 1 0 00-1.414-1.414L9 10.586 7.707 9.293a1 1 0 00-1.414 1.414l2 2a1 1 0 001.414 0l4-4z" clipRule="evenodd" />
|
||||
</svg>
|
||||
) : (
|
||||
<svg className="h-5 w-5 text-red-400" viewBox="0 0 20 20" fill="currentColor">
|
||||
<path fillRule="evenodd" d="M10 18a8 8 0 100-16 8 8 0 000 16zM8.707 7.293a1 1 0 00-1.414 1.414L8.586 10l-1.293 1.293a1 1 0 101.414 1.414L10 11.414l1.293 1.293a1 1 0 001.414-1.414L11.414 10l1.293-1.293a1 1 0 00-1.414-1.414L10 8.586 8.707 7.293z" clipRule="evenodd" />
|
||||
</svg>
|
||||
)}
|
||||
</div>
|
||||
<div className="ml-3">
|
||||
<p className="text-sm font-medium">{notification.message}</p>
|
||||
</div>
|
||||
<div className="ml-auto pl-3">
|
||||
<button
|
||||
onClick={() => setNotification(null)}
|
||||
className={`inline-flex rounded-md p-1.5 focus:outline-none focus:ring-2 focus:ring-offset-2 ${notification.type === 'success'
|
||||
? 'text-green-500 hover:bg-green-100 focus:ring-green-600'
|
||||
: 'text-red-500 hover:bg-red-100 focus:ring-red-600'
|
||||
}`}
|
||||
>
|
||||
<svg className="h-4 w-4" viewBox="0 0 20 20" fill="currentColor">
|
||||
<path fillRule="evenodd" d="M4.293 4.293a1 1 0 011.414 0L10 8.586l4.293-4.293a1 1 0 111.414 1.414L11.414 10l4.293 4.293a1 1 0 01-1.414 1.414L10 11.414l-4.293 4.293a1 1 0 01-1.414-1.414L8.586 10 4.293 5.707a1 1 0 010-1.414z" clipRule="evenodd" />
|
||||
</svg>
|
||||
</button>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
)}
|
||||
</div>
|
||||
)
|
||||
}
|
||||
|
||||
48
src/hooks/useAuth.ts
Normal file
@@ -0,0 +1,48 @@
|
||||
import { useState, useEffect } from 'react'
import { userManagement, type User } from '../lib/supabase'

export function useAuth() {
  const [user, setUser] = useState<User | null>(null)
  const [loading, setLoading] = useState(true)
  const [error, setError] = useState<string | null>(null)

  useEffect(() => {
    loadUser()
  }, [])

  const loadUser = async () => {
    try {
      setLoading(true)
      setError(null)
      const currentUser = await userManagement.getCurrentUser()
      setUser(currentUser)
    } catch (err) {
      setError(err instanceof Error ? err.message : 'Failed to load user')
      setUser(null)
    } finally {
      setLoading(false)
    }
  }

  const isAdmin = () => {
    return user?.roles.includes('admin') ?? false
  }

  const hasRole = (role: string) => {
    return user?.roles.includes(role as any) ?? false
  }

  const hasAnyRole = (roles: string[]) => {
    return roles.some(role => user?.roles.includes(role as any) ?? false)
  }

  return {
    user,
    loading,
    error,
    isAdmin,
    hasRole,
    hasAnyRole,
    refreshUser: loadUser
  }
}
|
||||
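A minimal consumer of the hook, assuming a component only needs the `isAdmin` check (the wrapper below is hypothetical):

```typescript
import type { ReactNode } from 'react'
import { useAuth } from './hooks/useAuth'

// Render children only once the user has loaded and has the admin role.
export function AdminOnly({ children }: { children: ReactNode }) {
  const { loading, isAdmin } = useAuth()
  if (loading || !isAdmin()) return null
  return <>{children}</>
}
```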
81
src/hooks/useAutoRecording.ts
Normal file
@@ -0,0 +1,81 @@
|
||||
/**
 * React hook for managing auto-recording functionality
 */

import { useState, useEffect, useCallback } from 'react'
import { autoRecordingManager, type AutoRecordingState } from '../lib/autoRecordingManager'

export interface UseAutoRecordingResult {
  isRunning: boolean
  states: AutoRecordingState[]
  error: string | null
  start: () => Promise<void>
  stop: () => void
  refresh: () => Promise<void>
}

export function useAutoRecording(): UseAutoRecordingResult {
  const [isRunning, setIsRunning] = useState(false)
  const [states, setStates] = useState<AutoRecordingState[]>([])
  const [error, setError] = useState<string | null>(null)

  // Update states periodically
  useEffect(() => {
    if (!isRunning) {
      return
    }

    const interval = setInterval(() => {
      setStates(autoRecordingManager.getStates())
    }, 1000)

    return () => clearInterval(interval)
  }, [isRunning])

  const start = useCallback(async () => {
    try {
      setError(null)
      await autoRecordingManager.start()
      setIsRunning(true)
      setStates(autoRecordingManager.getStates())
    } catch (err) {
      const errorMessage = err instanceof Error ? err.message : 'Failed to start auto-recording'
      setError(errorMessage)
      console.error('Failed to start auto-recording:', err)
    }
  }, [])

  const stop = useCallback(() => {
    try {
      autoRecordingManager.stop()
      setIsRunning(false)
      setStates([])
      setError(null)
    } catch (err) {
      const errorMessage = err instanceof Error ? err.message : 'Failed to stop auto-recording'
      setError(errorMessage)
      console.error('Failed to stop auto-recording:', err)
    }
  }, [])

  const refresh = useCallback(async () => {
    try {
      setError(null)
      await autoRecordingManager.refreshConfigurations()
      setStates(autoRecordingManager.getStates())
    } catch (err) {
      const errorMessage = err instanceof Error ? err.message : 'Failed to refresh configurations'
      setError(errorMessage)
      console.error('Failed to refresh auto-recording configurations:', err)
    }
  }, [])

  return {
    isRunning,
    states,
    error,
    start,
    stop,
    refresh
  }
}
|
||||
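A hypothetical status panel built on this hook might look like the following; the markup and class names are placeholders, not part of the commit:

```typescript
import { useAutoRecording } from './hooks/useAutoRecording'

export function AutoRecordingPanel() {
  const { isRunning, states, error, start, stop } = useAutoRecording()

  return (
    <div>
      <button onClick={isRunning ? stop : start}>
        {isRunning ? 'Stop auto-recording' : 'Start auto-recording'}
      </button>
      {error && <p className="text-red-600">{error}</p>}
      <ul>
        {states.map(s => (
          <li key={s.cameraName}>
            {s.cameraName}: machine {s.machineState}, {s.isRecording ? 'recording' : 'idle'}
          </li>
        ))}
      </ul>
    </div>
  )
}
```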
286
src/lib/autoRecordingManager.ts
Normal file
@@ -0,0 +1,286 @@
|
||||
/**
|
||||
* Auto-Recording Manager
|
||||
*
|
||||
* This module handles automatic recording start/stop based on MQTT machine state changes.
|
||||
* It monitors MQTT events and triggers camera recording when machines turn on/off.
|
||||
*/
|
||||
|
||||
import { visionApi, type MqttEvent, type CameraConfig } from './visionApi'
|
||||
|
||||
export interface AutoRecordingState {
|
||||
cameraName: string
|
||||
machineState: 'on' | 'off'
|
||||
isRecording: boolean
|
||||
autoRecordEnabled: boolean
|
||||
lastStateChange: Date
|
||||
}
|
||||
|
||||
export class AutoRecordingManager {
|
||||
private cameras: Map<string, AutoRecordingState> = new Map()
|
||||
private mqttPollingInterval: NodeJS.Timeout | null = null
|
||||
private lastProcessedEventNumber = 0
|
||||
private isRunning = false
|
||||
|
||||
constructor(private pollingIntervalMs: number = 2000) {}
|
||||
|
||||
/**
|
||||
* Start the auto-recording manager
|
||||
*/
|
||||
async start(): Promise<void> {
|
||||
if (this.isRunning) {
|
||||
console.warn('Auto-recording manager is already running')
|
||||
return
|
||||
}
|
||||
|
||||
console.log('Starting auto-recording manager...')
|
||||
this.isRunning = true
|
||||
|
||||
// Initialize camera configurations
|
||||
await this.initializeCameras()
|
||||
|
||||
// Start polling for MQTT events
|
||||
this.startMqttPolling()
|
||||
}
|
||||
|
||||
/**
|
||||
* Stop the auto-recording manager
|
||||
*/
|
||||
stop(): void {
|
||||
if (!this.isRunning) {
|
||||
return
|
||||
}
|
||||
|
||||
console.log('Stopping auto-recording manager...')
|
||||
this.isRunning = false
|
||||
|
||||
if (this.mqttPollingInterval) {
|
||||
clearInterval(this.mqttPollingInterval)
|
||||
this.mqttPollingInterval = null
|
||||
}
|
||||
|
||||
this.cameras.clear()
|
||||
}
|
||||
|
||||
/**
|
||||
* Initialize camera configurations and states
|
||||
*/
|
||||
private async initializeCameras(): Promise<void> {
|
||||
try {
|
||||
const cameras = await visionApi.getCameras()
|
||||
|
||||
for (const [cameraName, cameraStatus] of Object.entries(cameras)) {
|
||||
try {
|
||||
const config = await visionApi.getCameraConfig(cameraName)
|
||||
|
||||
this.cameras.set(cameraName, {
|
||||
cameraName,
|
||||
machineState: 'off', // Default to off
|
||||
isRecording: cameraStatus.is_recording,
|
||||
autoRecordEnabled: config.auto_record_on_machine_start,
|
||||
lastStateChange: new Date()
|
||||
})
|
||||
|
||||
console.log(`Initialized camera ${cameraName}: auto-record=${config.auto_record_on_machine_start}, machine=${config.machine_topic}`)
|
||||
} catch (error) {
|
||||
console.error(`Failed to initialize camera ${cameraName}:`, error)
|
||||
}
|
||||
}
|
||||
} catch (error) {
|
||||
console.error('Failed to initialize cameras:', error)
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Start polling for MQTT events
|
||||
*/
|
||||
private startMqttPolling(): void {
|
||||
this.mqttPollingInterval = setInterval(async () => {
|
||||
if (!this.isRunning) {
|
||||
return
|
||||
}
|
||||
|
||||
try {
|
||||
await this.processMqttEvents()
|
||||
} catch (error) {
|
||||
console.error('Error processing MQTT events:', error)
|
||||
}
|
||||
}, this.pollingIntervalMs)
|
||||
}
|
||||
|
||||
/**
|
||||
* Process new MQTT events and trigger recording actions
|
||||
*/
|
||||
private async processMqttEvents(): Promise<void> {
|
||||
try {
|
||||
const mqttResponse = await visionApi.getMqttEvents(50) // Get recent events
|
||||
|
||||
// Filter for new events we haven't processed yet
|
||||
const newEvents = mqttResponse.events.filter(
|
||||
event => event.message_number > this.lastProcessedEventNumber
|
||||
)
|
||||
|
||||
if (newEvents.length === 0) {
|
||||
return
|
||||
}
|
||||
|
||||
// Update last processed event number
|
||||
this.lastProcessedEventNumber = Math.max(
|
||||
...newEvents.map(event => event.message_number)
|
||||
)
|
||||
|
||||
// Process each new event
|
||||
for (const event of newEvents) {
|
||||
await this.handleMqttEvent(event)
|
||||
}
|
||||
} catch (error) {
|
||||
console.error('Failed to fetch MQTT events:', error)
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Handle a single MQTT event and trigger recording if needed
|
||||
*/
|
||||
private async handleMqttEvent(event: MqttEvent): Promise<void> {
|
||||
const { machine_name, normalized_state } = event
|
||||
|
||||
// Find cameras that are configured for this machine
|
||||
const affectedCameras = await this.getCamerasForMachine(machine_name)
|
||||
|
||||
for (const cameraName of affectedCameras) {
|
||||
const cameraState = this.cameras.get(cameraName)
|
||||
|
||||
if (!cameraState || !cameraState.autoRecordEnabled) {
|
||||
continue
|
||||
}
|
||||
|
||||
const newMachineState = normalized_state as 'on' | 'off'
|
||||
|
||||
// Skip if state hasn't changed
|
||||
if (cameraState.machineState === newMachineState) {
|
||||
continue
|
||||
}
|
||||
|
||||
console.log(`Machine ${machine_name} changed from ${cameraState.machineState} to ${newMachineState} - Camera: ${cameraName}`)
|
||||
|
||||
// Update camera state
|
||||
cameraState.machineState = newMachineState
|
||||
cameraState.lastStateChange = new Date()
|
||||
|
||||
// Trigger recording action
|
||||
if (newMachineState === 'on' && !cameraState.isRecording) {
|
||||
await this.startAutoRecording(cameraName, machine_name)
|
||||
} else if (newMachineState === 'off' && cameraState.isRecording) {
|
||||
await this.stopAutoRecording(cameraName, machine_name)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Get cameras that are configured for a specific machine
|
||||
*/
|
||||
private async getCamerasForMachine(machineName: string): Promise<string[]> {
|
||||
const cameras: string[] = []
|
||||
|
||||
// Define the correct machine-to-camera mapping
|
||||
const machineToCamera: Record<string, string> = {
|
||||
'blower_separator': 'camera1', // camera1 is for blower separator
|
||||
'vibratory_conveyor': 'camera2' // camera2 is for conveyor
|
||||
}
|
||||
|
||||
const expectedCamera = machineToCamera[machineName]
|
||||
if (!expectedCamera) {
|
||||
console.warn(`No camera mapping found for machine: ${machineName}`)
|
||||
return cameras
|
||||
}
|
||||
|
||||
try {
|
||||
const allCameras = await visionApi.getCameras()
|
||||
|
||||
// Check if the expected camera exists and has auto-recording enabled
|
||||
if (allCameras[expectedCamera]) {
|
||||
try {
|
||||
const config = await visionApi.getCameraConfig(expectedCamera)
|
||||
|
||||
if (config.auto_record_on_machine_start) {
|
||||
cameras.push(expectedCamera)
|
||||
console.log(`Found camera ${expectedCamera} configured for machine ${machineName}`)
|
||||
} else {
|
||||
console.log(`Camera ${expectedCamera} exists but auto-recording is disabled`)
|
||||
}
|
||||
} catch (error) {
|
||||
console.error(`Failed to get config for camera ${expectedCamera}:`, error)
|
||||
}
|
||||
} else {
|
||||
console.warn(`Expected camera ${expectedCamera} not found for machine ${machineName}`)
|
||||
}
|
||||
} catch (error) {
|
||||
console.error('Failed to get cameras for machine:', error)
|
||||
}
|
||||
|
||||
return cameras
|
||||
}
|
||||
|
||||
/**
|
||||
* Start auto-recording for a camera
|
||||
*/
|
||||
private async startAutoRecording(cameraName: string, machineName: string): Promise<void> {
|
||||
try {
|
||||
const timestamp = new Date().toISOString().replace(/[:.]/g, '-')
|
||||
const filename = `auto_${machineName}_${timestamp}.avi`
|
||||
|
||||
const result = await visionApi.startRecording(cameraName, { filename })
|
||||
|
||||
if (result.success) {
|
||||
const cameraState = this.cameras.get(cameraName)
|
||||
if (cameraState) {
|
||||
cameraState.isRecording = true
|
||||
}
|
||||
|
||||
console.log(`✅ Auto-recording started for ${cameraName}: ${result.filename}`)
|
||||
} else {
|
||||
console.error(`❌ Failed to start auto-recording for ${cameraName}:`, result.message)
|
||||
}
|
||||
} catch (error) {
|
||||
console.error(`❌ Error starting auto-recording for ${cameraName}:`, error)
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Stop auto-recording for a camera
|
||||
*/
|
||||
private async stopAutoRecording(cameraName: string, machineName: string): Promise<void> {
|
||||
try {
|
||||
const result = await visionApi.stopRecording(cameraName)
|
||||
|
||||
if (result.success) {
|
||||
const cameraState = this.cameras.get(cameraName)
|
||||
if (cameraState) {
|
||||
cameraState.isRecording = false
|
||||
}
|
||||
|
||||
console.log(`⏹️ Auto-recording stopped for ${cameraName} (${result.duration_seconds}s)`)
|
||||
} else {
|
||||
console.error(`❌ Failed to stop auto-recording for ${cameraName}:`, result.message)
|
||||
}
|
||||
} catch (error) {
|
||||
console.error(`❌ Error stopping auto-recording for ${cameraName}:`, error)
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Get current auto-recording states for all cameras
|
||||
*/
|
||||
getStates(): AutoRecordingState[] {
|
||||
return Array.from(this.cameras.values())
|
||||
}
|
||||
|
||||
/**
|
||||
* Refresh camera configurations (call when configs are updated)
|
||||
*/
|
||||
async refreshConfigurations(): Promise<void> {
|
||||
await this.initializeCameras()
|
||||
}
|
||||
}
|
||||
|
||||
// Global instance
|
||||
export const autoRecordingManager = new AutoRecordingManager()
|
||||
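Since the manager tracks the last processed `message_number` and skips events it has already seen, it only needs to be started once; a sketch of app-level wiring (the hook name is an assumption) is:

```typescript
import { useEffect } from 'react'
import { autoRecordingManager } from './lib/autoRecordingManager'

// Sketch: start the shared manager when the app mounts, stop it on unmount.
export function useAutoRecordingBootstrap() {
  useEffect(() => {
    autoRecordingManager.start().catch(err =>
      console.error('Failed to start auto-recording manager:', err)
    )
    return () => autoRecordingManager.stop()
  }, [])
}
```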
@@ -40,6 +40,12 @@ export interface CameraStatus {
|
||||
recording_start_time?: string | null
|
||||
last_frame_time?: string
|
||||
frame_rate?: number
|
||||
// NEW AUTO-RECORDING FIELDS
|
||||
auto_recording_enabled: boolean
|
||||
auto_recording_active: boolean
|
||||
auto_recording_failure_count: number
|
||||
auto_recording_last_attempt?: string
|
||||
auto_recording_last_error?: string
|
||||
}
|
||||
|
||||
export interface RecordingInfo {
|
||||
@@ -96,6 +102,16 @@ export interface StopRecordingResponse {
|
||||
duration_seconds: number
|
||||
}
|
||||
|
||||
export interface StreamStartResponse {
|
||||
success: boolean
|
||||
message: string
|
||||
}
|
||||
|
||||
export interface StreamStopResponse {
|
||||
success: boolean
|
||||
message: string
|
||||
}
|
||||
|
||||
export interface CameraTestResponse {
|
||||
success: boolean
|
||||
message: string
|
||||
@@ -111,6 +127,83 @@ export interface CameraRecoveryResponse {
|
||||
timestamp: string
|
||||
}
|
||||
|
||||
// Auto-Recording Response Types
|
||||
export interface AutoRecordingConfigResponse {
|
||||
success: boolean
|
||||
message: string
|
||||
camera_name: string
|
||||
enabled: boolean
|
||||
}
|
||||
|
||||
export interface AutoRecordingStatusResponse {
|
||||
running: boolean
|
||||
auto_recording_enabled: boolean
|
||||
retry_queue: Record<string, any>
|
||||
enabled_cameras: string[]
|
||||
}
|
||||
|
||||
// Camera Configuration Types
|
||||
export interface CameraConfig {
|
||||
name: string
|
||||
machine_topic: string
|
||||
storage_path: string
|
||||
enabled: boolean
|
||||
auto_record_on_machine_start: boolean
|
||||
// NEW AUTO-RECORDING CONFIG FIELDS (optional for backward compatibility)
|
||||
auto_start_recording_enabled?: boolean
|
||||
auto_recording_max_retries?: number
|
||||
auto_recording_retry_delay_seconds?: number
|
||||
exposure_ms: number
|
||||
gain: number
|
||||
target_fps: number
|
||||
sharpness: number
|
||||
contrast: number
|
||||
saturation: number
|
||||
gamma: number
|
||||
noise_filter_enabled: boolean
|
||||
denoise_3d_enabled: boolean
|
||||
auto_white_balance: boolean
|
||||
color_temperature_preset: number
|
||||
anti_flicker_enabled: boolean
|
||||
light_frequency: number
|
||||
bit_depth: number
|
||||
hdr_enabled: boolean
|
||||
hdr_gain_mode: number
|
||||
}
|
||||
|
||||
export interface CameraConfigUpdate {
|
||||
auto_record_on_machine_start?: boolean
|
||||
auto_start_recording_enabled?: boolean
|
||||
auto_recording_max_retries?: number
|
||||
auto_recording_retry_delay_seconds?: number
|
||||
exposure_ms?: number
|
||||
gain?: number
|
||||
target_fps?: number
|
||||
sharpness?: number
|
||||
contrast?: number
|
||||
saturation?: number
|
||||
gamma?: number
|
||||
noise_filter_enabled?: boolean
|
||||
denoise_3d_enabled?: boolean
|
||||
auto_white_balance?: boolean
|
||||
color_temperature_preset?: number
|
||||
anti_flicker_enabled?: boolean
|
||||
light_frequency?: number
|
||||
hdr_enabled?: boolean
|
||||
hdr_gain_mode?: number
|
||||
}
|
||||
|
||||
export interface CameraConfigUpdateResponse {
|
||||
success: boolean
|
||||
message: string
|
||||
updated_settings: string[]
|
||||
}
|
||||
|
||||
export interface CameraConfigApplyResponse {
|
||||
success: boolean
|
||||
message: string
|
||||
}
|
||||
|
||||
export interface MqttMessage {
|
||||
timestamp: string
|
||||
topic: string
|
||||
@@ -239,6 +332,23 @@ class VisionApiClient {
|
||||
})
|
||||
}
|
||||
|
||||
// Streaming control
|
||||
async startStream(cameraName: string): Promise<StreamStartResponse> {
|
||||
return this.request(`/cameras/${cameraName}/start-stream`, {
|
||||
method: 'POST',
|
||||
})
|
||||
}
|
||||
|
||||
async stopStream(cameraName: string): Promise<StreamStopResponse> {
|
||||
return this.request(`/cameras/${cameraName}/stop-stream`, {
|
||||
method: 'POST',
|
||||
})
|
||||
}
|
||||
|
||||
getStreamUrl(cameraName: string): string {
|
||||
return `${this.baseUrl}/cameras/${cameraName}/stream`
|
||||
}
|
||||
|
||||
// Camera diagnostics
|
||||
async testCameraConnection(cameraName: string): Promise<CameraTestResponse> {
|
||||
return this.request(`/cameras/${cameraName}/test-connection`, {
|
||||
@@ -276,6 +386,84 @@ class VisionApiClient {
|
||||
})
|
||||
}
|
||||
|
||||
// Camera configuration
|
||||
async getCameraConfig(cameraName: string): Promise<CameraConfig> {
|
||||
try {
|
||||
const config = await this.request(`/cameras/${cameraName}/config`) as any
|
||||
|
||||
// Ensure auto-recording fields have default values if missing
|
||||
return {
|
||||
...config,
|
||||
auto_start_recording_enabled: config.auto_start_recording_enabled ?? false,
|
||||
auto_recording_max_retries: config.auto_recording_max_retries ?? 3,
|
||||
auto_recording_retry_delay_seconds: config.auto_recording_retry_delay_seconds ?? 5
|
||||
}
|
||||
} catch (error: any) {
|
||||
// If the error is related to missing auto-recording fields, try to handle it gracefully
|
||||
if (error.message?.includes('auto_start_recording_enabled') ||
|
||||
error.message?.includes('auto_recording_max_retries') ||
|
||||
error.message?.includes('auto_recording_retry_delay_seconds')) {
|
||||
|
||||
// Try to get the raw camera data and add default auto-recording fields
|
||||
try {
|
||||
const response = await fetch(`${this.baseUrl}/cameras/${cameraName}/config`, {
|
||||
headers: {
|
||||
'Content-Type': 'application/json',
|
||||
}
|
||||
})
|
||||
|
||||
if (!response.ok) {
|
||||
throw new Error(`HTTP ${response.status}: ${response.statusText}`)
|
||||
}
|
||||
|
||||
const rawConfig = await response.json()
|
||||
|
||||
// Add missing auto-recording fields with defaults
|
||||
return {
|
||||
...rawConfig,
|
||||
auto_start_recording_enabled: false,
|
||||
auto_recording_max_retries: 3,
|
||||
auto_recording_retry_delay_seconds: 5
|
||||
}
|
||||
} catch (fallbackError) {
|
||||
throw new Error(`Failed to load camera configuration: ${error.message}`)
|
||||
}
|
||||
}
|
||||
|
||||
throw error
|
||||
}
|
||||
}
|
||||
|
||||
async updateCameraConfig(cameraName: string, config: CameraConfigUpdate): Promise<CameraConfigUpdateResponse> {
|
||||
return this.request(`/cameras/${cameraName}/config`, {
|
||||
method: 'PUT',
|
||||
body: JSON.stringify(config),
|
||||
})
|
||||
}
|
||||
|
||||
async applyCameraConfig(cameraName: string): Promise<CameraConfigApplyResponse> {
|
||||
return this.request(`/cameras/${cameraName}/apply-config`, {
|
||||
method: 'POST',
|
||||
})
|
||||
}
|
||||
|
||||
// Auto-Recording endpoints
|
||||
async enableAutoRecording(cameraName: string): Promise<AutoRecordingConfigResponse> {
|
||||
return this.request(`/cameras/${cameraName}/auto-recording/enable`, {
|
||||
method: 'POST',
|
||||
})
|
||||
}
|
||||
|
||||
async disableAutoRecording(cameraName: string): Promise<AutoRecordingConfigResponse> {
|
||||
return this.request(`/cameras/${cameraName}/auto-recording/disable`, {
|
||||
method: 'POST',
|
||||
})
|
||||
}
|
||||
|
||||
async getAutoRecordingStatus(): Promise<AutoRecordingStatusResponse> {
|
||||
return this.request('/auto-recording/status')
|
||||
}
|
||||
|
||||
// Recording sessions
|
||||
async getRecordings(): Promise<Record<string, RecordingInfo>> {
|
||||
return this.request('/recordings')
|
||||
|
||||
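The new auto-recording endpoints can be exercised straight from the client; the helper below is illustrative and assumes the backend routes added above are deployed:

```typescript
import { visionApi } from './lib/visionApi'

async function toggleAutoRecording(cameraName: string, enable: boolean) {
  const result = enable
    ? await visionApi.enableAutoRecording(cameraName)
    : await visionApi.disableAutoRecording(cameraName)
  console.log(result.message)

  // System-wide view: whether the manager is running, which cameras are enabled,
  // and what is currently waiting in the retry queue.
  const status = await visionApi.getAutoRecordingStatus()
  console.log('Running:', status.running, 'Enabled cameras:', status.enabled_cameras.join(', '))
}
```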
132
test-api-fix.js
Normal file
@@ -0,0 +1,132 @@
|
||||
// Test script to verify the camera configuration API fix
|
||||
// This simulates the VisionApiClient.getCameraConfig method
|
||||
|
||||
class TestVisionApiClient {
|
||||
constructor() {
|
||||
this.baseUrl = 'http://vision:8000'
|
||||
}
|
||||
|
||||
async request(endpoint) {
|
||||
const response = await fetch(`${this.baseUrl}${endpoint}`, {
|
||||
headers: {
|
||||
'Content-Type': 'application/json',
|
||||
}
|
||||
})
|
||||
|
||||
if (!response.ok) {
|
||||
const errorText = await response.text()
|
||||
throw new Error(`HTTP ${response.status}: ${response.statusText}\n${errorText}`)
|
||||
}
|
||||
|
||||
return response.json()
|
||||
}
|
||||
|
||||
// This is our fixed getCameraConfig method
|
||||
async getCameraConfig(cameraName) {
|
||||
try {
|
||||
const config = await this.request(`/cameras/${cameraName}/config`)
|
||||
|
||||
// Ensure auto-recording fields have default values if missing
|
||||
return {
|
||||
...config,
|
||||
auto_start_recording_enabled: config.auto_start_recording_enabled ?? false,
|
||||
auto_recording_max_retries: config.auto_recording_max_retries ?? 3,
|
||||
auto_recording_retry_delay_seconds: config.auto_recording_retry_delay_seconds ?? 5
|
||||
}
|
||||
} catch (error) {
|
||||
// If the error is related to missing auto-recording fields, try to handle it gracefully
|
||||
if (error.message?.includes('auto_start_recording_enabled') ||
|
||||
error.message?.includes('auto_recording_max_retries') ||
|
||||
error.message?.includes('auto_recording_retry_delay_seconds')) {
|
||||
|
||||
// Try to get the raw camera data and add default auto-recording fields
|
||||
try {
|
||||
const response = await fetch(`${this.baseUrl}/cameras/${cameraName}/config`, {
|
||||
headers: {
|
||||
'Content-Type': 'application/json',
|
||||
}
|
||||
})
|
||||
|
||||
if (!response.ok) {
|
||||
throw new Error(`HTTP ${response.status}: ${response.statusText}`)
|
||||
}
|
||||
|
||||
const rawConfig = await response.json()
|
||||
|
||||
// Add missing auto-recording fields with defaults
|
||||
return {
|
||||
...rawConfig,
|
||||
auto_start_recording_enabled: false,
|
||||
auto_recording_max_retries: 3,
|
||||
auto_recording_retry_delay_seconds: 5
|
||||
}
|
||||
} catch (fallbackError) {
|
||||
throw new Error(`Failed to load camera configuration: ${error.message}`)
|
||||
}
|
||||
}
|
||||
|
||||
throw error
|
||||
}
|
||||
}
|
||||
|
||||
async getCameras() {
|
||||
return this.request('/cameras')
|
||||
}
|
||||
}
|
||||
|
||||
// Test function
|
||||
async function testCameraConfigFix() {
|
||||
console.log('🧪 Testing Camera Configuration API Fix')
|
||||
console.log('='.repeat(50))
|

  const api = new TestVisionApiClient()

  try {
    // First get available cameras
    console.log('📋 Getting camera list...')
    const cameras = await api.getCameras()
    const cameraNames = Object.keys(cameras)

    if (cameraNames.length === 0) {
      console.log('❌ No cameras found')
      return
    }

    console.log(`✅ Found ${cameraNames.length} cameras: ${cameraNames.join(', ')}`)

    // Test configuration for each camera
    for (const cameraName of cameraNames) {
      console.log(`\n🎥 Testing configuration for ${cameraName}...`)

      try {
        const config = await api.getCameraConfig(cameraName)

        console.log(`✅ Configuration loaded successfully for ${cameraName}`)
        console.log(`   - auto_start_recording_enabled: ${config.auto_start_recording_enabled}`)
        console.log(`   - auto_recording_max_retries: ${config.auto_recording_max_retries}`)
        console.log(`   - auto_recording_retry_delay_seconds: ${config.auto_recording_retry_delay_seconds}`)
        console.log(`   - exposure_ms: ${config.exposure_ms}`)
        console.log(`   - gain: ${config.gain}`)

      } catch (error) {
        console.log(`❌ Configuration failed for ${cameraName}: ${error.message}`)
      }
    }

    console.log('\n🎉 Camera configuration API test completed!')

  } catch (error) {
    console.log(`❌ Test failed: ${error.message}`)
  }
}

// Export for use in browser console or Node.js
if (typeof module !== 'undefined' && module.exports) {
  module.exports = { TestVisionApiClient, testCameraConfigFix }
} else {
  // Browser environment
  window.TestVisionApiClient = TestVisionApiClient
  window.testCameraConfigFix = testCameraConfigFix
}

console.log('📝 Test script loaded. Run testCameraConfigFix() to test the fix.')
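The script exposes `testCameraConfigFix` on `window` in a browser and through `module.exports` under Node. A minimal usage sketch, assuming Node 18+ (for global `fetch`) and that the file sits in the current directory:

```typescript
// Run the verification from a small Node script; the relative path is an assumption.
const { testCameraConfigFix } = require('./test-api-fix.js')

testCameraConfigFix()
```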
229
test-camera-config.html
Normal file
@@ -0,0 +1,229 @@
<!DOCTYPE html>
<html lang="en">

<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Camera Configuration Test</title>
    <style>
        body {
            font-family: Arial, sans-serif;
            max-width: 800px;
            margin: 0 auto;
            padding: 20px;
        }

        .test-section {
            margin: 20px 0;
            padding: 15px;
            border: 1px solid #ddd;
            border-radius: 5px;
        }

        .success {
            background-color: #d4edda;
            border-color: #c3e6cb;
            color: #155724;
        }

        .error {
            background-color: #f8d7da;
            border-color: #f5c6cb;
            color: #721c24;
        }

        .loading {
            background-color: #d1ecf1;
            border-color: #bee5eb;
            color: #0c5460;
        }

        button {
            background-color: #007bff;
            color: white;
            border: none;
            padding: 10px 20px;
            border-radius: 5px;
            cursor: pointer;
            margin: 5px;
        }

        button:hover {
            background-color: #0056b3;
        }

        button:disabled {
            background-color: #6c757d;
            cursor: not-allowed;
        }

        pre {
            background-color: #f8f9fa;
            padding: 10px;
            border-radius: 5px;
            overflow-x: auto;
            white-space: pre-wrap;
        }
    </style>
</head>

<body>
    <h1>Camera Configuration API Test</h1>
    <p>This page tests the camera configuration API to verify the auto-recording fields issue is resolved.</p>

    <div class="test-section">
        <h3>Test Camera Configuration API</h3>
        <button onclick="testCameraConfig()">Test Camera Config</button>
        <button onclick="testCameraList()">Test Camera List</button>
        <button onclick="testApiFixMethod()">Test API Fix Method</button>
        <div id="test-results"></div>
    </div>

    <script src="test-api-fix.js"></script>
    <script>
        const API_BASE = 'http://vision:8000'; // Change to your vision API URL if different

        async function testCameraList() {
            const resultsDiv = document.getElementById('test-results');
            resultsDiv.innerHTML = '<div class="loading">Testing camera list...</div>';

            try {
                const response = await fetch(`${API_BASE}/cameras`);

                if (!response.ok) {
                    throw new Error(`HTTP ${response.status}: ${response.statusText}`);
                }

                const cameras = await response.json();

                resultsDiv.innerHTML = `
                    <div class="success">
                        <h4>✅ Camera List Success</h4>
                        <p>Found ${Object.keys(cameras).length} cameras:</p>
                        <pre>${JSON.stringify(cameras, null, 2)}</pre>
                    </div>
                `;

                // Store camera names for config test
                window.availableCameras = Object.keys(cameras);

            } catch (error) {
                resultsDiv.innerHTML = `
                    <div class="error">
                        <h4>❌ Camera List Failed</h4>
                        <p>Error: ${error.message}</p>
                    </div>
                `;
            }
        }

        async function testCameraConfig() {
            const resultsDiv = document.getElementById('test-results');

            // First get camera list if we don't have it
            if (!window.availableCameras) {
                await testCameraList();
                if (!window.availableCameras || window.availableCameras.length === 0) {
                    return;
                }
            }

            const cameraName = window.availableCameras[0]; // Use first camera
            resultsDiv.innerHTML = `<div class="loading">Testing camera configuration for ${cameraName}...</div>`;

            try {
                const response = await fetch(`${API_BASE}/cameras/${cameraName}/config`);

                if (!response.ok) {
                    const errorText = await response.text();
                    throw new Error(`HTTP ${response.status}: ${response.statusText}\n${errorText}`);
                }

                const config = await response.json();

                // Check if auto-recording fields are present
                const hasAutoRecordingFields =
                    'auto_start_recording_enabled' in config ||
                    'auto_recording_max_retries' in config ||
                    'auto_recording_retry_delay_seconds' in config;

                resultsDiv.innerHTML = `
                    <div class="success">
                        <h4>✅ Camera Configuration Success</h4>
                        <p>Camera: ${cameraName}</p>
                        <p>Auto-recording fields present: ${hasAutoRecordingFields ? 'Yes' : 'No'}</p>
                        <details>
                            <summary>Full Configuration</summary>
                            <pre>${JSON.stringify(config, null, 2)}</pre>
                        </details>
                    </div>
                `;

            } catch (error) {
                resultsDiv.innerHTML = `
                    <div class="error">
                        <h4>❌ Camera Configuration Failed</h4>
                        <p>Camera: ${cameraName}</p>
                        <p>Error: ${error.message}</p>
                        <details>
                            <summary>Error Details</summary>
                            <pre>${error.stack || error.toString()}</pre>
                        </details>
                    </div>
                `;
            }
        }

        async function testApiFixMethod() {
            const resultsDiv = document.getElementById('test-results');
            resultsDiv.innerHTML = '<div class="loading">Testing API fix method...</div>';

            try {
                const api = new TestVisionApiClient();

                // Get cameras first
                const cameras = await api.getCameras();
                const cameraNames = Object.keys(cameras);

                if (cameraNames.length === 0) {
                    throw new Error('No cameras found');
                }

                const cameraName = cameraNames[0];
                const config = await api.getCameraConfig(cameraName);

                resultsDiv.innerHTML = `
                    <div class="success">
                        <h4>✅ API Fix Method Success</h4>
                        <p>Camera: ${cameraName}</p>
                        <p>Auto-recording fields:</p>
                        <ul>
                            <li>auto_start_recording_enabled: ${config.auto_start_recording_enabled}</li>
                            <li>auto_recording_max_retries: ${config.auto_recording_max_retries}</li>
                            <li>auto_recording_retry_delay_seconds: ${config.auto_recording_retry_delay_seconds}</li>
                        </ul>
                        <details>
                            <summary>Full Configuration</summary>
                            <pre>${JSON.stringify(config, null, 2)}</pre>
                        </details>
                    </div>
                `;

            } catch (error) {
                resultsDiv.innerHTML = `
                    <div class="error">
                        <h4>❌ API Fix Method Failed</h4>
                        <p>Error: ${error.message}</p>
                    </div>
                `;
            }
        }

        // Auto-run camera list test on page load
        window.addEventListener('load', () => {
            testCameraList();
        });
    </script>
</body>

</html>