feat: Enhance camera streaming functionality with stop streaming feature and update UI for better user experience

Alireza Vaezi
2025-07-31 22:17:08 -04:00
parent 1f47e89a4d
commit 97f22d239d
7 changed files with 756 additions and 37 deletions


@@ -0,0 +1,127 @@
# Blower Camera (Camera1) Configuration
This document describes the default configuration for the blower camera (Camera1) based on the GigE camera settings from the dedicated software.
## Camera Identification
- **Camera Name**: camera1 (Blower-Yield-Cam)
- **Machine Topic**: blower_separator
- **Purpose**: Monitors the blower separator machine
## Configuration Summary
Based on the camera settings screenshots, the following configuration has been applied to Camera1:
### Exposure Settings
- **Mode**: Manual (not Auto)
- **Exposure Time**: 1.0ms (1000μs)
- **Gain**: 3.5x (350 in camera units)
- **Anti-Flicker**: Enabled (60Hz light frequency; see Advanced Settings)
### Color Processing Settings
- **White Balance Mode**: Manual (not Auto)
- **Color Temperature**: D65 (6500K)
- **RGB Gain Values**:
- Red Gain: 1.00
- Green Gain: 1.00
- Blue Gain: 1.00
- **Saturation**: 100 (normal)
### LUT (Look-Up Table) Settings
- **Mode**: Dynamically generated (not Preset or Custom)
- **Gamma**: 1.00 (100 in config units)
- **Contrast**: 100 (normal)
### Advanced Settings
- **Anti-Flicker**: Enabled
- **Light Frequency**: 60Hz (1 in config)
- **Bit Depth**: 8-bit
- **HDR**: Disabled
## Configuration Mapping
The screenshots show the key settings below, which have been mapped to config.json; a short sketch of the corresponding SDK calls follows the table:
| Screenshot Setting | Config Parameter | Value | Notes |
|-------------------|------------------|-------|-------|
| Manual Exposure | auto_exposure | false | Exposure mode set to manual |
| Time(ms): 1.0000 | exposure_ms | 1.0 | Exposure time in milliseconds |
| Gain(multiple): 3.500 | gain | 3.5 | Analog gain multiplier |
| Manual White Balance | auto_white_balance | false | Manual WB mode |
| Color Temperature: D65 | color_temperature_preset | 6500 | D65 = 6500K |
| Red Gain: 1.00 | wb_red_gain | 1.0 | Manual RGB gain |
| Green Gain: 1.00 | wb_green_gain | 1.0 | Manual RGB gain |
| Blue Gain: 1.00 | wb_blue_gain | 1.0 | Manual RGB gain |
| Saturation: 100 | saturation | 100 | Color saturation |
| Gamma: 1.00 | gamma | 100 | Gamma correction |
| Contrast: 100 | contrast | 100 | Image contrast |
| 50HZ Anti-Flicker | anti_flicker_enabled | true | Flicker reduction |
| 60Hz frequency | light_frequency | 1 | Power frequency |
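To make the unit conversions in this table concrete, here is a minimal sketch of how the mapped values translate into SDK calls. It follows the same conversions used by the streaming/recording code later in this commit (milliseconds to microseconds for exposure, gain multiplied by 100 for camera units); the `hCamera` handle and the gamma/contrast/saturation setter names are assumptions based on the MindVision-style `mvsdk` bindings used elsewhere in this project.
```python
# Sketch: applying the mapped camera1 values through MindVision-style mvsdk bindings.
# hCamera is an already-opened camera handle; the gamma/contrast/saturation setters
# are assumed to exist in the project's mvsdk module.
import mvsdk

def apply_camera1_mapping(hCamera, cfg):
    mvsdk.CameraSetAeState(hCamera, 0)                                    # auto_exposure: false -> manual exposure
    mvsdk.CameraSetExposureTime(hCamera, int(cfg["exposure_ms"] * 1000))  # 1.0 ms -> 1000 us
    mvsdk.CameraSetAnalogGain(hCamera, int(cfg["gain"] * 100))            # 3.5x -> 350 camera units
    mvsdk.CameraSetGamma(hCamera, cfg["gamma"])                           # 100 -> gamma 1.00
    mvsdk.CameraSetContrast(hCamera, cfg["contrast"])                     # 100 -> normal contrast
    mvsdk.CameraSetSaturation(hCamera, cfg["saturation"])                 # 100 -> normal saturation
```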
## Current Configuration
The current config.json for camera1 includes:
```json
{
"name": "camera1",
"machine_topic": "blower_separator",
"storage_path": "/storage/camera1",
"exposure_ms": 1.0,
"gain": 3.5,
"target_fps": 0,
"enabled": true,
"auto_start_recording_enabled": true,
"auto_recording_max_retries": 3,
"auto_recording_retry_delay_seconds": 2,
"sharpness": 100,
"contrast": 100,
"saturation": 100,
"gamma": 100,
"noise_filter_enabled": false,
"denoise_3d_enabled": false,
"auto_white_balance": false,
"color_temperature_preset": 6500,
"anti_flicker_enabled": true,
"light_frequency": 1,
"bit_depth": 8,
"hdr_enabled": false,
"hdr_gain_mode": 0
}
```
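For reference, here is a minimal, illustrative sketch of how a config like the one above could be loaded into the attribute-style object that the streaming code reads (`camera_config.exposure_ms`, `camera_config.gain`). The `CameraConfig` dataclass and the file path are hypothetical stand-ins, not the project's actual loader.
```python
# Illustrative loader: turn camera1's config.json into an attribute-style object.
# The dataclass fields mirror a subset of the JSON keys above; the path is hypothetical.
import json
from dataclasses import dataclass

@dataclass
class CameraConfig:
    name: str
    machine_topic: str
    exposure_ms: float
    gain: float
    sharpness: int = 100
    contrast: int = 100
    saturation: int = 100
    gamma: int = 100

def load_camera_config(path="config/camera1.json"):
    with open(path) as f:
        raw = json.load(f)
    # Keep only the keys the dataclass knows about
    known = {k: raw[k] for k in CameraConfig.__dataclass_fields__ if k in raw}
    return CameraConfig(**known)
```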
## Camera Preview Enhancement
**Important Update**: The camera preview/streaming functionality has been enhanced to apply all default configuration settings from config.json, ensuring that preview images match the quality and appearance of recorded videos.
### What This Means for Camera1
When you view the camera preview, you'll now see:
- **Manual exposure** (1.0ms) and **high gain** (3.5x) applied
- **60Hz anti-flicker** filtering active
- **Manual white balance** with balanced RGB gains (1.0, 1.0, 1.0)
- **Standard image processing** (sharpness: 100, contrast: 100, gamma: 100, saturation: 100)
- **D65 color temperature** (6500K) applied
This ensures the preview accurately represents what will be recorded.
## Notes
1. **Machine Topic Correction**: The machine topic has been corrected from "vibratory_conveyor" to "blower_separator" to match the camera's actual monitoring purpose.
2. **Manual White Balance**: The camera is configured for manual white balance with D65 color temperature, which is appropriate for daylight conditions.
3. **RGB Gain Support**: The current configuration system needs to be extended to support individual RGB gain values for manual white balance fine-tuning.
4. **Anti-Flicker**: Enabled to reduce artificial lighting interference, set to 60Hz to match North American power frequency.
5. **LUT Mode**: The camera uses dynamically generated LUT with gamma=1.00 and contrast=100, which provides linear response.
## Future Enhancements
To fully support all settings shown in the screenshots, the following parameters should be added to the configuration system (a sketch of how the RGB gains might be applied follows this list):
- `wb_red_gain`: Red channel gain for manual white balance (0.0-3.99)
- `wb_green_gain`: Green channel gain for manual white balance (0.0-3.99)
- `wb_blue_gain`: Blue channel gain for manual white balance (0.0-3.99)
- `lut_mode`: LUT generation mode (0=dynamic, 1=preset, 2=custom)
- `lut_preset`: Preset LUT selection when using preset mode
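If these parameters are added, one plausible way to apply the manual RGB gains is through the SDK's per-channel gain setter. The sketch below assumes `mvsdk.CameraSetGain(hCamera, r, g, b)` takes integer gains scaled so that 100 equals 1.00x, as the MindVision bindings typically do; verify the exact signature against the project's `mvsdk` module before relying on it.
```python
# Sketch only: applying per-channel white-balance gains once wb_*_gain exist in config.json.
# Assumes mvsdk.CameraSetGain(hCamera, r, g, b) with integer gains where 100 == 1.00x.
import mvsdk

def apply_manual_wb_gains(hCamera, cfg):
    r = int(cfg.get("wb_red_gain", 1.0) * 100)
    g = int(cfg.get("wb_green_gain", 1.0) * 100)
    b = int(cfg.get("wb_blue_gain", 1.0) * 100)
    mvsdk.CameraSetGain(hCamera, r, g, b)  # camera1 defaults: (100, 100, 100)
```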


@@ -0,0 +1,150 @@
# Conveyor Camera (Camera2) Configuration
This document describes the default configuration for the conveyor camera (Camera2) based on the GigE camera settings from the dedicated software.
## Camera Identification
- **Camera Name**: camera2 (Cracker-Cam)
- **Machine Topic**: vibratory_conveyor
- **Purpose**: Monitors the vibratory conveyor/cracker machine
## Configuration Summary
Based on the camera settings screenshots, the following configuration has been applied to Camera2:
### Color Processing Settings
- **White Balance Mode**: Manual (not Auto)
- **Color Temperature**: D65 (6500K)
- **RGB Gain Values**:
- Red Gain: 1.01
- Green Gain: 1.00
- Blue Gain: 0.87
- **Saturation**: 100 (normal)
### LUT (Look-Up Table) Settings
- **Mode**: Dynamically generated (not Preset or Custom)
- **Gamma**: 1.00 (100 in config units)
- **Contrast**: 100 (normal)
### Graphic Processing Settings
- **Sharpness Level**: 0 (no sharpening applied)
- **Noise Reduction**:
- Denoise2D: Disabled
- Denoise3D: Disabled
- **Rotation**: Disabled
- **Lens Distortion Correction**: Disabled
- **Dead Pixel Correction**: Enabled
- **Flat Fielding Correction**: Disabled
## Configuration Mapping
The screenshots show the key settings below, which have been mapped to config.json:
| Screenshot Setting | Config Parameter | Value | Notes |
|-------------------|------------------|-------|-------|
| Manual White Balance | auto_white_balance | false | Manual WB mode |
| Color Temperature: D65 | color_temperature_preset | 6500 | D65 = 6500K |
| Red Gain: 1.01 | wb_red_gain | 1.01 | Manual RGB gain |
| Green Gain: 1.00 | wb_green_gain | 1.0 | Manual RGB gain |
| Blue Gain: 0.87 | wb_blue_gain | 0.87 | Manual RGB gain |
| Saturation: 100 | saturation | 100 | Color saturation |
| Gamma: 1.00 | gamma | 100 | Gamma correction |
| Contrast: 100 | contrast | 100 | Image contrast |
| Sharpen Level: 0 | sharpness | 0 | No sharpening |
| Denoise2D: Disabled | noise_filter_enabled | false | Basic noise filter off |
| Denoise3D: Disabled | denoise_3d_enabled | false | Advanced denoising off |
## Current Configuration
The current config.json for camera2 includes:
```json
{
"name": "camera2",
"machine_topic": "vibratory_conveyor",
"storage_path": "/storage/camera2",
"exposure_ms": 0.5,
"gain": 0.3,
"target_fps": 0,
"enabled": true,
"auto_start_recording_enabled": true,
"auto_recording_max_retries": 3,
"auto_recording_retry_delay_seconds": 2,
"sharpness": 0,
"contrast": 100,
"saturation": 100,
"gamma": 100,
"noise_filter_enabled": false,
"denoise_3d_enabled": false,
"auto_white_balance": false,
"color_temperature_preset": 6500,
"wb_red_gain": 1.01,
"wb_green_gain": 1.0,
"wb_blue_gain": 0.87,
"anti_flicker_enabled": false,
"light_frequency": 1,
"bit_depth": 8,
"hdr_enabled": false,
"hdr_gain_mode": 0
}
```
## Key Differences from Camera1 (Blower Camera)
1. **RGB Gain Tuning**: Camera2 has custom RGB gains (R:1.01, G:1.00, B:0.87) vs Camera1's balanced gains (all 1.0)
2. **Sharpness**: Camera2 has sharpness disabled (0) vs Camera1's normal sharpness (100)
3. **Exposure/Gain**: Camera2 uses lower exposure (0.5ms) and gain (0.3x) vs Camera1's higher values (1.0ms, 3.5x)
4. **Anti-Flicker**: Camera2 has anti-flicker disabled vs Camera1's enabled anti-flicker
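The differences listed above can also be surfaced programmatically when auditing the two configs; the sketch below uses only the standard library, and the config.json paths are illustrative.
```python
# Illustrative helper: report the keys where two camera config files differ.
# Paths are hypothetical; point them at the real config.json files.
import json

def _load_json(path):
    with open(path) as f:
        return json.load(f)

def diff_camera_configs(path_a, path_b):
    a, b = _load_json(path_a), _load_json(path_b)
    keys = sorted(set(a) | set(b))
    return {k: (a.get(k), b.get(k)) for k in keys if a.get(k) != b.get(k)}

# Expected highlights for camera1 vs camera2 based on the configs in these docs:
# exposure_ms (1.0 vs 0.5), gain (3.5 vs 0.3), sharpness (100 vs 0),
# anti_flicker_enabled (True vs False), wb_*_gain keys present only for camera2.
```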
## Notes
1. **Custom White Balance**: Camera2 uses manual white balance with custom RGB gains, suggesting specific lighting conditions or color correction requirements for the conveyor monitoring.
2. **No Sharpening**: Sharpness is set to 0, indicating the raw image quality is preferred without artificial enhancement.
3. **Minimal Noise Reduction**: Both 2D and 3D denoising are disabled, prioritizing image authenticity over noise reduction.
4. **Dead Pixel Correction**: Enabled to handle any defective pixels on the sensor.
5. **Lower Sensitivity**: The lower exposure and gain settings suggest better lighting conditions or different monitoring requirements compared to the blower camera.
## Camera Preview Enhancement
**Important Update**: The camera preview/streaming functionality has been enhanced to apply all default configuration settings from config.json, ensuring that preview images match the quality and appearance of recorded videos.
### What Changed
Previously, camera preview only applied basic settings (exposure, gain, trigger mode). Now, the preview applies the complete configuration including:
- **Image Quality**: Sharpness, contrast, gamma, saturation
- **Color Processing**: White balance mode, color temperature, RGB gains
- **Advanced Settings**: Anti-flicker, light frequency, HDR settings
- **Noise Reduction**: Filter and 3D denoising settings (where supported)
### Benefits
1. **WYSIWYG Preview**: What you see in the preview is exactly what gets recorded
2. **Accurate Color Representation**: Manual white balance and RGB gains are applied to preview
3. **Consistent Image Quality**: Sharpness, contrast, and gamma settings match recording
4. **Proper Exposure**: Anti-flicker and lighting frequency settings are applied
### Technical Implementation
The `CameraStreamer` class now includes the same comprehensive configuration methods as `CameraRecorder`:
- `_configure_image_quality()`: Applies sharpness, contrast, gamma, saturation
- `_configure_color_settings()`: Applies white balance mode, color temperature, RGB gains
- `_configure_advanced_settings()`: Applies anti-flicker, light frequency, HDR
- `_configure_noise_reduction()`: Applies noise filter settings
These methods are called during camera initialization for streaming, ensuring all config.json settings are applied.
## Future Enhancements
Additional parameters that could be added to support all graphic processing features:
- `rotation_angle`: Image rotation (0, 90, 180, 270 degrees)
- `lens_distortion_correction`: Enable/disable lens distortion correction
- `dead_pixel_correction`: Enable/disable dead pixel correction
- `flat_fielding_correction`: Enable/disable flat fielding correction
- `mirror_horizontal`: Horizontal mirroring
- `mirror_vertical`: Vertical mirroring


@@ -0,0 +1,159 @@
# Camera Preview Enhancement
## Overview
The camera preview/streaming functionality has been significantly enhanced to apply all default configuration settings from `config.json`, ensuring that preview images accurately represent what will be recorded.
## Problem Solved
Previously, camera preview only applied basic settings (exposure, gain, trigger mode, frame rate), while recording applied the full configuration. This meant:
- Preview images looked different from recorded videos
- Color balance, sharpness, and other image quality settings were not visible in preview
- Users couldn't accurately assess the final recording quality from the preview
## Solution Implemented
The `CameraStreamer` class has been enhanced with comprehensive configuration methods that mirror those in `CameraRecorder`:
### New Configuration Methods Added
1. **`_configure_image_quality()`**
- Applies sharpness settings (0-200)
- Applies contrast settings (0-200)
- Applies gamma correction (0-300)
- Applies saturation for color cameras (0-200)
2. **`_configure_color_settings()`**
- Sets white balance mode (auto/manual)
- Applies color temperature presets
- Sets manual RGB gains for precise color tuning
3. **`_configure_advanced_settings()`**
- Enables/disables anti-flicker filtering
- Sets light frequency (50Hz/60Hz)
- Configures HDR settings when available
4. **`_configure_noise_reduction()`**
- Configures noise filter settings
- Configures 3D denoising settings
### Enhanced Main Configuration Method
The `_configure_streaming_settings()` method now calls all configuration methods:
```python
def _configure_streaming_settings(self):
    """Configure camera settings from config.json for streaming"""
    try:
        # Basic settings (existing)
        mvsdk.CameraSetTriggerMode(self.hCamera, 0)
        mvsdk.CameraSetAeState(self.hCamera, 0)
        exposure_us = int(self.camera_config.exposure_ms * 1000)
        mvsdk.CameraSetExposureTime(self.hCamera, exposure_us)
        gain_value = int(self.camera_config.gain * 100)
        mvsdk.CameraSetAnalogGain(self.hCamera, gain_value)

        # Comprehensive configuration (new)
        self._configure_image_quality()
        self._configure_noise_reduction()
        if not self.monoCamera:
            self._configure_color_settings()
        self._configure_advanced_settings()
    except Exception as e:
        self.logger.warning(f"Could not configure some streaming settings: {e}")
```
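For context, here is a plausible sketch of one of the helper methods named above. The white-balance calls (`CameraSetWbMode`, `CameraSetGain`, `CameraSetSaturation`) exist in MindVision-style `mvsdk` bindings, but the exact signatures, the gain scaling, and the optional `wb_*_gain` config fields should be treated as assumptions rather than the project's actual implementation.
```python
# Plausible sketch of _configure_color_settings(); not the project's exact code.
# Assumes the same module-level mvsdk import as the block above and integer RGB gains
# where 100 == 1.00x.
def _configure_color_settings(self):
    try:
        cfg = self.camera_config
        # Auto vs. manual white balance
        mvsdk.CameraSetWbMode(self.hCamera, 1 if cfg.auto_white_balance else 0)
        if not cfg.auto_white_balance:
            # Manual RGB gains; fall back to 1.0 when the fields are absent (camera1)
            r = int(getattr(cfg, "wb_red_gain", 1.0) * 100)
            g = int(getattr(cfg, "wb_green_gain", 1.0) * 100)
            b = int(getattr(cfg, "wb_blue_gain", 1.0) * 100)
            mvsdk.CameraSetGain(self.hCamera, r, g, b)
        # Saturation (0-200, 100 = normal)
        mvsdk.CameraSetSaturation(self.hCamera, cfg.saturation)
    except Exception as e:
        self.logger.warning(f"Could not apply color settings: {e}")
```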
## Benefits
### 1. WYSIWYG Preview
- **What You See Is What You Get**: Preview now accurately represents final recording quality
- **Real-time Assessment**: Users can evaluate recording quality before starting actual recording
- **Consistent Experience**: No surprises when comparing preview to recorded footage
### 2. Accurate Color Representation
- **Manual White Balance**: RGB gains are applied to preview for accurate color reproduction
- **Color Temperature**: D65 or other presets are applied consistently
- **Saturation**: Color intensity matches recording settings
### 3. Proper Image Quality
- **Sharpness**: Edge enhancement settings are visible in preview
- **Contrast**: Dynamic range adjustments are applied
- **Gamma**: Brightness curve corrections are active
### 4. Environmental Adaptation
- **Anti-Flicker**: Artificial lighting interference is filtered in preview
- **Light Frequency**: 50Hz/60Hz settings match local power grid
- **HDR**: High dynamic range processing when enabled
## Camera-Specific Impact
### Camera1 (Blower Separator)
Preview now shows:
- Manual exposure (1.0ms) and high gain (3.5x)
- 60Hz anti-flicker filtering
- Manual white balance with balanced RGB gains (1.0, 1.0, 1.0)
- Standard image processing (sharpness: 100, contrast: 100, gamma: 100, saturation: 100)
- D65 color temperature (6500K)
### Camera2 (Conveyor/Cracker)
Preview now shows:
- Manual exposure (0.5ms) and lower gain (0.3x)
- Custom RGB color tuning (R:1.01, G:1.00, B:0.87)
- No image sharpening (sharpness: 0)
- Standard saturation (100) and gamma (100)
- D65 color temperature with manual white balance
## Technical Implementation Details
### Error Handling
- All configuration methods include try-catch blocks
- Warnings are logged for unsupported features
- Graceful degradation when SDK functions are unavailable
- Streaming continues even if some settings fail to apply
### SDK Compatibility
- Checks for function availability before calling
- Handles different SDK versions gracefully
- Logs informational messages for unavailable features
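The availability check described above boils down to a small guard around each SDK call; this is a generic sketch of that pattern (the helper name is hypothetical), not the project's exact code.
```python
# Generic guarded-call pattern: apply a setting only if the SDK exposes the function,
# and log instead of failing when it does not. Assumes the module-level mvsdk import.
def _apply_if_supported(self, func_name, *args):
    func = getattr(mvsdk, func_name, None)
    if func is None:
        self.logger.info(f"{func_name} not available in this SDK build; skipping")
        return
    try:
        func(self.hCamera, *args)
    except Exception as e:
        self.logger.warning(f"{func_name} failed: {e}")
```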
### Performance Considerations
- Configuration is applied once during camera initialization
- No performance impact on streaming frame rate
- Separate camera instance for streaming (doesn't interfere with recording)
## Usage
No changes are required from users; the enhancement is automatic:
1. **Start Preview**: Use existing preview endpoints
2. **View Stream**: Camera automatically applies all config.json settings
3. **Compare**: Preview now matches recording quality exactly
### API Endpoints (unchanged)
- `GET /cameras/{camera_name}/stream` - Get live MJPEG stream
- `POST /cameras/{camera_name}/start-stream` - Start streaming
- `POST /cameras/{camera_name}/stop-stream` - Stop streaming
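As a quick usage example, the endpoints above can be exercised from Python in the same way the bundled `test-stop-streaming.html` page does with `fetch`; the base URL and camera name below mirror that test page and are assumptions about your deployment.
```python
# Quick check of the streaming endpoints with the requests library.
# Base URL and camera name mirror the bundled test page; adjust for your setup.
import requests

BASE = "http://localhost:8000"
camera = "camera1"

start = requests.post(f"{BASE}/cameras/{camera}/start-stream")
print(start.status_code, start.json())

# GET /cameras/{camera}/stream serves a live MJPEG stream; read it with stream=True
# if you want to consume frames rather than just start/stop the stream.

stop = requests.post(f"{BASE}/cameras/{camera}/stop-stream")
print(stop.status_code, stop.json())
```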
## Future Enhancements
Additional settings that could be added to further improve preview accuracy:
1. **Geometric Corrections**
- Lens distortion correction
- Dead pixel correction
- Flat fielding correction
2. **Image Transformations**
- Rotation (90°, 180°, 270°)
- Horizontal/vertical mirroring
3. **Advanced Processing**
- Custom LUT (Look-Up Table) support
- Advanced noise reduction algorithms
- Real-time image enhancement filters
## Conclusion
This enhancement significantly improves the user experience by providing accurate, real-time preview of camera output with all configuration settings applied. Users can now confidently assess recording quality, adjust settings, and ensure optimal camera performance before starting critical recordings.


@@ -457,31 +457,18 @@ export function CameraConfigModal({ cameraName, isOpen, onClose, onSuccess, onEr
   <p className="text-xs text-gray-500 mt-1">Start recording when MQTT machine state changes to ON</p>
 </div>
-<div>
-  <label className="flex items-center space-x-2">
-    <input
-      type="checkbox"
-      checked={config.auto_start_recording_enabled ?? false}
-      onChange={(e) => updateSetting('auto_start_recording_enabled', e.target.checked)}
-      className="rounded border-gray-300 text-indigo-600 focus:ring-indigo-500"
-    />
-    <span className="text-sm font-medium text-gray-700">Enhanced Auto Recording</span>
-  </label>
-  <p className="text-xs text-gray-500 mt-1">Advanced auto-recording with retry logic</p>
-</div>
 <div>
   <label className="block text-sm font-medium text-gray-700 mb-2">
-    Max Retries: {config.auto_recording_max_retries ?? 3}
+    Max Retries: {config.auto_recording_max_retries}
   </label>
   <input
     type="range"
     min="1"
     max="10"
-    value={config.auto_recording_max_retries ?? 3}
+    step="1"
+    value={config.auto_recording_max_retries}
     onChange={(e) => updateSetting('auto_recording_max_retries', parseInt(e.target.value))}
     className="w-full"
-    disabled={!config.auto_start_recording_enabled}
   />
   <div className="flex justify-between text-xs text-gray-500 mt-1">
     <span>1</span>
@@ -491,16 +478,16 @@ export function CameraConfigModal({ cameraName, isOpen, onClose, onSuccess, onEr
 <div>
   <label className="block text-sm font-medium text-gray-700 mb-2">
-    Retry Delay: {config.auto_recording_retry_delay_seconds ?? 5}s
+    Retry Delay (seconds): {config.auto_recording_retry_delay_seconds}
   </label>
   <input
     type="range"
     min="1"
     max="30"
-    value={config.auto_recording_retry_delay_seconds ?? 5}
+    step="1"
+    value={config.auto_recording_retry_delay_seconds}
     onChange={(e) => updateSetting('auto_recording_retry_delay_seconds', parseInt(e.target.value))}
     className="w-full"
-    disabled={!config.auto_start_recording_enabled}
   />
   <div className="flex justify-between text-xs text-gray-500 mt-1">
     <span>1s</span>
@@ -526,8 +513,6 @@ export function CameraConfigModal({ cameraName, isOpen, onClose, onSuccess, onEr
   <li>Noise reduction settings require camera restart to take effect</li>
   <li>Use "Apply & Restart" to apply settings that require restart</li>
   <li>HDR mode may impact performance when enabled</li>
-  <li>Auto-recording monitors MQTT machine state changes for automatic recording</li>
-  <li>Enhanced auto-recording provides retry logic for failed recording attempts</li>
 </ul>
 </div>
 </div>


@@ -168,13 +168,15 @@ const CamerasStatus = memo(({
   onConfigureCamera,
   onStartRecording,
   onStopRecording,
-  onPreviewCamera
+  onPreviewCamera,
+  onStopStreaming
 }: {
   systemStatus: SystemStatus,
   onConfigureCamera: (cameraName: string) => void,
   onStartRecording: (cameraName: string) => Promise<void>,
   onStopRecording: (cameraName: string) => Promise<void>,
-  onPreviewCamera: (cameraName: string) => void
+  onPreviewCamera: (cameraName: string) => void,
+  onStopStreaming: (cameraName: string) => Promise<void>
 }) => {
   const { isAdmin } = useAuth()
@@ -325,10 +327,14 @@ const CamerasStatus = memo(({
       Stop Recording
     </button>
   )}
+</div>
+{/* Preview and Streaming Controls */}
+<div className="flex space-x-2">
   <button
     onClick={() => onPreviewCamera(cameraName)}
     disabled={!isConnected}
-    className={`px-3 py-2 text-sm font-medium rounded-md focus:outline-none focus:ring-2 focus:ring-offset-2 ${isConnected
+    className={`flex-1 px-3 py-2 text-sm font-medium rounded-md focus:outline-none focus:ring-2 focus:ring-offset-2 ${isConnected
       ? 'text-blue-600 bg-blue-50 border border-blue-200 hover:bg-blue-100 focus:ring-blue-500'
       : 'text-gray-400 bg-gray-50 border border-gray-200 cursor-not-allowed'
       }`}
@@ -339,10 +345,27 @@ const CamerasStatus = memo(({
     </svg>
     Preview
   </button>
-</div>
-{/* Admin Configuration Button */}
-{isAdmin() && (
+  <button
+    onClick={() => onStopStreaming(cameraName)}
+    disabled={!isConnected}
+    className={`flex-1 px-3 py-2 text-sm font-medium rounded-md focus:outline-none focus:ring-2 focus:ring-offset-2 ${isConnected
+      ? 'text-orange-600 bg-orange-50 border border-orange-200 hover:bg-orange-100 focus:ring-orange-500'
+      : 'text-gray-400 bg-gray-50 border border-gray-200 cursor-not-allowed'
+      }`}
+  >
+    <svg className="w-4 h-4 inline-block mr-2" fill="none" stroke="currentColor" viewBox="0 0 24 24">
+      <path strokeLinecap="round" strokeLinejoin="round" strokeWidth={2} d="M21 12a9 9 0 11-18 0 9 9 0 0118 0z" />
+      <path strokeLinecap="round" strokeLinejoin="round" strokeWidth={2} d="M6 18L18 6M6 6l12 12" />
+    </svg>
+    Stop Streaming
+  </button>
+</div>
+</div>
+{/* Admin Configuration Button */}
+{isAdmin() && (
+  <div className="mt-3 pt-3 border-t border-gray-200">
     <button
       onClick={() => onConfigureCamera(cameraName)}
       className="w-full px-3 py-2 text-sm font-medium text-indigo-600 bg-indigo-50 border border-indigo-200 rounded-md hover:bg-indigo-100 focus:outline-none focus:ring-2 focus:ring-indigo-500 focus:ring-offset-2"
@@ -353,8 +376,8 @@ const CamerasStatus = memo(({
       </svg>
       Configure Camera
     </button>
-  )}
-</div>
+  </div>
+)}
 </div>
 </div>
 )
@@ -617,8 +640,7 @@ export function VisionSystem() {
   const result = await visionApi.stopRecording(cameraName)
   if (result.success) {
-    const duration = result.duration_seconds ? ` (${result.duration_seconds}s)` : ''
-    setNotification({ type: 'success', message: `Recording stopped${duration}` })
+    setNotification({ type: 'success', message: `Recording stopped: ${result.filename}` })
     // Refresh data to update recording status
     fetchData(false)
   } else {
@@ -635,6 +657,23 @@ export function VisionSystem() {
   setPreviewModalOpen(true)
 }
+const handleStopStreaming = async (cameraName: string) => {
+  try {
+    const result = await visionApi.stopStream(cameraName)
+    if (result.success) {
+      setNotification({ type: 'success', message: `Streaming stopped for ${cameraName}` })
+      // Refresh data to update camera status
+      fetchData(false)
+    } else {
+      setNotification({ type: 'error', message: `Failed to stop streaming: ${result.message}` })
+    }
+  } catch (error) {
+    const errorMessage = error instanceof Error ? error.message : 'Unknown error'
+    setNotification({ type: 'error', message: `Error stopping stream: ${errorMessage}` })
+  }
+}
 const getStatusColor = (status: string, isRecording: boolean = false) => {
   // If camera is recording, always show red regardless of status
   if (isRecording) {
@@ -797,6 +836,7 @@ export function VisionSystem() {
   onStartRecording={handleStartRecording}
   onStopRecording={handleStopRecording}
   onPreviewCamera={handlePreviewCamera}
+  onStopStreaming={handleStopStreaming}
 />
 )}


@@ -391,9 +391,11 @@ class VisionApiClient {
     try {
       const config = await this.request(`/cameras/${cameraName}/config`) as any
-      // Ensure auto-recording fields have default values if missing
+      // Map API field names to UI expected field names and ensure auto-recording fields have default values if missing
       return {
         ...config,
+        // Map auto_start_recording_enabled from API to auto_record_on_machine_start for UI
+        auto_record_on_machine_start: config.auto_start_recording_enabled ?? false,
         auto_start_recording_enabled: config.auto_start_recording_enabled ?? false,
         auto_recording_max_retries: config.auto_recording_max_retries ?? 3,
         auto_recording_retry_delay_seconds: config.auto_recording_retry_delay_seconds ?? 5
@@ -418,12 +420,14 @@ class VisionApiClient {
       const rawConfig = await response.json()
-      // Add missing auto-recording fields with defaults
+      // Add missing auto-recording fields with defaults and map field names
       return {
         ...rawConfig,
-        auto_start_recording_enabled: false,
-        auto_recording_max_retries: 3,
-        auto_recording_retry_delay_seconds: 5
+        // Map auto_start_recording_enabled from API to auto_record_on_machine_start for UI
+        auto_record_on_machine_start: rawConfig.auto_start_recording_enabled ?? false,
+        auto_start_recording_enabled: rawConfig.auto_start_recording_enabled ?? false,
+        auto_recording_max_retries: rawConfig.auto_recording_max_retries ?? 3,
+        auto_recording_retry_delay_seconds: rawConfig.auto_recording_retry_delay_seconds ?? 5
       }
     } catch (fallbackError) {
       throw new Error(`Failed to load camera configuration: ${error.message}`)
@@ -435,9 +439,19 @@ class VisionApiClient {
   }
   async updateCameraConfig(cameraName: string, config: CameraConfigUpdate): Promise<CameraConfigUpdateResponse> {
+    // Map UI field names to API field names
+    const apiConfig = { ...config }
+    // If auto_record_on_machine_start is present, map it to auto_start_recording_enabled for the API
+    if ('auto_record_on_machine_start' in config) {
+      apiConfig.auto_start_recording_enabled = config.auto_record_on_machine_start
+      // Remove the UI field name to avoid confusion
+      delete apiConfig.auto_record_on_machine_start
+    }
     return this.request(`/cameras/${cameraName}/config`, {
       method: 'PUT',
-      body: JSON.stringify(config),
+      body: JSON.stringify(apiConfig),
     })
   }

test-stop-streaming.html (new file)

@@ -0,0 +1,244 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Stop Streaming Test</title>
<style>
body {
font-family: Arial, sans-serif;
max-width: 800px;
margin: 0 auto;
padding: 20px;
}
.test-section {
margin: 20px 0;
padding: 15px;
border: 1px solid #ddd;
border-radius: 5px;
}
.success {
background-color: #d4edda;
border-color: #c3e6cb;
color: #155724;
}
.error {
background-color: #f8d7da;
border-color: #f5c6cb;
color: #721c24;
}
.loading {
background-color: #d1ecf1;
border-color: #bee5eb;
color: #0c5460;
}
button {
background-color: #007bff;
color: white;
border: none;
padding: 10px 20px;
border-radius: 5px;
cursor: pointer;
margin: 5px;
}
button:hover {
background-color: #0056b3;
}
button:disabled {
background-color: #6c757d;
cursor: not-allowed;
}
.stop-btn {
background-color: #dc3545;
}
.stop-btn:hover {
background-color: #c82333;
}
input, select {
padding: 8px;
margin: 5px;
border: 1px solid #ddd;
border-radius: 4px;
}
</style>
</head>
<body>
<h1>🛑 Stop Streaming API Test</h1>
<div class="test-section">
<h3>Test Stop Streaming Endpoint</h3>
<p>This test verifies that the stop streaming API endpoint works correctly.</p>
<div>
<label for="cameraSelect">Select Camera:</label>
<select id="cameraSelect">
<option value="">Loading cameras...</option>
</select>
<button onclick="testStopStreaming()" class="stop-btn">Stop Streaming</button>
</div>
<div id="test-results" class="test-section" style="display: none;"></div>
</div>
<div class="test-section">
<h3>Manual API Test</h3>
<p>Test the API endpoint directly:</p>
<div>
<input type="text" id="manualCamera" placeholder="Enter camera name (e.g., camera1)" value="camera1">
<button onclick="testManualStopStreaming()">Manual Stop Stream</button>
</div>
<div id="manual-results" class="test-section" style="display: none;"></div>
</div>
<script>
const API_BASE = 'http://localhost:8000';
let cameras = {};
// Load cameras on page load
window.onload = async function() {
await loadCameras();
};
async function loadCameras() {
try {
const response = await fetch(`${API_BASE}/cameras`);
if (!response.ok) {
throw new Error(`HTTP ${response.status}: ${response.statusText}`);
}
cameras = await response.json();
const select = document.getElementById('cameraSelect');
select.innerHTML = '<option value="">Select a camera...</option>';
Object.keys(cameras).forEach(cameraName => {
const option = document.createElement('option');
option.value = cameraName;
option.textContent = `${cameraName} (${cameras[cameraName].status})`;
select.appendChild(option);
});
} catch (error) {
console.error('Error loading cameras:', error);
const select = document.getElementById('cameraSelect');
select.innerHTML = '<option value="">Error loading cameras</option>';
}
}
async function testStopStreaming() {
const cameraName = document.getElementById('cameraSelect').value;
if (!cameraName) {
alert('Please select a camera first');
return;
}
const resultsDiv = document.getElementById('test-results');
resultsDiv.style.display = 'block';
resultsDiv.innerHTML = '<div class="loading">Testing stop streaming...</div>';
try {
const response = await fetch(`${API_BASE}/cameras/${cameraName}/stop-stream`, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
}
});
if (!response.ok) {
const errorText = await response.text();
throw new Error(`HTTP ${response.status}: ${response.statusText}\n${errorText}`);
}
const result = await response.json();
resultsDiv.innerHTML = `
<div class="success">
<h4>✅ Stop Streaming Success</h4>
<p>Camera: ${cameraName}</p>
<p>Success: ${result.success}</p>
<p>Message: ${result.message}</p>
<details>
<summary>Full Response</summary>
<pre>${JSON.stringify(result, null, 2)}</pre>
</details>
</div>
`;
} catch (error) {
resultsDiv.innerHTML = `
<div class="error">
<h4>❌ Stop Streaming Failed</h4>
<p>Camera: ${cameraName}</p>
<p>Error: ${error.message}</p>
<details>
<summary>Error Details</summary>
<pre>${error.stack || error.toString()}</pre>
</details>
</div>
`;
}
}
async function testManualStopStreaming() {
const cameraName = document.getElementById('manualCamera').value;
if (!cameraName) {
alert('Please enter a camera name');
return;
}
const resultsDiv = document.getElementById('manual-results');
resultsDiv.style.display = 'block';
resultsDiv.innerHTML = '<div class="loading">Testing manual stop streaming...</div>';
try {
const response = await fetch(`${API_BASE}/cameras/${cameraName}/stop-stream`, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
}
});
const result = await response.json();
resultsDiv.innerHTML = `
<div class="${response.ok ? 'success' : 'error'}">
<h4>${response.ok ? '✅' : '❌'} Manual Stop Streaming ${response.ok ? 'Success' : 'Failed'}</h4>
<p>Camera: ${cameraName}</p>
<p>HTTP Status: ${response.status} ${response.statusText}</p>
<p>Success: ${result.success}</p>
<p>Message: ${result.message}</p>
<details>
<summary>Full Response</summary>
<pre>${JSON.stringify(result, null, 2)}</pre>
</details>
</div>
`;
} catch (error) {
resultsDiv.innerHTML = `
<div class="error">
<h4>❌ Manual Stop Streaming Failed</h4>
<p>Camera: ${cameraName}</p>
<p>Error: ${error.message}</p>
<details>
<summary>Error Details</summary>
<pre>${error.stack || error.toString()}</pre>
</details>
</div>
`;
}
}
</script>
</body>
</html>