Massive update - API and other modules added
old tests/01README.md (new file, 146 lines)
@@ -0,0 +1,146 @@
# GigE Camera Image Capture

This project provides simple Python scripts to connect to a GigE camera and capture images using the provided SDK.

## Files Overview

### Demo Files (provided with camera)

- `python demo/mvsdk.py` - Main SDK wrapper library
- `python demo/grab.py` - Basic image capture example
- `python demo/cv_grab.py` - OpenCV-based continuous capture
- `python demo/cv_grab_callback.py` - Callback-based capture
- `python demo/readme.txt` - Original demo documentation

### Custom Scripts

- `camera_capture.py` - Standalone script to capture 10 images with 200ms intervals
- `test.ipynb` - Jupyter notebook with the same functionality
- `images/` - Directory where captured images are saved
## Features

- **Automatic camera detection** - Finds and connects to available GigE cameras
- **Configurable capture** - Currently set to capture 10 images with 200ms intervals
- **Both mono and color support** - Automatically detects camera type
- **Timestamped filenames** - Images saved with date/time stamps
- **Error handling** - Robust error handling for camera operations
- **Cross-platform** - Works on Windows and Linux (with appropriate image flipping)

## Requirements

- Python 3.x
- OpenCV (`cv2`)
- NumPy
- Matplotlib (for Jupyter notebook display)
- GigE camera SDK (MVSDK) - included in `python demo/` directory

## Usage

### Option 1: Standalone Script

Run the standalone Python script:

```bash
python camera_capture.py
```

This will:

1. Initialize the camera SDK
2. Detect available cameras
3. Connect to the first camera found
4. Configure camera settings (manual exposure, continuous mode)
5. Capture 10 images with 200ms intervals
6. Save images to the `images/` directory
7. Clean up and close the camera
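The steps above are a condensed version of what `camera_capture.py` does. A minimal sketch of that sequence (error handling, range clamping, and the Windows frame flip are omitted; see the script for the full version):

```python
import os, sys, time
import cv2
import numpy as np

sys.path.append("./python demo")
import mvsdk

mvsdk.CameraSdkInit(1)                                  # 1. initialize the SDK
dev = mvsdk.CameraEnumerateDevice()[0]                  # 2-3. first camera found
hCamera = mvsdk.CameraInit(dev, -1, -1)

cap = mvsdk.CameraGetCapability(hCamera)
mono = cap.sIspCapacity.bMonoSensor != 0
mvsdk.CameraSetIspOutFormat(hCamera, mvsdk.CAMERA_MEDIA_TYPE_MONO8 if mono else mvsdk.CAMERA_MEDIA_TYPE_BGR8)
mvsdk.CameraSetTriggerMode(hCamera, 0)                  # 4. continuous mode
mvsdk.CameraSetAeState(hCamera, 0)                      #    manual exposure
mvsdk.CameraSetExposureTime(hCamera, 30 * 1000)         #    30ms
mvsdk.CameraPlay(hCamera)

buf_size = cap.sResolutionRange.iWidthMax * cap.sResolutionRange.iHeightMax * (1 if mono else 3)
pFrameBuffer = mvsdk.CameraAlignMalloc(buf_size, 16)
os.makedirs("images", exist_ok=True)

for i in range(10):                                     # 5. ten images, 200ms apart
    pRawData, head = mvsdk.CameraGetImageBuffer(hCamera, 2000)
    mvsdk.CameraImageProcess(hCamera, pRawData, pFrameBuffer, head)
    mvsdk.CameraReleaseImageBuffer(hCamera, pRawData)
    frame = np.frombuffer((mvsdk.c_ubyte * head.uBytes).from_address(pFrameBuffer), dtype=np.uint8)
    frame = frame.reshape((head.iHeight, head.iWidth) if mono else (head.iHeight, head.iWidth, 3))
    cv2.imwrite(f"images/image_{i+1:02d}.jpg", frame)   # 6. save to images/
    time.sleep(0.2)

mvsdk.CameraAlignFree(pFrameBuffer)
mvsdk.CameraUnInit(hCamera)                             # 7. clean up
```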
### Option 2: Jupyter Notebook
|
||||
|
||||
Open and run the `test.ipynb` notebook:
|
||||
|
||||
```bash
|
||||
jupyter notebook test.ipynb
|
||||
```
|
||||
|
||||
The notebook provides the same functionality but with:
|
||||
- Step-by-step execution
|
||||
- Detailed explanations
|
||||
- Visual display of the last captured image
|
||||
- Better error reporting
|
||||
|
||||
## Camera Configuration
|
||||
|
||||
The scripts are configured with the following default settings:
|
||||
|
||||
- **Trigger Mode**: Continuous capture (mode 0)
|
||||
- **Exposure**: Manual, 30ms
|
||||
- **Output Format**:
|
||||
- Monochrome cameras: MONO8
|
||||
- Color cameras: BGR8
|
||||
- **Image Processing**: Automatic ISP processing from RAW to RGB/MONO
|
||||
|
||||
## Output

Images are saved in the `images/` directory with the following naming convention:

```
image_XX_YYYYMMDD_HHMMSS_mmm.jpg
```

Where:

- `XX` = Image number (01-10)
- `YYYYMMDD_HHMMSS_mmm` = Timestamp with milliseconds

Example: `image_01_20250722_140530_123.jpg`
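The timestamp is produced with `strftime`, truncating microseconds (`%f`) to milliseconds, which is exactly how `camera_capture.py` builds the filename:

```python
from datetime import datetime

i = 0  # first image in the capture loop
timestamp = datetime.now().strftime("%Y%m%d_%H%M%S_%f")[:-3]  # drop microseconds -> milliseconds
filename = f"images/image_{i+1:02d}_{timestamp}.jpg"
print(filename)  # e.g. images/image_01_20250722_140530_123.jpg
```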
## Troubleshooting
|
||||
|
||||
### Common Issues
|
||||
|
||||
1. **"No camera was found!"**
|
||||
- Check camera connection (Ethernet cable)
|
||||
- Verify camera power
|
||||
- Check network settings (camera and PC should be on same subnet)
|
||||
- Ensure camera drivers are installed
|
||||
|
||||
2. **"CameraInit Failed"**
|
||||
- Camera might be in use by another application
|
||||
- Check camera permissions
|
||||
- Try restarting the camera or PC
|
||||
|
||||
3. **"Failed to capture image"**
|
||||
- Check camera settings
|
||||
- Verify sufficient lighting
|
||||
- Check exposure settings
|
||||
|
||||
4. **Images appear upside down**
|
||||
- This is handled automatically on Windows
|
||||
- Linux users may need to adjust the flip settings
|
||||
|
||||
### Network Configuration
|
||||
|
||||
For GigE cameras, ensure:
|
||||
- Camera and PC are on the same network segment
|
||||
- PC network adapter supports Jumbo frames (recommended)
|
||||
- Firewall allows camera communication
|
||||
- Sufficient network bandwidth
|
||||
|
||||
## Customization
|
||||
|
||||
You can modify the scripts to:
|
||||
|
||||
- **Change capture count**: Modify the range in the capture loop
|
||||
- **Adjust timing**: Change the `time.sleep(0.2)` value
|
||||
- **Modify exposure**: Change the exposure time parameter
|
||||
- **Change output format**: Modify file format and quality settings
|
||||
- **Add image processing**: Insert processing steps before saving
|
||||
|
||||
## SDK Reference
|
||||
|
||||
The camera SDK (`mvsdk.py`) provides extensive functionality:
|
||||
|
||||
- Camera enumeration and initialization
|
||||
- Image capture and processing
|
||||
- Parameter configuration (exposure, gain, etc.)
|
||||
- Trigger modes and timing
|
||||
- Image format conversion
|
||||
- Error handling
|
||||
|
||||
Refer to the original SDK documentation for advanced features.
|
||||
old tests/IMPLEMENTATION_SUMMARY.md (new file, 184 lines)
@@ -0,0 +1,184 @@
|
||||
# USDA Vision Camera System - Implementation Summary
|
||||
|
||||
## 🎉 Project Completed Successfully!
|
||||
|
||||
The USDA Vision Camera System has been fully implemented and tested. All components are working correctly and the system is ready for deployment.
|
||||
|
||||
## ✅ What Was Built
|
||||
|
||||
### Core Architecture
|
||||
- **Modular Design**: Clean separation of concerns across multiple modules
|
||||
- **Multi-threading**: Concurrent MQTT listening, camera monitoring, and recording
|
||||
- **Event-driven**: Thread-safe communication between components
|
||||
- **Configuration-driven**: JSON-based configuration system
|
||||
|
||||
### Key Components
|
||||
|
||||
1. **MQTT Integration** (`usda_vision_system/mqtt/`)
|
||||
- Listens to two machine topics: `vision/vibratory_conveyor/state` and `vision/blower_separator/state`
|
||||
- Thread-safe message handling with automatic reconnection
|
||||
- State normalization (on/off/error)
|
||||
|
||||
2. **Camera Management** (`usda_vision_system/camera/`)
|
||||
- Automatic GigE camera discovery using the provided SDK (`mvsdk`, in the `python demo/` directory)
|
||||
- Periodic status monitoring (every 2 seconds)
|
||||
- Camera initialization and configuration management
|
||||
- **Discovered Cameras**:
|
||||
- Blower-Yield-Cam (192.168.1.165)
|
||||
- Cracker-Cam (192.168.1.167)
|
||||
|
||||
3. **Video Recording** (`usda_vision_system/camera/recorder.py`)
|
||||
- Automatic recording start/stop based on machine states
|
||||
- Timestamp-based file naming: `camera1_recording_20250726_143022.avi`
|
||||
- Configurable FPS, exposure, and gain settings
|
||||
- Thread-safe recording with proper cleanup
|
||||
|
||||
4. **Storage Management** (`usda_vision_system/storage/`)
|
||||
- Organized file storage under `./storage/camera1/` and `./storage/camera2/`
|
||||
- File indexing and metadata tracking
|
||||
- Automatic cleanup of old files
|
||||
- Storage statistics and integrity checking
|
||||
|
||||
5. **REST API Server** (`usda_vision_system/api/`)
|
||||
- FastAPI server on port 8000
|
||||
- Real-time WebSocket updates
|
||||
- Manual recording control endpoints
|
||||
- System status and monitoring endpoints
|
||||
|
||||
6. **Comprehensive Logging** (`usda_vision_system/core/logging_config.py`)
|
||||
- Colored console output
|
||||
- Rotating log files
|
||||
- Component-specific log levels
|
||||
- Performance monitoring and error tracking
|
||||
|
||||
## 🚀 How to Use
|
||||
|
||||
### Quick Start
|
||||
```bash
|
||||
# Run system tests
|
||||
python test_system.py
|
||||
|
||||
# Start the system
|
||||
python main.py
|
||||
|
||||
# Or use the startup script
|
||||
./start_system.sh
|
||||
```
|
||||
|
||||
### Configuration
|
||||
Edit `config.json` to customize:
|
||||
- MQTT broker settings
|
||||
- Camera configurations
|
||||
- Storage paths
|
||||
- System parameters
|
||||
|
||||
### API Access
|
||||
- System status: `http://localhost:8000/system/status`
|
||||
- Camera status: `http://localhost:8000/cameras`
|
||||
- Manual recording: `POST http://localhost:8000/cameras/camera1/start-recording`
|
||||
- Real-time updates: WebSocket at `ws://localhost:8000/ws`
|
||||
|
||||
## 📊 Test Results
|
||||
|
||||
All system tests passed successfully:
|
||||
- ✅ Module imports
|
||||
- ✅ Configuration loading
|
||||
- ✅ Camera discovery (found 2 cameras)
|
||||
- ✅ Storage setup
|
||||
- ✅ MQTT configuration
|
||||
- ✅ System initialization
|
||||
- ✅ API endpoints
|
||||
|
||||
## 🔧 System Behavior
|
||||
|
||||
### Automatic Recording Flow

1. **Machine turns ON** → MQTT message received → Recording starts automatically
2. **Machine turns OFF** → MQTT message received → Recording stops and saves file
3. **Files saved** with timestamp: `camera1_recording_YYYYMMDD_HHMMSS.avi`
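A minimal sketch of this flow, assuming the `paho-mqtt` package; `start_recording`/`stop_recording` stand in for the camera manager calls, and the topic→camera mapping follows `config.json` (camera2 for the blower separator is assumed):

```python
import paho.mqtt.client as mqtt

TOPIC_TO_CAMERA = {
    "vision/vibratory_conveyor/state": "camera1",
    "vision/blower_separator/state": "camera2",   # assumed mapping
}

def on_message(client, userdata, msg):
    camera = TOPIC_TO_CAMERA[msg.topic]
    state = msg.payload.decode().strip().lower()  # normalized to on/off/error
    if state == "on":
        start_recording(camera)   # placeholder for the camera manager call
    elif state == "off":
        stop_recording(camera)    # placeholder for the camera manager call

client = mqtt.Client()
client.on_message = on_message
client.connect("192.168.1.110", 1883)
for topic in TOPIC_TO_CAMERA:
    client.subscribe(topic)
client.loop_forever()
```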
### Manual Control
|
||||
- Start/stop recording via API calls
|
||||
- Monitor system status in real-time
|
||||
- Check camera availability on demand
|
||||
|
||||
### Dashboard Integration
|
||||
The system is designed to integrate with your React + Vite + Tailwind + Supabase dashboard:
|
||||
- REST API for status queries
|
||||
- WebSocket for real-time updates
|
||||
- JSON responses for easy frontend consumption
|
||||
|
||||
## 📁 Project Structure
|
||||
|
||||
```
|
||||
usda_vision_system/
|
||||
├── core/ # Configuration, state management, events, logging
|
||||
├── mqtt/ # MQTT client and message handlers
|
||||
├── camera/ # Camera management, monitoring, recording
|
||||
├── storage/ # File organization and management
|
||||
├── api/ # FastAPI server and WebSocket support
|
||||
└── main.py # Application coordinator
|
||||
|
||||
Supporting Files:
|
||||
├── main.py # Entry point script
|
||||
├── config.json # System configuration
|
||||
├── test_system.py # Test suite
|
||||
├── start_system.sh # Startup script
|
||||
└── README_SYSTEM.md # Comprehensive documentation
|
||||
```
|
||||
|
||||
## 🎯 Key Features Delivered
|
||||
|
||||
- ✅ **Dual MQTT topic listening** for two machines
|
||||
- ✅ **Automatic camera recording** triggered by machine states
|
||||
- ✅ **GigE camera support** using python demo library
|
||||
- ✅ **Thread-safe multi-tasking** (MQTT + camera monitoring + recording)
|
||||
- ✅ **Timestamp-based file naming** in organized directories
|
||||
- ✅ **2-second camera status monitoring** with on-demand checks
|
||||
- ✅ **REST API and WebSocket** for dashboard integration
|
||||
- ✅ **Comprehensive logging** with error tracking
|
||||
- ✅ **Configuration management** via JSON
|
||||
- ✅ **Storage management** with cleanup capabilities
|
||||
- ✅ **Graceful startup/shutdown** with signal handling
|
||||
|
||||
## 🔮 Ready for Dashboard Integration
|
||||
|
||||
The system provides everything needed for your React dashboard:
|
||||
|
||||
```javascript
|
||||
// Example API usage
|
||||
const systemStatus = await fetch('http://localhost:8000/system/status');
|
||||
const cameras = await fetch('http://localhost:8000/cameras');
|
||||
|
||||
// WebSocket for real-time updates
|
||||
const ws = new WebSocket('ws://localhost:8000/ws');
|
||||
ws.onmessage = (event) => {
|
||||
const update = JSON.parse(event.data);
|
||||
// Handle real-time system updates
|
||||
};
|
||||
|
||||
// Manual recording control
|
||||
await fetch('http://localhost:8000/cameras/camera1/start-recording', {
|
||||
method: 'POST',
|
||||
headers: { 'Content-Type': 'application/json' },
|
||||
body: JSON.stringify({ camera_name: 'camera1' })
|
||||
});
|
||||
```
|
||||
|
||||
## 🎊 Next Steps
|
||||
|
||||
The system is production-ready! You can now:
|
||||
|
||||
1. **Deploy** the system on your target hardware
|
||||
2. **Integrate** with your existing React dashboard
|
||||
3. **Configure** MQTT topics and camera settings as needed
|
||||
4. **Monitor** system performance through logs and API endpoints
|
||||
5. **Extend** functionality as requirements evolve
|
||||
|
||||
The modular architecture makes it easy to add new features, cameras, or MQTT topics in the future.
|
||||
|
||||
---
|
||||
|
||||
**System Status**: ✅ **FULLY OPERATIONAL**
|
||||
**Test Results**: ✅ **ALL TESTS PASSING**
|
||||
**Cameras Detected**: ✅ **2 GIGE CAMERAS READY**
|
||||
**Ready for Production**: ✅ **YES**
|
||||
old tests/README.md (new file, 1 line)
@@ -0,0 +1 @@
|
||||
# USDA-Vision-Cameras
|
||||
old tests/README_SYSTEM.md (new file, 249 lines)
@@ -0,0 +1,249 @@
|
||||
# USDA Vision Camera System
|
||||
|
||||
A comprehensive system for monitoring machines via MQTT and automatically recording video from GigE cameras when machines are active.
|
||||
|
||||
## Overview
|
||||
|
||||
This system integrates MQTT machine monitoring with automated video recording from GigE cameras. When a machine turns on (detected via MQTT), the system automatically starts recording from the associated camera. When the machine turns off, recording stops and the video is saved with a timestamp.
|
||||
|
||||
## Features
|
||||
|
||||
- **MQTT Integration**: Listens to multiple machine state topics
|
||||
- **Automatic Recording**: Starts/stops recording based on machine states
|
||||
- **GigE Camera Support**: Uses the provided SDK (`mvsdk`, in the `python demo/` directory) for camera control
|
||||
- **Multi-threading**: Concurrent MQTT listening, camera monitoring, and recording
|
||||
- **REST API**: FastAPI server for dashboard integration
|
||||
- **WebSocket Support**: Real-time status updates
|
||||
- **Storage Management**: Organized file storage with cleanup capabilities
|
||||
- **Comprehensive Logging**: Detailed logging with rotation and error tracking
|
||||
- **Configuration Management**: JSON-based configuration system
|
||||
|
||||
## Architecture
|
||||
|
||||
```
|
||||
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
|
||||
│ MQTT Broker │ │ GigE Camera │ │ Dashboard │
|
||||
│ │ │ │ │ (React) │
|
||||
└─────────┬───────┘ └─────────┬───────┘ └─────────┬───────┘
|
||||
│ │ │
|
||||
│ Machine States │ Video Streams │ API Calls
|
||||
│ │ │
|
||||
┌─────────▼──────────────────────▼──────────────────────▼───────┐
|
||||
│ USDA Vision Camera System │
|
||||
├───────────────────────────────────────────────────────────────┤
|
||||
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
|
||||
│ │ MQTT Client │ │ Camera │ │ API Server │ │
|
||||
│ │ │ │ Manager │ │ │ │
|
||||
│ └─────────────┘ └─────────────┘ └─────────────┘ │
|
||||
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
|
||||
│ │ State │ │ Storage │ │ Event │ │
|
||||
│ │ Manager │ │ Manager │ │ System │ │
|
||||
│ └─────────────┘ └─────────────┘ └─────────────┘ │
|
||||
└───────────────────────────────────────────────────────────────┘
|
||||
```
|
||||
|
||||
## Installation
|
||||
|
||||
1. **Prerequisites**:
|
||||
- Python 3.11+
|
||||
- GigE cameras with the provided SDK (the `python demo/` directory)
|
||||
- MQTT broker (e.g., Mosquitto)
|
||||
- uv package manager (recommended)
|
||||
|
||||
2. **Install Dependencies**:
|
||||
```bash
|
||||
uv sync
|
||||
```
|
||||
|
||||
3. **Setup Storage Directory**:
|
||||
```bash
|
||||
sudo mkdir -p /storage
|
||||
sudo chown $USER:$USER /storage
|
||||
```
|
||||
|
||||
## Configuration
|
||||
|
||||
Edit `config.json` to configure your system:
|
||||
|
||||
```json
|
||||
{
|
||||
"mqtt": {
|
||||
"broker_host": "192.168.1.110",
|
||||
"broker_port": 1883,
|
||||
"topics": {
|
||||
"vibratory_conveyor": "vision/vibratory_conveyor/state",
|
||||
"blower_separator": "vision/blower_separator/state"
|
||||
}
|
||||
},
|
||||
"cameras": [
|
||||
{
|
||||
"name": "camera1",
|
||||
"machine_topic": "vibratory_conveyor",
|
||||
"storage_path": "/storage/camera1",
|
||||
"exposure_ms": 1.0,
|
||||
"gain": 3.5,
|
||||
"target_fps": 3.0,
|
||||
"enabled": true
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
## Usage
|
||||
|
||||
### Basic Usage
|
||||
|
||||
1. **Start the System**:
|
||||
```bash
|
||||
python main.py
|
||||
```
|
||||
|
||||
2. **With Custom Config**:
|
||||
```bash
|
||||
python main.py --config my_config.json
|
||||
```
|
||||
|
||||
3. **Debug Mode**:
|
||||
```bash
|
||||
python main.py --log-level DEBUG
|
||||
```
|
||||
|
||||
### API Endpoints

The system provides a REST API on port 8000:

- `GET /system/status` - Overall system status
- `GET /cameras` - All camera statuses
- `GET /machines` - All machine states
- `POST /cameras/{name}/start-recording` - Manual recording start
- `POST /cameras/{name}/stop-recording` - Manual recording stop
- `GET /storage/stats` - Storage statistics
- `WebSocket /ws` - Real-time updates
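For example, from Python (assuming the `requests` package; payloads mirror the curl example under Debug Commands below):

```python
import requests

BASE = "http://localhost:8000"

status = requests.get(f"{BASE}/system/status").json()
cameras = requests.get(f"{BASE}/cameras").json()

# Manual recording control for camera1
requests.post(f"{BASE}/cameras/camera1/start-recording",
              json={"camera_name": "camera1"})
requests.post(f"{BASE}/cameras/camera1/stop-recording",
              json={"camera_name": "camera1"})  # body assumed to match start-recording
```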
### Dashboard Integration

The system is designed to integrate with your existing React + Vite + Tailwind + Supabase dashboard:

1. **API Integration**: Use the REST endpoints to display system status
2. **WebSocket**: Connect to `/ws` for real-time updates
3. **Supabase Storage**: Store recording metadata and system logs
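A minimal Python listener for the real-time feed, assuming the `websockets` package (a browser dashboard would use the WebSocket API directly, as in the JavaScript example in `IMPLEMENTATION_SUMMARY.md`):

```python
import asyncio
import json
import websockets

async def listen():
    async with websockets.connect("ws://localhost:8000/ws") as ws:
        async for message in ws:
            update = json.loads(message)
            print("system update:", update)  # forward to the dashboard or Supabase as needed

asyncio.run(listen())
```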
## File Organization
|
||||
|
||||
```
|
||||
/storage/
|
||||
├── camera1/
|
||||
│ ├── camera1_recording_20250726_143022.avi
|
||||
│ └── camera1_recording_20250726_143155.avi
|
||||
├── camera2/
|
||||
│ ├── camera2_recording_20250726_143025.avi
|
||||
│ └── camera2_recording_20250726_143158.avi
|
||||
└── file_index.json
|
||||
```
|
||||
|
||||
## Monitoring and Logging
|
||||
|
||||
### Log Files
|
||||
|
||||
- `usda_vision_system.log` - Main system log (rotated)
|
||||
- Console output with colored formatting
|
||||
- Component-specific log levels
|
||||
|
||||
### Performance Monitoring
|
||||
|
||||
The system includes built-in performance monitoring:
|
||||
- Startup times
|
||||
- Recording session metrics
|
||||
- MQTT message processing rates
|
||||
- Camera status check intervals
|
||||
|
||||
### Error Tracking
|
||||
|
||||
Comprehensive error tracking with:
|
||||
- Error counts per component
|
||||
- Detailed error context
|
||||
- Automatic recovery attempts
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
### Common Issues
|
||||
|
||||
1. **Camera Not Found**:
|
||||
- Check camera connections
|
||||
- Verify python demo library installation
|
||||
- Run camera discovery: Check logs for enumeration results
|
||||
|
||||
2. **MQTT Connection Failed**:
|
||||
- Verify broker IP and port
|
||||
- Check network connectivity
|
||||
- Verify credentials if authentication is enabled
|
||||
|
||||
3. **Recording Fails**:
|
||||
- Check storage permissions
|
||||
- Verify available disk space
|
||||
- Check camera initialization logs
|
||||
|
||||
4. **API Server Won't Start**:
|
||||
- Check if port 8000 is available
|
||||
- Verify FastAPI dependencies
|
||||
- Check firewall settings
|
||||
|
||||
### Debug Commands
|
||||
|
||||
```bash
|
||||
# Check system status
|
||||
curl http://localhost:8000/system/status
|
||||
|
||||
# Check camera status
|
||||
curl http://localhost:8000/cameras
|
||||
|
||||
# Manual recording start
|
||||
curl -X POST http://localhost:8000/cameras/camera1/start-recording \
|
||||
-H "Content-Type: application/json" \
|
||||
-d '{"camera_name": "camera1"}'
|
||||
```
|
||||
|
||||
## Development
|
||||
|
||||
### Project Structure
|
||||
|
||||
```
|
||||
usda_vision_system/
|
||||
├── core/ # Core functionality
|
||||
├── mqtt/ # MQTT client and handlers
|
||||
├── camera/ # Camera management and recording
|
||||
├── storage/ # File management
|
||||
├── api/ # FastAPI server
|
||||
└── main.py # Application coordinator
|
||||
```
|
||||
|
||||
### Adding New Features
|
||||
|
||||
1. **New Camera Type**: Extend `camera/recorder.py`
|
||||
2. **New MQTT Topics**: Update `config.json` and `mqtt/handlers.py`
|
||||
3. **New API Endpoints**: Add to `api/server.py`
|
||||
4. **New Events**: Define in `core/events.py`
|
||||
|
||||
### Testing
|
||||
|
||||
```bash
|
||||
# Run basic system test
|
||||
python -c "from usda_vision_system import USDAVisionSystem; s = USDAVisionSystem(); print('OK')"
|
||||
|
||||
# Test MQTT connection
|
||||
python -c "from usda_vision_system.mqtt.client import MQTTClient; # ... test code"
|
||||
|
||||
# Test camera discovery
|
||||
python -c "import sys; sys.path.append('python demo'); import mvsdk; print(len(mvsdk.CameraEnumerateDevice()))"
|
||||
```
|
||||
|
||||
## License
|
||||
|
||||
This project is developed for USDA research purposes.
|
||||
|
||||
## Support
|
||||
|
||||
For issues and questions:
|
||||
1. Check the logs in `usda_vision_system.log`
|
||||
2. Review the troubleshooting section
|
||||
3. Check API status at `http://localhost:8000/health`
|
||||
old tests/TIMEZONE_SETUP_SUMMARY.md (new file, 190 lines)
@@ -0,0 +1,190 @@
|
||||
# Time Synchronization Setup - Atlanta, Georgia
|
||||
|
||||
## ✅ Time Synchronization Complete!
|
||||
|
||||
The USDA Vision Camera System has been configured for proper time synchronization with Atlanta, Georgia (Eastern Time Zone).
|
||||
|
||||
## 🕐 What Was Implemented
|
||||
|
||||
### System-Level Time Configuration
|
||||
- **Timezone**: Set to `America/New_York` (Eastern Time)
|
||||
- **Current Status**: Eastern Daylight Time (EDT, UTC-4)
|
||||
- **NTP Sync**: Configured with multiple reliable time servers
|
||||
- **Hardware Clock**: Synchronized with system time
|
||||
|
||||
### Application-Level Timezone Support
|
||||
- **Timezone-Aware Timestamps**: All recordings use Atlanta time
|
||||
- **Automatic DST Handling**: Switches between EST/EDT automatically
|
||||
- **Time Sync Monitoring**: Built-in time synchronization checking
|
||||
- **Consistent Formatting**: Standardized timestamp formats throughout
|
||||
|
||||
## 🔧 Key Features
|
||||
|
||||
### 1. Automatic Time Synchronization
|
||||
```bash
|
||||
# NTP servers configured:
|
||||
- time.nist.gov (NIST atomic clock)
|
||||
- pool.ntp.org (NTP pool)
|
||||
- time.google.com (Google time)
|
||||
- time.cloudflare.com (Cloudflare time)
|
||||
```
|
||||
|
||||
### 2. Timezone-Aware Recording Filenames

```
Example: camera1_recording_20250725_213241.avi
Format: {camera}_{type}_{YYYYMMDD_HHMMSS}.avi
Time: Atlanta local time (EDT/EST)
```
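A minimal sketch of how such a timestamp can be generated with the standard-library `zoneinfo` module (Python 3.9+); the helper inside the system may differ:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

ATLANTA = ZoneInfo("America/New_York")  # handles EST/EDT automatically

def recording_filename(camera: str) -> str:
    stamp = datetime.now(ATLANTA).strftime("%Y%m%d_%H%M%S")
    return f"{camera}_recording_{stamp}.avi"

print(recording_filename("camera1"))  # e.g. camera1_recording_20250725_213241.avi
```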
### 3. Time Verification Tools
|
||||
- **Startup Check**: Automatic time sync verification on system start
|
||||
- **Manual Check**: `python check_time.py` for on-demand verification
|
||||
- **API Integration**: Time sync status available via REST API
|
||||
|
||||
### 4. Comprehensive Logging
|
||||
```
|
||||
=== TIME SYNCHRONIZATION STATUS ===
|
||||
System time: 2025-07-25 21:32:41 EDT
|
||||
Timezone: EDT (-0400)
|
||||
Daylight Saving: Yes
|
||||
Sync status: synchronized
|
||||
Time difference: 0.10 seconds
|
||||
=====================================
|
||||
```
|
||||
|
||||
## 🚀 Usage
|
||||
|
||||
### Automatic Operation
|
||||
The system automatically:
|
||||
- Uses Atlanta time for all timestamps
|
||||
- Handles daylight saving time transitions
|
||||
- Monitors time synchronization status
|
||||
- Logs time-related events
|
||||
|
||||
### Manual Verification
|
||||
```bash
|
||||
# Check time synchronization
|
||||
python check_time.py
|
||||
|
||||
# Test timezone functions
|
||||
python test_timezone.py
|
||||
|
||||
# View system time status
|
||||
timedatectl status
|
||||
```
|
||||
|
||||
### API Endpoints
|
||||
```bash
|
||||
# System status includes time info
|
||||
curl http://localhost:8000/system/status
|
||||
|
||||
# Example response includes:
|
||||
{
|
||||
"system_started": true,
|
||||
"uptime_seconds": 3600,
|
||||
"timestamp": "2025-07-25T21:32:41-04:00"
|
||||
}
|
||||
```
|
||||
|
||||
## 📊 Current Status
|
||||
|
||||
### Time Synchronization
|
||||
- ✅ **System Timezone**: America/New_York (EDT)
|
||||
- ✅ **NTP Sync**: Active and synchronized
|
||||
- ✅ **Time Accuracy**: Within 0.1 seconds of atomic time
|
||||
- ✅ **DST Support**: Automatic EST/EDT switching
|
||||
|
||||
### Application Integration
|
||||
- ✅ **Recording Timestamps**: Atlanta time zone
|
||||
- ✅ **Log Timestamps**: Timezone-aware logging
|
||||
- ✅ **API Responses**: ISO format with timezone
|
||||
- ✅ **File Naming**: Consistent Atlanta time format
|
||||
|
||||
### Monitoring
|
||||
- ✅ **Startup Verification**: Time sync checked on boot
|
||||
- ✅ **Continuous Monitoring**: Built-in sync status tracking
|
||||
- ✅ **Error Detection**: Alerts for time drift issues
|
||||
- ✅ **Manual Tools**: On-demand verification scripts
|
||||
|
||||
## 🔍 Technical Details
|
||||
|
||||
### Timezone Configuration
|
||||
```json
|
||||
{
|
||||
"system": {
|
||||
"timezone": "America/New_York"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### Time Sources
|
||||
1. **Primary**: NIST atomic clock (time.nist.gov)
|
||||
2. **Secondary**: NTP pool servers (pool.ntp.org)
|
||||
3. **Backup**: Google/Cloudflare time servers
|
||||
4. **Fallback**: Local system clock
|
||||
|
||||
### File Naming Convention
|
||||
```
|
||||
Pattern: {camera_name}_recording_{YYYYMMDD_HHMMSS}.avi
|
||||
Example: camera1_recording_20250725_213241.avi
|
||||
Timezone: Always Atlanta local time (EST/EDT)
|
||||
```
|
||||
|
||||
## 🎯 Benefits
|
||||
|
||||
### For Operations
|
||||
- **Consistent Timestamps**: All recordings use Atlanta time
|
||||
- **Easy Correlation**: Timestamps match local business hours
|
||||
- **Automatic DST**: No manual timezone adjustments needed
|
||||
- **Reliable Sync**: Multiple time sources ensure accuracy
|
||||
|
||||
### For Analysis
|
||||
- **Local Time Context**: Recordings timestamped in business timezone
|
||||
- **Accurate Sequencing**: Precise timing for event correlation
|
||||
- **Standard Format**: Consistent naming across all recordings
|
||||
- **Audit Trail**: Complete time synchronization logging
|
||||
|
||||
### For Integration
|
||||
- **Dashboard Ready**: Timezone-aware API responses
|
||||
- **Database Compatible**: ISO format timestamps with timezone
|
||||
- **Log Analysis**: Structured time information in logs
|
||||
- **Monitoring**: Built-in time sync health checks
|
||||
|
||||
## 🔧 Maintenance
|
||||
|
||||
### Regular Checks
|
||||
The system automatically:
|
||||
- Verifies time sync on startup
|
||||
- Logs time synchronization status
|
||||
- Monitors for time drift
|
||||
- Alerts on sync failures
|
||||
|
||||
### Manual Maintenance
|
||||
```bash
|
||||
# Force time sync
|
||||
sudo systemctl restart systemd-timesyncd
|
||||
|
||||
# Check NTP status
|
||||
timedatectl show-timesync --all
|
||||
|
||||
# Verify timezone
|
||||
timedatectl status
|
||||
```
|
||||
|
||||
## 📈 Next Steps
|
||||
|
||||
The time synchronization is now fully operational. The system will:
|
||||
|
||||
1. **Automatically maintain** accurate Atlanta time
|
||||
2. **Generate timestamped recordings** with local time
|
||||
3. **Monitor sync status** and alert on issues
|
||||
4. **Provide timezone-aware** API responses for dashboard integration
|
||||
|
||||
All recording files will now have accurate Atlanta timestamps, making it easy to correlate with local business operations and machine schedules.
|
||||
|
||||
---
|
||||
|
||||
**Time Sync Status**: ✅ **SYNCHRONIZED**
|
||||
**Timezone**: ✅ **America/New_York (EDT)**
|
||||
**Accuracy**: ✅ **±0.1 seconds**
|
||||
**Ready for Production**: ✅ **YES**
|
||||
old tests/VIDEO_RECORDER_README.md (new file, 191 lines)
@@ -0,0 +1,191 @@
|
||||
# Camera Video Recorder
|
||||
|
||||
A Python script for recording videos from GigE cameras using the provided SDK with custom exposure and gain settings.
|
||||
|
||||
## Features
|
||||
|
||||
- **List all available cameras** - Automatically detects and displays all connected cameras
|
||||
- **Custom camera settings** - Set exposure time to 1ms and gain to 3.5x (or custom values)
|
||||
- **Video recording** - Record videos in AVI format with timestamp filenames
|
||||
- **Live preview** - Test camera functionality with live preview mode
|
||||
- **Interactive menu** - User-friendly menu system for all operations
|
||||
- **Automatic cleanup** - Proper resource management and cleanup
|
||||
|
||||
## Requirements
|
||||
|
||||
- Python 3.x
|
||||
- OpenCV (`cv2`)
|
||||
- NumPy
|
||||
- Camera SDK (mvsdk) - included in `python demo` directory
|
||||
- GigE camera connected to the system
|
||||
|
||||
## Installation
|
||||
|
||||
1. Ensure your GigE camera is connected and properly configured
|
||||
2. Make sure the `python demo` directory with `mvsdk.py` is present
|
||||
3. Install required Python packages:
|
||||
```bash
|
||||
pip install opencv-python numpy
|
||||
```
|
||||
|
||||
## Usage
|
||||
|
||||
### Basic Usage
|
||||
|
||||
Run the script:
|
||||
```bash
|
||||
python camera_video_recorder.py
|
||||
```
|
||||
|
||||
The script will:
|
||||
1. Display a welcome message and feature overview
|
||||
2. List all available cameras
|
||||
3. Let you select a camera (if multiple are available)
|
||||
4. Allow you to set custom exposure and gain values
|
||||
5. Present an interactive menu with options
|
||||
|
||||
### Menu Options
|
||||
|
||||
1. **Start Recording** - Begin video recording with timestamp filename
|
||||
2. **List Camera Info** - Display detailed camera information
|
||||
3. **Test Camera (Live Preview)** - View live camera feed without recording
|
||||
4. **Exit** - Clean up and exit the program
|
||||
|
||||
### Default Settings

- **Exposure Time**: 1.0ms (1000 microseconds)
- **Gain**: 3.5x
- **Video Format**: AVI with XVID codec
- **Frame Rate**: 30 FPS
- **Output Directory**: `videos/` (created automatically)
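These defaults map onto OpenCV's `VideoWriter` roughly as follows (a sketch, not the script's exact code; the frame size comes from the camera):

```python
import cv2

fourcc = cv2.VideoWriter_fourcc(*"XVID")   # AVI with XVID codec
writer = cv2.VideoWriter("videos/camera_recording_20241223_143022.avi",
                         fourcc, 30.0, (1920, 1080))  # 30 FPS

# writer.write(frame) for each captured frame, then:
writer.release()
```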
### Recording Controls
|
||||
|
||||
- **Start Recording**: Select option 1 from the menu
|
||||
- **Stop Recording**: Press 'q' in the preview window
|
||||
- **Video Files**: Saved as `videos/camera_recording_YYYYMMDD_HHMMSS.avi`
|
||||
|
||||
## File Structure
|
||||
|
||||
```
|
||||
camera_video_recorder.py # Main script
|
||||
python demo/
|
||||
mvsdk.py # Camera SDK wrapper
|
||||
(other demo files)
|
||||
videos/ # Output directory (created automatically)
|
||||
camera_recording_*.avi # Recorded video files
|
||||
```
|
||||
|
||||
## Script Features
|
||||
|
||||
### CameraVideoRecorder Class
|
||||
|
||||
- `list_cameras()` - Enumerate and display available cameras
|
||||
- `initialize_camera()` - Set up camera with custom exposure and gain
|
||||
- `start_recording()` - Initialize video writer and begin recording
|
||||
- `stop_recording()` - Stop recording and save video file
|
||||
- `record_loop()` - Main recording loop with live preview
|
||||
- `cleanup()` - Proper resource cleanup
|
||||
|
||||
### Key Functions
|
||||
|
||||
- **Camera Detection**: Automatically finds all connected GigE cameras
|
||||
- **Settings Validation**: Checks and clamps exposure/gain values to camera limits
|
||||
- **Frame Processing**: Handles both monochrome and color cameras
|
||||
- **Windows Compatibility**: Handles frame flipping for Windows systems
|
||||
- **Error Handling**: Comprehensive error handling and user feedback
|
||||
|
||||
## Example Output
|
||||
|
||||
```
|
||||
Camera Video Recorder
|
||||
====================
|
||||
This script allows you to:
|
||||
- List all available cameras
|
||||
- Record videos with custom exposure (1ms) and gain (3.5x) settings
|
||||
- Save videos with timestamps
|
||||
- Stop recording anytime with 'q' key
|
||||
|
||||
Found 1 camera(s):
|
||||
0: GigE Camera Model (GigE) - SN: 12345678
|
||||
|
||||
Using camera: GigE Camera Model
|
||||
|
||||
Camera Settings:
|
||||
Enter exposure time in ms (default 1.0): 1.0
|
||||
Enter gain value (default 3.5): 3.5
|
||||
|
||||
Initializing camera with:
|
||||
- Exposure: 1.0ms
|
||||
- Gain: 3.5x
|
||||
|
||||
Camera type: Color
|
||||
Set exposure time: 1000.0μs
|
||||
Set analog gain: 3.50x (range: 1.00 - 16.00)
|
||||
Camera started successfully
|
||||
|
||||
==================================================
|
||||
Camera Video Recorder Menu
|
||||
==================================================
|
||||
1. Start Recording
|
||||
2. List Camera Info
|
||||
3. Test Camera (Live Preview)
|
||||
4. Exit
|
||||
|
||||
Select option (1-4): 1
|
||||
|
||||
Started recording to: videos/camera_recording_20241223_143022.avi
|
||||
Frame size: (1920, 1080), FPS: 30.0
|
||||
Press 'q' to stop recording...
|
||||
Recording... Press 'q' in the preview window to stop
|
||||
|
||||
Recording stopped!
|
||||
Saved: videos/camera_recording_20241223_143022.avi
|
||||
Frames recorded: 450
|
||||
Duration: 15.2 seconds
|
||||
Average FPS: 29.6
|
||||
```
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
### Common Issues
|
||||
|
||||
1. **"No cameras found!"**
|
||||
- Check camera connection
|
||||
- Verify camera power
|
||||
- Ensure network configuration for GigE cameras
|
||||
|
||||
2. **"SDK initialization failed"**
|
||||
- Verify `python demo/mvsdk.py` exists
|
||||
- Check camera drivers are installed
|
||||
|
||||
3. **"Camera initialization failed"**
|
||||
- Camera may be in use by another application
|
||||
- Try disconnecting and reconnecting the camera
|
||||
|
||||
4. **Recording issues**
|
||||
- Ensure sufficient disk space
|
||||
- Check write permissions in the output directory
|
||||
|
||||
### Performance Tips
|
||||
|
||||
- Close other applications using the camera
|
||||
- Ensure adequate system resources (CPU, RAM)
|
||||
- Use SSD storage for better write performance
|
||||
- Adjust frame rate if experiencing dropped frames
|
||||
|
||||
## Customization
|
||||
|
||||
You can modify the script to:
|
||||
- Change video codec (currently XVID)
|
||||
- Adjust target frame rate
|
||||
- Modify output filename format
|
||||
- Add additional camera settings
|
||||
- Change preview window size
|
||||
|
||||
## Notes
|
||||
|
||||
- Videos are saved in the `videos/` directory with timestamp filenames
|
||||
- The script handles both monochrome and color cameras automatically
|
||||
- Frame flipping is handled automatically for Windows systems
|
||||
- All resources are properly cleaned up on exit
|
||||
old tests/camera_capture.py (new file, 291 lines)
@@ -0,0 +1,291 @@
|
||||
# coding=utf-8
|
||||
"""
|
||||
Simple GigE Camera Capture Script
|
||||
Captures 10 images every 200 milliseconds and saves them to the images directory.
|
||||
"""
|
||||
|
||||
import os
|
||||
import time
|
||||
import numpy as np
|
||||
import cv2
|
||||
import platform
|
||||
from datetime import datetime
|
||||
import sys
|
||||
|
||||
sys.path.append("./python demo")
|
||||
import mvsdk
|
||||
|
||||
|
||||
def is_camera_ready_for_capture():
|
||||
"""
|
||||
Check if camera is ready for capture.
|
||||
Returns: (ready: bool, message: str, camera_info: object or None)
|
||||
"""
|
||||
try:
|
||||
# Initialize SDK
|
||||
mvsdk.CameraSdkInit(1)
|
||||
|
||||
# Enumerate cameras
|
||||
DevList = mvsdk.CameraEnumerateDevice()
|
||||
if len(DevList) < 1:
|
||||
return False, "No cameras found", None
|
||||
|
||||
DevInfo = DevList[0]
|
||||
|
||||
# Check if already opened
|
||||
try:
|
||||
if mvsdk.CameraIsOpened(DevInfo):
|
||||
return False, f"Camera '{DevInfo.GetFriendlyName()}' is already opened by another process", DevInfo
|
||||
except:
|
||||
pass # Some cameras might not support this check
|
||||
|
||||
# Try to initialize
|
||||
try:
|
||||
hCamera = mvsdk.CameraInit(DevInfo, -1, -1)
|
||||
|
||||
# Quick capture test
|
||||
try:
|
||||
# Basic setup
|
||||
mvsdk.CameraSetTriggerMode(hCamera, 0)
|
||||
mvsdk.CameraPlay(hCamera)
|
||||
|
||||
# Try to get one frame with short timeout
|
||||
pRawData, FrameHead = mvsdk.CameraGetImageBuffer(hCamera, 500) # 0.5 second timeout
|
||||
mvsdk.CameraReleaseImageBuffer(hCamera, pRawData)
|
||||
|
||||
# Success - close and return
|
||||
mvsdk.CameraUnInit(hCamera)
|
||||
return True, f"Camera '{DevInfo.GetFriendlyName()}' is ready for capture", DevInfo
|
||||
|
||||
except mvsdk.CameraException as e:
|
||||
mvsdk.CameraUnInit(hCamera)
|
||||
if e.error_code == mvsdk.CAMERA_STATUS_TIME_OUT:
|
||||
return False, "Camera timeout - may be busy or not streaming properly", DevInfo
|
||||
else:
|
||||
return False, f"Camera capture test failed: {e.message}", DevInfo
|
||||
|
||||
except mvsdk.CameraException as e:
|
||||
if e.error_code == mvsdk.CAMERA_STATUS_DEVICE_IS_OPENED:
|
||||
return False, f"Camera '{DevInfo.GetFriendlyName()}' is already in use", DevInfo
|
||||
elif e.error_code == mvsdk.CAMERA_STATUS_ACCESS_DENY:
|
||||
return False, f"Access denied to camera '{DevInfo.GetFriendlyName()}'", DevInfo
|
||||
else:
|
||||
return False, f"Camera initialization failed: {e.message}", DevInfo
|
||||
|
||||
except Exception as e:
|
||||
return False, f"Camera check failed: {str(e)}", None
|
||||
|
||||
|
||||
def get_camera_ranges(hCamera):
|
||||
"""
|
||||
Get the available ranges for camera settings
|
||||
"""
|
||||
try:
|
||||
# Get exposure time range
|
||||
exp_min, exp_max, exp_step = mvsdk.CameraGetExposureTimeRange(hCamera)
|
||||
print(f"Exposure time range: {exp_min:.1f} - {exp_max:.1f} μs (step: {exp_step:.1f})")
|
||||
|
||||
# Get analog gain range
|
||||
gain_min, gain_max, gain_step = mvsdk.CameraGetAnalogGainXRange(hCamera)
|
||||
print(f"Analog gain range: {gain_min:.2f} - {gain_max:.2f}x (step: {gain_step:.3f})")
|
||||
|
||||
return (exp_min, exp_max, exp_step), (gain_min, gain_max, gain_step)
|
||||
except Exception as e:
|
||||
print(f"Could not get camera ranges: {e}")
|
||||
return None, None
|
||||
|
||||
|
||||
def capture_images(exposure_time_us=2000, analog_gain=1.0):
|
||||
"""
|
||||
Main function to capture images from GigE camera
|
||||
|
||||
Parameters:
|
||||
- exposure_time_us: Exposure time in microseconds (default: 2000 = 2ms)
|
||||
- analog_gain: Analog gain multiplier (default: 1.0)
|
||||
"""
|
||||
# Check if camera is ready for capture
|
||||
print("Checking camera availability...")
|
||||
ready, message, camera_info = is_camera_ready_for_capture()
|
||||
|
||||
if not ready:
|
||||
print(f"❌ Camera not ready: {message}")
|
||||
print("\nPossible solutions:")
|
||||
print("- Close any other camera applications (preview software, etc.)")
|
||||
print("- Check camera connection and power")
|
||||
print("- Wait a moment and try again")
|
||||
return False
|
||||
|
||||
print(f"✅ {message}")
|
||||
|
||||
# Initialize SDK (already done in status check, but ensure it's ready)
|
||||
try:
|
||||
mvsdk.CameraSdkInit(1) # Initialize SDK with English language
|
||||
except Exception as e:
|
||||
print(f"SDK initialization failed: {e}")
|
||||
return False
|
||||
|
||||
# Enumerate cameras
|
||||
DevList = mvsdk.CameraEnumerateDevice()
|
||||
nDev = len(DevList)
|
||||
|
||||
if nDev < 1:
|
||||
print("No camera was found!")
|
||||
return False
|
||||
|
||||
print(f"Found {nDev} camera(s):")
|
||||
for i, DevInfo in enumerate(DevList):
|
||||
print(f"{i}: {DevInfo.GetFriendlyName()} {DevInfo.GetPortType()}")
|
||||
|
||||
# Select camera (use first one if only one available)
|
||||
camera_index = 0 if nDev == 1 else int(input("Select camera index: "))
|
||||
DevInfo = DevList[camera_index]
|
||||
print(f"Selected camera: {DevInfo.GetFriendlyName()}")
|
||||
|
||||
# Initialize camera
|
||||
hCamera = 0
|
||||
try:
|
||||
hCamera = mvsdk.CameraInit(DevInfo, -1, -1)
|
||||
print("Camera initialized successfully")
|
||||
except mvsdk.CameraException as e:
|
||||
print(f"CameraInit Failed({e.error_code}): {e.message}")
|
||||
return False
|
||||
|
||||
try:
|
||||
# Get camera capabilities
|
||||
cap = mvsdk.CameraGetCapability(hCamera)
|
||||
|
||||
# Check if it's a mono or color camera
|
||||
monoCamera = cap.sIspCapacity.bMonoSensor != 0
|
||||
print(f"Camera type: {'Monochrome' if monoCamera else 'Color'}")
|
||||
|
||||
# Get camera ranges
|
||||
exp_range, gain_range = get_camera_ranges(hCamera)
|
||||
|
||||
# Set output format
|
||||
if monoCamera:
|
||||
mvsdk.CameraSetIspOutFormat(hCamera, mvsdk.CAMERA_MEDIA_TYPE_MONO8)
|
||||
else:
|
||||
mvsdk.CameraSetIspOutFormat(hCamera, mvsdk.CAMERA_MEDIA_TYPE_BGR8)
|
||||
|
||||
# Set camera to continuous capture mode
|
||||
mvsdk.CameraSetTriggerMode(hCamera, 0)
|
||||
|
||||
# Set manual exposure with improved control
|
||||
mvsdk.CameraSetAeState(hCamera, 0) # Disable auto exposure
|
||||
|
||||
# Clamp exposure time to valid range
|
||||
if exp_range:
|
||||
exp_min, exp_max, exp_step = exp_range
|
||||
exposure_time_us = max(exp_min, min(exp_max, exposure_time_us))
|
||||
|
||||
mvsdk.CameraSetExposureTime(hCamera, exposure_time_us)
|
||||
print(f"Set exposure time: {exposure_time_us/1000:.1f}ms")
|
||||
|
||||
# Set analog gain
|
||||
if gain_range:
|
||||
gain_min, gain_max, gain_step = gain_range
|
||||
analog_gain = max(gain_min, min(gain_max, analog_gain))
|
||||
|
||||
try:
|
||||
mvsdk.CameraSetAnalogGainX(hCamera, analog_gain)
|
||||
print(f"Set analog gain: {analog_gain:.2f}x")
|
||||
except Exception as e:
|
||||
print(f"Could not set analog gain: {e}")
|
||||
|
||||
# Start camera
|
||||
mvsdk.CameraPlay(hCamera)
|
||||
print("Camera started")
|
||||
|
||||
# Calculate frame buffer size
|
||||
FrameBufferSize = cap.sResolutionRange.iWidthMax * cap.sResolutionRange.iHeightMax * (1 if monoCamera else 3)
|
||||
|
||||
# Allocate frame buffer
|
||||
pFrameBuffer = mvsdk.CameraAlignMalloc(FrameBufferSize, 16)
|
||||
|
||||
# Create images directory if it doesn't exist
|
||||
if not os.path.exists("images"):
|
||||
os.makedirs("images")
|
||||
|
||||
print("Starting image capture...")
|
||||
print("Capturing 10 images with 200ms intervals...")
|
||||
|
||||
# Capture 10 images
|
||||
for i in range(10):
|
||||
try:
|
||||
# Get image from camera (timeout: 2000ms)
|
||||
pRawData, FrameHead = mvsdk.CameraGetImageBuffer(hCamera, 2000)
|
||||
|
||||
# Process the raw image data
|
||||
mvsdk.CameraImageProcess(hCamera, pRawData, pFrameBuffer, FrameHead)
|
||||
|
||||
# Release the raw data buffer
|
||||
mvsdk.CameraReleaseImageBuffer(hCamera, pRawData)
|
||||
|
||||
# Handle Windows image flip (images are upside down on Windows)
|
||||
if platform.system() == "Windows":
|
||||
mvsdk.CameraFlipFrameBuffer(pFrameBuffer, FrameHead, 1)
|
||||
|
||||
# Convert to numpy array for OpenCV
|
||||
frame_data = (mvsdk.c_ubyte * FrameHead.uBytes).from_address(pFrameBuffer)
|
||||
frame = np.frombuffer(frame_data, dtype=np.uint8)
|
||||
|
||||
# Reshape based on camera type
|
||||
if FrameHead.uiMediaType == mvsdk.CAMERA_MEDIA_TYPE_MONO8:
|
||||
frame = frame.reshape((FrameHead.iHeight, FrameHead.iWidth))
|
||||
else:
|
||||
frame = frame.reshape((FrameHead.iHeight, FrameHead.iWidth, 3))
|
||||
|
||||
# Generate filename with timestamp
|
||||
timestamp = datetime.now().strftime("%Y%m%d_%H%M%S_%f")[:-3] # milliseconds
|
||||
filename = f"images/image_{i+1:02d}_{timestamp}.jpg"
|
||||
|
||||
# Save image using OpenCV
|
||||
success = cv2.imwrite(filename, frame)
|
||||
|
||||
if success:
|
||||
print(f"Image {i+1}/10 saved: {filename} ({FrameHead.iWidth}x{FrameHead.iHeight})")
|
||||
else:
|
||||
print(f"Failed to save image {i+1}/10")
|
||||
|
||||
# Wait 200ms before next capture (except for the last image)
|
||||
if i < 9:
|
||||
time.sleep(0.2)
|
||||
|
||||
except mvsdk.CameraException as e:
|
||||
print(f"Failed to capture image {i+1}/10 ({e.error_code}): {e.message}")
|
||||
continue
|
||||
|
||||
print("Image capture completed!")
|
||||
|
||||
# Cleanup
|
||||
mvsdk.CameraAlignFree(pFrameBuffer)
|
||||
|
||||
finally:
|
||||
# Close camera
|
||||
mvsdk.CameraUnInit(hCamera)
|
||||
print("Camera closed")
|
||||
|
||||
return True
|
||||
|
||||
|
||||
if __name__ == "__main__":
    print("GigE Camera Image Capture Script")
    print("=" * 40)
    print("Note: If images are overexposed, you can adjust the exposure settings:")
    print("- Lower exposure_time_us for darker images (e.g., 1000-5000)")
    print("- Lower analog_gain for less amplification (e.g., 0.5-2.0)")
    print()

    # You can adjust these values to fix overexposure.
    # Settings for the cracker camera: 6ms exposure, 16x gain
    success = capture_images(exposure_time_us=6000, analog_gain=16.0)
    # Settings for the blower camera: 1ms exposure, 3.5x gain
    # (note: this second call runs unconditionally and overwrites `success`)
    success = capture_images(exposure_time_us=1000, analog_gain=3.5)

    if success:
        print("\nCapture completed successfully!")
        print("Images saved in the 'images' directory")
    else:
        print("\nCapture failed!")

    input("Press Enter to exit...")
old tests/camera_status_test.ipynb (new file, 607 lines)
@@ -0,0 +1,607 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "intro",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Camera Status and Availability Testing\n",
|
||||
"\n",
|
||||
"This notebook tests various methods to check camera status and availability before attempting to capture images.\n",
|
||||
"\n",
|
||||
"## Key Functions to Test:\n",
|
||||
"- `CameraIsOpened()` - Check if camera is already opened by another process\n",
|
||||
"- `CameraInit()` - Try to initialize and catch specific error codes\n",
|
||||
"- `CameraGetImageBuffer()` - Test actual image capture with timeout\n",
|
||||
"- Error code analysis for different failure scenarios"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 1,
|
||||
"id": "imports",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Libraries imported successfully!\n",
|
||||
"Platform: Linux\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"# Import required libraries\n",
|
||||
"import os\n",
|
||||
"import sys\n",
|
||||
"import time\n",
|
||||
"import numpy as np\n",
|
||||
"import cv2\n",
|
||||
"import platform\n",
|
||||
"from datetime import datetime\n",
|
||||
"\n",
|
||||
"# Add the python demo directory to path to import mvsdk\n",
|
||||
"sys.path.append('./python demo')\n",
|
||||
"import mvsdk\n",
|
||||
"\n",
|
||||
"print(\"Libraries imported successfully!\")\n",
|
||||
"print(f\"Platform: {platform.system()}\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 2,
|
||||
"id": "error-codes",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Relevant Camera Status Error Codes:\n",
|
||||
"========================================\n",
|
||||
"CAMERA_STATUS_SUCCESS: 0\n",
|
||||
"CAMERA_STATUS_DEVICE_IS_OPENED: -18\n",
|
||||
"CAMERA_STATUS_DEVICE_IS_CLOSED: -19\n",
|
||||
"CAMERA_STATUS_ACCESS_DENY: -45\n",
|
||||
"CAMERA_STATUS_DEVICE_LOST: -38\n",
|
||||
"CAMERA_STATUS_TIME_OUT: -12\n",
|
||||
"CAMERA_STATUS_BUSY: -28\n",
|
||||
"CAMERA_STATUS_NO_DEVICE_FOUND: -16\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"# Let's examine the relevant error codes from the SDK\n",
|
||||
"print(\"Relevant Camera Status Error Codes:\")\n",
|
||||
"print(\"=\" * 40)\n",
|
||||
"print(f\"CAMERA_STATUS_SUCCESS: {mvsdk.CAMERA_STATUS_SUCCESS}\")\n",
|
||||
"print(f\"CAMERA_STATUS_DEVICE_IS_OPENED: {mvsdk.CAMERA_STATUS_DEVICE_IS_OPENED}\")\n",
|
||||
"print(f\"CAMERA_STATUS_DEVICE_IS_CLOSED: {mvsdk.CAMERA_STATUS_DEVICE_IS_CLOSED}\")\n",
|
||||
"print(f\"CAMERA_STATUS_ACCESS_DENY: {mvsdk.CAMERA_STATUS_ACCESS_DENY}\")\n",
|
||||
"print(f\"CAMERA_STATUS_DEVICE_LOST: {mvsdk.CAMERA_STATUS_DEVICE_LOST}\")\n",
|
||||
"print(f\"CAMERA_STATUS_TIME_OUT: {mvsdk.CAMERA_STATUS_TIME_OUT}\")\n",
|
||||
"print(f\"CAMERA_STATUS_BUSY: {mvsdk.CAMERA_STATUS_BUSY}\")\n",
|
||||
"print(f\"CAMERA_STATUS_NO_DEVICE_FOUND: {mvsdk.CAMERA_STATUS_NO_DEVICE_FOUND}\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 3,
|
||||
"id": "status-functions",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Camera Availability Check\n",
|
||||
"==============================\n",
|
||||
"✓ SDK initialized successfully\n",
|
||||
"✓ Found 2 camera(s)\n",
|
||||
" 0: Blower-Yield-Cam (192.168.1.165-192.168.1.54)\n",
|
||||
" 1: Cracker-Cam (192.168.1.167-192.168.1.54)\n",
|
||||
"\n",
|
||||
"Testing camera 0: Blower-Yield-Cam\n",
|
||||
"✓ Camera is available (not opened by another process)\n",
|
||||
"✓ Camera initialized successfully\n",
|
||||
"✓ Camera closed after testing\n",
|
||||
"\n",
|
||||
"Testing camera 1: Cracker-Cam\n",
|
||||
"✓ Camera is available (not opened by another process)\n",
|
||||
"✓ Camera initialized successfully\n",
|
||||
"✓ Camera closed after testing\n",
|
||||
"\n",
|
||||
"Results for 2 cameras:\n",
|
||||
" Camera 0: AVAILABLE\n",
|
||||
" Camera 1: AVAILABLE\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"def check_camera_availability():\n",
|
||||
" \"\"\"\n",
|
||||
" Comprehensive camera availability check\n",
|
||||
" \"\"\"\n",
|
||||
" print(\"Camera Availability Check\")\n",
|
||||
" print(\"=\" * 30)\n",
|
||||
" \n",
|
||||
" # Step 1: Initialize SDK\n",
|
||||
" try:\n",
|
||||
" mvsdk.CameraSdkInit(1)\n",
|
||||
" print(\"✓ SDK initialized successfully\")\n",
|
||||
" except Exception as e:\n",
|
||||
" print(f\"✗ SDK initialization failed: {e}\")\n",
|
||||
" return None, \"SDK_INIT_FAILED\"\n",
|
||||
" \n",
|
||||
" # Step 2: Enumerate cameras\n",
|
||||
" try:\n",
|
||||
" DevList = mvsdk.CameraEnumerateDevice()\n",
|
||||
" nDev = len(DevList)\n",
|
||||
" print(f\"✓ Found {nDev} camera(s)\")\n",
|
||||
" \n",
|
||||
" if nDev < 1:\n",
|
||||
" print(\"✗ No cameras detected\")\n",
|
||||
" return None, \"NO_CAMERAS\"\n",
|
||||
" \n",
|
||||
" for i, DevInfo in enumerate(DevList):\n",
|
||||
" print(f\" {i}: {DevInfo.GetFriendlyName()} ({DevInfo.GetPortType()})\")\n",
|
||||
" \n",
|
||||
" except Exception as e:\n",
|
||||
" print(f\"✗ Camera enumeration failed: {e}\")\n",
|
||||
" return None, \"ENUM_FAILED\"\n",
|
||||
" \n",
|
||||
" # Step 3: Check all cameras\n",
|
||||
" camera_results = []\n",
|
||||
" \n",
|
||||
" for i, DevInfo in enumerate(DevList):\n",
|
||||
" print(f\"\\nTesting camera {i}: {DevInfo.GetFriendlyName()}\")\n",
|
||||
" \n",
|
||||
" # Check if camera is already opened\n",
|
||||
" try:\n",
|
||||
" is_opened = mvsdk.CameraIsOpened(DevInfo)\n",
|
||||
" if is_opened:\n",
|
||||
" print(\"✗ Camera is already opened by another process\")\n",
|
||||
" camera_results.append((DevInfo, \"ALREADY_OPENED\"))\n",
|
||||
" continue\n",
|
||||
" else:\n",
|
||||
" print(\"✓ Camera is available (not opened by another process)\")\n",
|
||||
" except Exception as e:\n",
|
||||
" print(f\"⚠ Could not check if camera is opened: {e}\")\n",
|
||||
" \n",
|
||||
" # Try to initialize camera\n",
|
||||
" try:\n",
|
||||
" hCamera = mvsdk.CameraInit(DevInfo, -1, -1)\n",
|
||||
" print(\"✓ Camera initialized successfully\")\n",
|
||||
" camera_results.append((hCamera, \"AVAILABLE\"))\n",
|
||||
" \n",
|
||||
" # Close the camera after testing\n",
|
||||
" try:\n",
|
||||
" mvsdk.CameraUnInit(hCamera)\n",
|
||||
" print(\"✓ Camera closed after testing\")\n",
|
||||
" except Exception as e:\n",
|
||||
" print(f\"⚠ Warning: Could not close camera: {e}\")\n",
|
||||
" \n",
|
||||
" except mvsdk.CameraException as e:\n",
|
||||
" print(f\"✗ Camera initialization failed: {e.error_code} - {e.message}\")\n",
|
||||
" \n",
|
||||
" # Analyze specific error codes\n",
|
||||
" if e.error_code == mvsdk.CAMERA_STATUS_DEVICE_IS_OPENED:\n",
|
||||
" camera_results.append((DevInfo, \"DEVICE_OPENED\"))\n",
|
||||
" elif e.error_code == mvsdk.CAMERA_STATUS_ACCESS_DENY:\n",
|
||||
" camera_results.append((DevInfo, \"ACCESS_DENIED\"))\n",
|
||||
" elif e.error_code == mvsdk.CAMERA_STATUS_DEVICE_LOST:\n",
|
||||
" camera_results.append((DevInfo, \"DEVICE_LOST\"))\n",
|
||||
" else:\n",
|
||||
" camera_results.append((DevInfo, f\"INIT_ERROR_{e.error_code}\"))\n",
|
||||
" \n",
|
||||
" except Exception as e:\n",
|
||||
" print(f\"✗ Unexpected error during initialization: {e}\")\n",
|
||||
" camera_results.append((DevInfo, \"UNEXPECTED_ERROR\"))\n",
|
||||
" \n",
|
||||
" return camera_results\n",
|
||||
"\n",
|
||||
"# Test the function\n",
|
||||
"camera_results = check_camera_availability()\n",
|
||||
"print(f\"\\nResults for {len(camera_results)} cameras:\")\n",
|
||||
"for i, (camera_info, status) in enumerate(camera_results):\n",
|
||||
" if hasattr(camera_info, 'GetFriendlyName'):\n",
|
||||
" name = camera_info.GetFriendlyName()\n",
|
||||
" else:\n",
|
||||
" name = f\"Camera {i}\"\n",
|
||||
" print(f\" {name}: {status}\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 4,
|
||||
"id": "test-capture-availability",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"\n",
|
||||
"Testing capture readiness for 2 available camera(s):\n",
|
||||
"\n",
|
||||
"Testing camera 0 capture readiness...\n",
|
||||
"\n",
|
||||
"Testing Camera Capture Readiness\n",
|
||||
"===================================\n",
|
||||
"✓ Camera capabilities retrieved\n",
|
||||
"✓ Camera type: Color\n",
|
||||
"✓ Basic camera configuration set\n",
|
||||
"✓ Camera started\n",
|
||||
"✓ Frame buffer allocated\n",
|
||||
"\n",
|
||||
"Testing image capture...\n",
|
||||
"✓ Image captured successfully: 1280x1024\n",
|
||||
"✓ Image processed and buffer released\n",
|
||||
"✓ Cleanup completed\n",
|
||||
"Capture Ready for Blower-Yield-Cam: True\n",
|
||||
"\n",
|
||||
"Testing camera 1 capture readiness...\n",
|
||||
"\n",
|
||||
"Testing Camera Capture Readiness\n",
|
||||
"===================================\n",
|
||||
"✓ Camera capabilities retrieved\n",
|
||||
"✓ Camera type: Color\n",
|
||||
"✓ Basic camera configuration set\n",
|
||||
"✓ Camera started\n",
|
||||
"✓ Frame buffer allocated\n",
|
||||
"\n",
|
||||
"Testing image capture...\n",
|
||||
"✓ Image captured successfully: 1280x1024\n",
|
||||
"✓ Image processed and buffer released\n",
|
||||
"✓ Cleanup completed\n",
|
||||
"Capture Ready for Cracker-Cam: True\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"def test_camera_capture_readiness(hCamera):\n",
|
||||
" \"\"\"\n",
|
||||
" Test if camera is ready for image capture\n",
|
||||
" \"\"\"\n",
|
||||
" if not isinstance(hCamera, int):\n",
|
||||
" print(\"Camera not properly initialized, skipping capture test\")\n",
|
||||
" return False\n",
|
||||
" \n",
|
||||
" print(\"\\nTesting Camera Capture Readiness\")\n",
|
||||
" print(\"=\" * 35)\n",
|
||||
" \n",
|
||||
" try:\n",
|
||||
" # Get camera capabilities\n",
|
||||
" cap = mvsdk.CameraGetCapability(hCamera)\n",
|
||||
" print(\"✓ Camera capabilities retrieved\")\n",
|
||||
" \n",
|
||||
" # Check camera type\n",
|
||||
" monoCamera = (cap.sIspCapacity.bMonoSensor != 0)\n",
|
||||
" print(f\"✓ Camera type: {'Monochrome' if monoCamera else 'Color'}\")\n",
|
||||
" \n",
|
||||
" # Set basic configuration\n",
|
||||
" if monoCamera:\n",
|
||||
" mvsdk.CameraSetIspOutFormat(hCamera, mvsdk.CAMERA_MEDIA_TYPE_MONO8)\n",
|
||||
" else:\n",
|
||||
" mvsdk.CameraSetIspOutFormat(hCamera, mvsdk.CAMERA_MEDIA_TYPE_BGR8)\n",
|
||||
" \n",
|
||||
" mvsdk.CameraSetTriggerMode(hCamera, 0) # Continuous mode\n",
|
||||
" mvsdk.CameraSetAeState(hCamera, 0) # Manual exposure\n",
|
||||
" mvsdk.CameraSetExposureTime(hCamera, 5000) # 5ms exposure\n",
|
||||
" print(\"✓ Basic camera configuration set\")\n",
|
||||
" \n",
|
||||
" # Start camera\n",
|
||||
" mvsdk.CameraPlay(hCamera)\n",
|
||||
" print(\"✓ Camera started\")\n",
|
||||
" \n",
|
||||
" # Allocate buffer\n",
|
||||
" FrameBufferSize = cap.sResolutionRange.iWidthMax * cap.sResolutionRange.iHeightMax * (1 if monoCamera else 3)\n",
|
||||
" pFrameBuffer = mvsdk.CameraAlignMalloc(FrameBufferSize, 16)\n",
|
||||
" print(\"✓ Frame buffer allocated\")\n",
|
||||
" \n",
|
||||
" # Test image capture with short timeout\n",
|
||||
" print(\"\\nTesting image capture...\")\n",
|
||||
" try:\n",
|
||||
" pRawData, FrameHead = mvsdk.CameraGetImageBuffer(hCamera, 1000) # 1 second timeout\n",
|
||||
" print(f\"✓ Image captured successfully: {FrameHead.iWidth}x{FrameHead.iHeight}\")\n",
|
||||
" \n",
|
||||
" # Process and release\n",
|
||||
" mvsdk.CameraImageProcess(hCamera, pRawData, pFrameBuffer, FrameHead)\n",
|
||||
" mvsdk.CameraReleaseImageBuffer(hCamera, pRawData)\n",
|
||||
" print(\"✓ Image processed and buffer released\")\n",
|
||||
" \n",
|
||||
" capture_success = True\n",
|
||||
" \n",
|
||||
" except mvsdk.CameraException as e:\n",
|
||||
" print(f\"✗ Image capture failed: {e.error_code} - {e.message}\")\n",
|
||||
" \n",
|
||||
" if e.error_code == mvsdk.CAMERA_STATUS_TIME_OUT:\n",
|
||||
" print(\" → Camera timeout - may be busy or not streaming\")\n",
|
||||
" elif e.error_code == mvsdk.CAMERA_STATUS_DEVICE_LOST:\n",
|
||||
" print(\" → Device lost - camera disconnected\")\n",
|
||||
" elif e.error_code == mvsdk.CAMERA_STATUS_BUSY:\n",
|
||||
" print(\" → Camera busy - may be used by another process\")\n",
|
||||
" \n",
|
||||
" capture_success = False\n",
|
||||
" \n",
|
||||
" # Cleanup\n",
|
||||
" mvsdk.CameraAlignFree(pFrameBuffer)\n",
|
||||
" print(\"✓ Cleanup completed\")\n",
|
||||
" \n",
|
||||
" return capture_success\n",
|
||||
" \n",
|
||||
" except Exception as e:\n",
|
||||
" print(f\"✗ Capture readiness test failed: {e}\")\n",
|
||||
" return False\n",
|
||||
"\n",
|
||||
"# Test capture readiness for available cameras\n",
|
||||
"available_cameras = [(cam, stat) for cam, stat in camera_results if stat == \"AVAILABLE\"]\n",
|
||||
"\n",
|
||||
"if available_cameras:\n",
|
||||
" print(f\"\\nTesting capture readiness for {len(available_cameras)} available camera(s):\")\n",
|
||||
" for i, (camera_handle, status) in enumerate(available_cameras):\n",
|
||||
" if hasattr(camera_handle, 'GetFriendlyName'):\n",
|
||||
" # This shouldn't happen for AVAILABLE cameras, but just in case\n",
|
||||
" print(f\"\\nCamera {i}: Invalid handle\")\n",
|
||||
" continue\n",
|
||||
" \n",
|
||||
" print(f\"\\nTesting camera {i} capture readiness...\")\n",
|
||||
" # Re-initialize the camera for testing since we closed it earlier\n",
|
||||
" try:\n",
|
||||
" # Find the camera info from the original results\n",
|
||||
" DevList = mvsdk.CameraEnumerateDevice()\n",
|
||||
" if i < len(DevList):\n",
|
||||
" DevInfo = DevList[i]\n",
|
||||
" hCamera = mvsdk.CameraInit(DevInfo, -1, -1)\n",
|
||||
" capture_ready = test_camera_capture_readiness(hCamera)\n",
|
||||
" print(f\"Capture Ready for {DevInfo.GetFriendlyName()}: {capture_ready}\")\n",
|
||||
" mvsdk.CameraUnInit(hCamera)\n",
|
||||
" else:\n",
|
||||
" print(f\"Could not re-initialize camera {i}\")\n",
|
||||
" except Exception as e:\n",
|
||||
" print(f\"Error testing camera {i}: {e}\")\n",
|
||||
"else:\n",
|
||||
" print(\"\\nNo cameras are available for capture testing\")\n",
|
||||
" print(\"Camera statuses:\")\n",
|
||||
" for i, (cam_info, status) in enumerate(camera_results):\n",
|
||||
" if hasattr(cam_info, 'GetFriendlyName'):\n",
|
||||
" name = cam_info.GetFriendlyName()\n",
|
||||
" else:\n",
|
||||
" name = f\"Camera {i}\"\n",
|
||||
" print(f\" {name}: {status}\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 5,
|
||||
"id": "comprehensive-check",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"\n",
|
||||
"==================================================\n",
|
||||
"COMPREHENSIVE CAMERA CHECK\n",
|
||||
"==================================================\n",
|
||||
"Camera Availability Check\n",
|
||||
"==============================\n",
|
||||
"✓ SDK initialized successfully\n",
|
||||
"✓ Found 2 camera(s)\n",
|
||||
" 0: Blower-Yield-Cam (192.168.1.165-192.168.1.54)\n",
|
||||
" 1: Cracker-Cam (192.168.1.167-192.168.1.54)\n",
|
||||
"\n",
|
||||
"Testing camera 0: Blower-Yield-Cam\n",
|
||||
"✓ Camera is available (not opened by another process)\n",
|
||||
"✓ Camera initialized successfully\n",
|
||||
"✓ Camera closed after testing\n",
|
||||
"\n",
|
||||
"Testing camera 1: Cracker-Cam\n",
|
||||
"✓ Camera is available (not opened by another process)\n",
|
||||
"✓ Camera initialized successfully\n",
|
||||
"✓ Camera closed after testing\n",
|
||||
"\n",
|
||||
"==================================================\n",
|
||||
"FINAL RESULTS:\n",
|
||||
"Camera Available: False\n",
|
||||
"Capture Ready: False\n",
|
||||
"Status: (6, 'AVAILABLE')\n",
|
||||
"==================================================\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"def comprehensive_camera_check():\n",
|
||||
" \"\"\"\n",
|
||||
" Complete camera availability and readiness check\n",
|
||||
" Returns: (available, ready, handle_or_info, status_message)\n",
|
||||
" \"\"\"\n",
|
||||
" # Check availability\n",
|
||||
" handle_or_info, status = check_camera_availability()\n",
|
||||
" \n",
|
||||
" available = status == \"AVAILABLE\"\n",
|
||||
" ready = False\n",
|
||||
" \n",
|
||||
" if available:\n",
|
||||
" # Test capture readiness\n",
|
||||
" ready = test_camera_capture_readiness(handle_or_info)\n",
|
||||
" \n",
|
||||
" # Close camera after testing\n",
|
||||
" try:\n",
|
||||
" mvsdk.CameraUnInit(handle_or_info)\n",
|
||||
" print(\"✓ Camera closed after testing\")\n",
|
||||
" except:\n",
|
||||
" pass\n",
|
||||
" \n",
|
||||
" return available, ready, handle_or_info, status\n",
|
||||
"\n",
|
||||
"# Run comprehensive check\n",
|
||||
"print(\"\\n\" + \"=\" * 50)\n",
|
||||
"print(\"COMPREHENSIVE CAMERA CHECK\")\n",
|
||||
"print(\"=\" * 50)\n",
|
||||
"\n",
|
||||
"available, ready, info, status_msg = comprehensive_camera_check()\n",
|
||||
"\n",
|
||||
"print(\"\\n\" + \"=\" * 50)\n",
|
||||
"print(\"FINAL RESULTS:\")\n",
|
||||
"print(f\"Camera Available: {available}\")\n",
|
||||
"print(f\"Capture Ready: {ready}\")\n",
|
||||
"print(f\"Status: {status_msg}\")\n",
|
||||
"print(\"=\" * 50)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 6,
|
||||
"id": "status-check-function",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"\n",
|
||||
"Testing Simple Camera Ready Check:\n",
|
||||
"========================================\n",
|
||||
"Ready: True\n",
|
||||
"Message: Camera 'Blower-Yield-Cam' is ready for capture\n",
|
||||
"Camera: Blower-Yield-Cam\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"def is_camera_ready_for_capture():\n",
|
||||
" \"\"\"\n",
|
||||
" Simple function to check if camera is ready for capture.\n",
|
||||
" Returns: (ready: bool, message: str, camera_info: object or None)\n",
|
||||
" \n",
|
||||
" This is the function you can use in your main capture script.\n",
|
||||
" \"\"\"\n",
|
||||
" try:\n",
|
||||
" # Initialize SDK\n",
|
||||
" mvsdk.CameraSdkInit(1)\n",
|
||||
" \n",
|
||||
" # Enumerate cameras\n",
|
||||
" DevList = mvsdk.CameraEnumerateDevice()\n",
|
||||
" if len(DevList) < 1:\n",
|
||||
" return False, \"No cameras found\", None\n",
|
||||
" \n",
|
||||
" DevInfo = DevList[0]\n",
|
||||
" \n",
|
||||
" # Check if already opened\n",
|
||||
" try:\n",
|
||||
" if mvsdk.CameraIsOpened(DevInfo):\n",
|
||||
" return False, f\"Camera '{DevInfo.GetFriendlyName()}' is already opened by another process\", DevInfo\n",
|
||||
" except:\n",
|
||||
" pass # Some cameras might not support this check\n",
|
||||
" \n",
|
||||
" # Try to initialize\n",
|
||||
" try:\n",
|
||||
" hCamera = mvsdk.CameraInit(DevInfo, -1, -1)\n",
|
||||
" \n",
|
||||
" # Quick capture test\n",
|
||||
" try:\n",
|
||||
" # Basic setup\n",
|
||||
" mvsdk.CameraSetTriggerMode(hCamera, 0)\n",
|
||||
" mvsdk.CameraPlay(hCamera)\n",
|
||||
" \n",
|
||||
" # Try to get one frame with short timeout\n",
|
||||
" pRawData, FrameHead = mvsdk.CameraGetImageBuffer(hCamera, 500) # 0.5 second timeout\n",
|
||||
" mvsdk.CameraReleaseImageBuffer(hCamera, pRawData)\n",
|
||||
" \n",
|
||||
" # Success - close and return\n",
|
||||
" mvsdk.CameraUnInit(hCamera)\n",
|
||||
" return True, f\"Camera '{DevInfo.GetFriendlyName()}' is ready for capture\", DevInfo\n",
|
||||
" \n",
|
||||
" except mvsdk.CameraException as e:\n",
|
||||
" mvsdk.CameraUnInit(hCamera)\n",
|
||||
" if e.error_code == mvsdk.CAMERA_STATUS_TIME_OUT:\n",
|
||||
" return False, \"Camera timeout - may be busy or not streaming properly\", DevInfo\n",
|
||||
" else:\n",
|
||||
" return False, f\"Camera capture test failed: {e.message}\", DevInfo\n",
|
||||
" \n",
|
||||
" except mvsdk.CameraException as e:\n",
|
||||
" if e.error_code == mvsdk.CAMERA_STATUS_DEVICE_IS_OPENED:\n",
|
||||
" return False, f\"Camera '{DevInfo.GetFriendlyName()}' is already in use\", DevInfo\n",
|
||||
" elif e.error_code == mvsdk.CAMERA_STATUS_ACCESS_DENY:\n",
|
||||
" return False, f\"Access denied to camera '{DevInfo.GetFriendlyName()}'\", DevInfo\n",
|
||||
" else:\n",
|
||||
" return False, f\"Camera initialization failed: {e.message}\", DevInfo\n",
|
||||
" \n",
|
||||
" except Exception as e:\n",
|
||||
" return False, f\"Camera check failed: {str(e)}\", None\n",
|
||||
"\n",
|
||||
"# Test the simple function\n",
|
||||
"print(\"\\nTesting Simple Camera Ready Check:\")\n",
|
||||
"print(\"=\" * 40)\n",
|
||||
"\n",
|
||||
"ready, message, camera_info = is_camera_ready_for_capture()\n",
|
||||
"print(f\"Ready: {ready}\")\n",
|
||||
"print(f\"Message: {message}\")\n",
|
||||
"if camera_info:\n",
|
||||
" print(f\"Camera: {camera_info.GetFriendlyName()}\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "usage-example",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Usage Example\n",
|
||||
"\n",
|
||||
"Here's how you can integrate the camera status check into your capture script:\n",
|
||||
"\n",
|
||||
"```python\n",
|
||||
"# Before attempting to capture images\n",
|
||||
"ready, message, camera_info = is_camera_ready_for_capture()\n",
|
||||
"\n",
|
||||
"if not ready:\n",
|
||||
" print(f\"Camera not ready: {message}\")\n",
|
||||
" # Handle the error appropriately\n",
|
||||
" return False\n",
|
||||
"\n",
|
||||
"print(f\"Camera ready: {message}\")\n",
|
||||
"# Proceed with normal capture logic\n",
|
||||
"```\n",
|
||||
"\n",
|
||||
"## Key Findings\n",
|
||||
"\n",
|
||||
"1. **`CameraIsOpened()`** - Checks if camera is opened by another process\n",
|
||||
"2. **`CameraInit()` error codes** - Provide specific failure reasons\n",
|
||||
"3. **Quick capture test** - Verifies camera is actually streaming\n",
|
||||
"4. **Timeout handling** - Detects if camera is busy/unresponsive\n",
|
||||
"\n",
|
||||
"The most reliable approach is to:\n",
|
||||
"1. Check if camera exists\n",
|
||||
"2. Check if it's already opened\n",
|
||||
"3. Try to initialize it\n",
|
||||
"4. Test actual image capture with short timeout\n",
|
||||
"5. Clean up properly"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "USDA-vision-cameras",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.11.2"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
}
|
||||
349
old tests/camera_test_setup.ipynb
Normal file
@@ -0,0 +1,349 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# GigE Camera Test Setup\n",
|
||||
"\n",
|
||||
"This notebook helps you test and configure your GigE cameras for the USDA vision project.\n",
|
||||
"\n",
|
||||
"## Key Features:\n",
|
||||
"- Test camera connectivity\n",
|
||||
"- Display images inline (no GUI needed)\n",
|
||||
"- Save test images/videos to `/storage`\n",
|
||||
"- Configure camera parameters\n",
|
||||
"- Test recording functionality"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import cv2\n",
|
||||
"import numpy as np\n",
|
||||
"import matplotlib.pyplot as plt\n",
|
||||
"import os\n",
|
||||
"from datetime import datetime\n",
|
||||
"import time\n",
|
||||
"from pathlib import Path\n",
|
||||
"import imageio\n",
|
||||
"from tqdm import tqdm\n",
|
||||
"\n",
|
||||
"# Configure matplotlib for inline display\n",
|
||||
"plt.rcParams['figure.figsize'] = (12, 8)\n",
|
||||
"plt.rcParams['image.cmap'] = 'gray'\n",
|
||||
"\n",
|
||||
"print(\"✅ All imports successful!\")\n",
|
||||
"print(f\"OpenCV version: {cv2.__version__}\")\n",
|
||||
"print(f\"NumPy version: {np.__version__}\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Utility Functions"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"def display_image(image, title=\"Image\", figsize=(10, 8)):\n",
|
||||
" \"\"\"Display image inline in Jupyter notebook\"\"\"\n",
|
||||
" plt.figure(figsize=figsize)\n",
|
||||
" if len(image.shape) == 3:\n",
|
||||
" # Convert BGR to RGB for matplotlib\n",
|
||||
" image_rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)\n",
|
||||
" plt.imshow(image_rgb)\n",
|
||||
" else:\n",
|
||||
" plt.imshow(image, cmap='gray')\n",
|
||||
" plt.title(title)\n",
|
||||
" plt.axis('off')\n",
|
||||
" plt.tight_layout()\n",
|
||||
" plt.show()\n",
|
||||
"\n",
|
||||
"def save_image_to_storage(image, filename_prefix=\"test_image\"):\n",
|
||||
" \"\"\"Save image to /storage with timestamp\"\"\"\n",
|
||||
" timestamp = datetime.now().strftime(\"%Y%m%d_%H%M%S\")\n",
|
||||
" filename = f\"{filename_prefix}_{timestamp}.jpg\"\n",
|
||||
" filepath = f\"/storage/{filename}\"\n",
|
||||
" \n",
|
||||
" success = cv2.imwrite(filepath, image)\n",
|
||||
" if success:\n",
|
||||
" print(f\"✅ Image saved: {filepath}\")\n",
|
||||
" return filepath\n",
|
||||
" else:\n",
|
||||
" print(f\"❌ Failed to save image: {filepath}\")\n",
|
||||
" return None\n",
|
||||
"\n",
|
||||
"def create_storage_subdir(subdir_name):\n",
|
||||
" \"\"\"Create subdirectory in /storage\"\"\"\n",
|
||||
" path = Path(f\"/storage/{subdir_name}\")\n",
|
||||
" path.mkdir(exist_ok=True)\n",
|
||||
" print(f\"📁 Directory ready: {path}\")\n",
|
||||
" return str(path)\n",
|
||||
"\n",
|
||||
"def list_available_cameras():\n",
|
||||
" \"\"\"List all available camera devices\"\"\"\n",
|
||||
" print(\"🔍 Scanning for available cameras...\")\n",
|
||||
" available_cameras = []\n",
|
||||
" \n",
|
||||
" # Test camera indices 0-10\n",
|
||||
" for i in range(11):\n",
|
||||
" cap = cv2.VideoCapture(i)\n",
|
||||
" if cap.isOpened():\n",
|
||||
" ret, frame = cap.read()\n",
|
||||
" if ret:\n",
|
||||
" available_cameras.append(i)\n",
|
||||
" print(f\"📷 Camera {i}: Available (Resolution: {frame.shape[1]}x{frame.shape[0]})\")\n",
|
||||
" cap.release()\n",
|
||||
" else:\n",
|
||||
" # Try with different backends for GigE cameras\n",
|
||||
" cap = cv2.VideoCapture(i, cv2.CAP_GSTREAMER)\n",
|
||||
" if cap.isOpened():\n",
|
||||
" ret, frame = cap.read()\n",
|
||||
" if ret:\n",
|
||||
" available_cameras.append(i)\n",
|
||||
" print(f\"📷 Camera {i}: Available via GStreamer (Resolution: {frame.shape[1]}x{frame.shape[0]})\")\n",
|
||||
" cap.release()\n",
|
||||
" \n",
|
||||
" if not available_cameras:\n",
|
||||
" print(\"❌ No cameras found\")\n",
|
||||
" \n",
|
||||
" return available_cameras\n",
|
||||
"\n",
|
||||
"print(\"✅ Utility functions loaded!\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Step 1: Check Storage Directory"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Check storage directory\n",
|
||||
"storage_path = Path(\"/storage\")\n",
|
||||
"print(f\"Storage directory exists: {storage_path.exists()}\")\n",
|
||||
"print(f\"Storage directory writable: {os.access('/storage', os.W_OK)}\")\n",
|
||||
"\n",
|
||||
"# Create test subdirectories\n",
|
||||
"test_images_dir = create_storage_subdir(\"test_images\")\n",
|
||||
"test_videos_dir = create_storage_subdir(\"test_videos\")\n",
|
||||
"camera1_dir = create_storage_subdir(\"camera1\")\n",
|
||||
"camera2_dir = create_storage_subdir(\"camera2\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Step 2: Scan for Available Cameras"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Scan for cameras\n",
|
||||
"cameras = list_available_cameras()\n",
|
||||
"print(f\"\\n📊 Summary: Found {len(cameras)} camera(s): {cameras}\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Step 3: Test Individual Camera"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Test a specific camera (change camera_id as needed)\n",
|
||||
"camera_id = 0 # Change this to test different cameras\n",
|
||||
"\n",
|
||||
"print(f\"🔧 Testing camera {camera_id}...\")\n",
|
||||
"\n",
|
||||
"# Try different backends for GigE cameras\n",
|
||||
"backends_to_try = [\n",
|
||||
" (cv2.CAP_ANY, \"Default\"),\n",
|
||||
" (cv2.CAP_GSTREAMER, \"GStreamer\"),\n",
|
||||
" (cv2.CAP_V4L2, \"V4L2\"),\n",
|
||||
" (cv2.CAP_FFMPEG, \"FFmpeg\")\n",
|
||||
"]\n",
|
||||
"\n",
|
||||
"successful_backend = None\n",
|
||||
"cap = None\n",
|
||||
"\n",
|
||||
"for backend, name in backends_to_try:\n",
|
||||
" print(f\" Trying {name} backend...\")\n",
|
||||
" cap = cv2.VideoCapture(camera_id, backend)\n",
|
||||
" if cap.isOpened():\n",
|
||||
" ret, frame = cap.read()\n",
|
||||
" if ret:\n",
|
||||
" print(f\" ✅ {name} backend works!\")\n",
|
||||
" successful_backend = (backend, name)\n",
|
||||
" break\n",
|
||||
" else:\n",
|
||||
" print(f\" ❌ {name} backend opened but can't read frames\")\n",
|
||||
" else:\n",
|
||||
" print(f\" ❌ {name} backend failed to open\")\n",
|
||||
" cap.release()\n",
|
||||
"\n",
|
||||
"if successful_backend:\n",
|
||||
" backend, backend_name = successful_backend\n",
|
||||
" cap = cv2.VideoCapture(camera_id, backend)\n",
|
||||
" \n",
|
||||
" # Get camera properties\n",
|
||||
" width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))\n",
|
||||
" height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))\n",
|
||||
" fps = cap.get(cv2.CAP_PROP_FPS)\n",
|
||||
" \n",
|
||||
" print(f\"\\n📷 Camera {camera_id} Properties ({backend_name}):\")\n",
|
||||
" print(f\" Resolution: {width}x{height}\")\n",
|
||||
" print(f\" FPS: {fps}\")\n",
|
||||
" \n",
|
||||
" # Capture a test frame\n",
|
||||
" ret, frame = cap.read()\n",
|
||||
" if ret:\n",
|
||||
" print(f\" Frame shape: {frame.shape}\")\n",
|
||||
" print(f\" Frame dtype: {frame.dtype}\")\n",
|
||||
" \n",
|
||||
" # Display the frame\n",
|
||||
" display_image(frame, f\"Camera {camera_id} Test Frame\")\n",
|
||||
" \n",
|
||||
" # Save test image\n",
|
||||
" save_image_to_storage(frame, f\"camera_{camera_id}_test\")\n",
|
||||
" else:\n",
|
||||
" print(\" ❌ Failed to capture frame\")\n",
|
||||
" \n",
|
||||
" cap.release()\n",
|
||||
"else:\n",
|
||||
" print(f\"❌ Camera {camera_id} not accessible with any backend\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Step 4: Test Video Recording"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Test video recording\n",
|
||||
"def test_video_recording(camera_id, duration_seconds=5, fps=30):\n",
|
||||
" \"\"\"Test video recording from camera\"\"\"\n",
|
||||
" print(f\"🎥 Testing video recording from camera {camera_id} for {duration_seconds} seconds...\")\n",
|
||||
" \n",
|
||||
" # Open camera\n",
|
||||
" cap = cv2.VideoCapture(camera_id)\n",
|
||||
" if not cap.isOpened():\n",
|
||||
" print(f\"❌ Cannot open camera {camera_id}\")\n",
|
||||
" return None\n",
|
||||
" \n",
|
||||
" # Get camera properties\n",
|
||||
" width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))\n",
|
||||
" height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))\n",
|
||||
" \n",
|
||||
" # Create video writer\n",
|
||||
" timestamp = datetime.now().strftime(\"%Y%m%d_%H%M%S\")\n",
|
||||
" video_filename = f\"/storage/test_videos/camera_{camera_id}_test_{timestamp}.mp4\"\n",
|
||||
" \n",
|
||||
" fourcc = cv2.VideoWriter_fourcc(*'mp4v')\n",
|
||||
" out = cv2.VideoWriter(video_filename, fourcc, fps, (width, height))\n",
|
||||
" \n",
|
||||
" if not out.isOpened():\n",
|
||||
" print(\"❌ Cannot create video writer\")\n",
|
||||
" cap.release()\n",
|
||||
" return None\n",
|
||||
" \n",
|
||||
" # Record video\n",
|
||||
" frames_to_capture = duration_seconds * fps\n",
|
||||
" frames_captured = 0\n",
|
||||
" \n",
|
||||
" print(f\"Recording {frames_to_capture} frames...\")\n",
|
||||
" \n",
|
||||
" with tqdm(total=frames_to_capture, desc=\"Recording\") as pbar:\n",
|
||||
" start_time = time.time()\n",
|
||||
" \n",
|
||||
" while frames_captured < frames_to_capture:\n",
|
||||
" ret, frame = cap.read()\n",
|
||||
" if ret:\n",
|
||||
" out.write(frame)\n",
|
||||
" frames_captured += 1\n",
|
||||
" pbar.update(1)\n",
|
||||
" \n",
|
||||
" # Display first frame\n",
|
||||
" if frames_captured == 1:\n",
|
||||
" display_image(frame, f\"First frame from camera {camera_id}\")\n",
|
||||
" else:\n",
|
||||
" print(f\"❌ Failed to read frame {frames_captured}\")\n",
|
||||
" break\n",
|
||||
" \n",
|
||||
" # Cleanup\n",
|
||||
" cap.release()\n",
|
||||
" out.release()\n",
|
||||
" \n",
|
||||
" elapsed_time = time.time() - start_time\n",
|
||||
" actual_fps = frames_captured / elapsed_time\n",
|
||||
" \n",
|
||||
" print(f\"✅ Video saved: {video_filename}\")\n",
|
||||
" print(f\"📊 Captured {frames_captured} frames in {elapsed_time:.2f}s\")\n",
|
||||
" print(f\"📊 Actual FPS: {actual_fps:.2f}\")\n",
|
||||
" \n",
|
||||
" return video_filename\n",
|
||||
"\n",
|
||||
"# Test recording (change camera_id as needed)\n",
|
||||
"if cameras: # Only test if cameras were found\n",
|
||||
" test_camera = cameras[0] # Use first available camera\n",
|
||||
" video_file = test_video_recording(test_camera, duration_seconds=3)\n",
|
||||
"else:\n",
|
||||
" print(\"⚠️ No cameras available for video test\")"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "usda-vision-cameras",
|
||||
"language": "python",
|
||||
"name": "usda-vision-cameras"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.11.0"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 4
|
||||
}
|
||||
439
old tests/camera_video_recorder.py
Normal file
@@ -0,0 +1,439 @@
|
||||
# coding=utf-8
|
||||
import cv2
|
||||
import numpy as np
|
||||
import platform
|
||||
import time
|
||||
import threading
|
||||
from datetime import datetime
|
||||
import os
|
||||
import sys
|
||||
|
||||
# Add the python demo directory to path to import mvsdk
|
||||
sys.path.append("python demo")
|
||||
|
||||
import mvsdk
|
||||
|
||||
|
||||
class CameraVideoRecorder:
|
||||
def __init__(self):
|
||||
self.hCamera = 0
|
||||
self.pFrameBuffer = 0
|
||||
self.cap = None
|
||||
self.monoCamera = False
|
||||
self.recording = False
|
||||
self.video_writer = None
|
||||
self.frame_count = 0
|
||||
self.start_time = None
|
||||
|
||||
def list_cameras(self):
|
||||
"""List all available cameras"""
|
||||
try:
|
||||
# Initialize SDK
|
||||
mvsdk.CameraSdkInit(1)
|
||||
except Exception as e:
|
||||
print(f"SDK initialization failed: {e}")
|
||||
return []
|
||||
|
||||
# Enumerate cameras
|
||||
DevList = mvsdk.CameraEnumerateDevice()
|
||||
nDev = len(DevList)
|
||||
|
||||
if nDev < 1:
|
||||
print("No cameras found!")
|
||||
return []
|
||||
|
||||
print(f"\nFound {nDev} camera(s):")
|
||||
cameras = []
|
||||
for i, DevInfo in enumerate(DevList):
|
||||
camera_info = {"index": i, "name": DevInfo.GetFriendlyName(), "port_type": DevInfo.GetPortType(), "serial": DevInfo.GetSn(), "dev_info": DevInfo}
|
||||
cameras.append(camera_info)
|
||||
print(f"{i}: {camera_info['name']} ({camera_info['port_type']}) - SN: {camera_info['serial']}")
|
||||
|
||||
return cameras
|
||||
|
||||
def initialize_camera(self, dev_info, exposure_ms=1.0, gain=3.5, target_fps=3.0):
|
||||
"""Initialize camera with specified settings"""
|
||||
self.target_fps = target_fps
|
||||
try:
|
||||
# Initialize camera
|
||||
self.hCamera = mvsdk.CameraInit(dev_info, -1, -1)
|
||||
print(f"Camera initialized successfully")
|
||||
|
||||
# Get camera capabilities
|
||||
self.cap = mvsdk.CameraGetCapability(self.hCamera)
|
||||
self.monoCamera = self.cap.sIspCapacity.bMonoSensor != 0
|
||||
print(f"Camera type: {'Monochrome' if self.monoCamera else 'Color'}")
|
||||
|
||||
# Set output format
|
||||
if self.monoCamera:
|
||||
mvsdk.CameraSetIspOutFormat(self.hCamera, mvsdk.CAMERA_MEDIA_TYPE_MONO8)
|
||||
else:
|
||||
mvsdk.CameraSetIspOutFormat(self.hCamera, mvsdk.CAMERA_MEDIA_TYPE_BGR8)
|
||||
|
||||
# Calculate RGB buffer size
|
||||
FrameBufferSize = self.cap.sResolutionRange.iWidthMax * self.cap.sResolutionRange.iHeightMax * (1 if self.monoCamera else 3)
|
||||
|
||||
# Allocate RGB buffer
|
||||
self.pFrameBuffer = mvsdk.CameraAlignMalloc(FrameBufferSize, 16)
|
||||
|
||||
# Set camera to continuous capture mode
|
||||
mvsdk.CameraSetTriggerMode(self.hCamera, 0)
|
||||
|
||||
# Set manual exposure
|
||||
mvsdk.CameraSetAeState(self.hCamera, 0) # Disable auto exposure
|
||||
exposure_time_us = exposure_ms * 1000 # Convert ms to microseconds
|
||||
|
||||
# Get exposure range and clamp value
|
||||
try:
|
||||
exp_min, exp_max, exp_step = mvsdk.CameraGetExposureTimeRange(self.hCamera)
|
||||
exposure_time_us = max(exp_min, min(exp_max, exposure_time_us))
|
||||
print(f"Exposure range: {exp_min:.1f} - {exp_max:.1f} μs")
|
||||
except Exception as e:
|
||||
print(f"Could not get exposure range: {e}")
|
||||
|
||||
mvsdk.CameraSetExposureTime(self.hCamera, exposure_time_us)
|
||||
print(f"Set exposure time: {exposure_time_us/1000:.1f}ms")
|
||||
|
||||
# Set analog gain
|
||||
try:
|
||||
gain_min, gain_max, gain_step = mvsdk.CameraGetAnalogGainXRange(self.hCamera)
|
||||
gain = max(gain_min, min(gain_max, gain))
|
||||
mvsdk.CameraSetAnalogGainX(self.hCamera, gain)
|
||||
print(f"Set analog gain: {gain:.2f}x (range: {gain_min:.2f} - {gain_max:.2f})")
|
||||
except Exception as e:
|
||||
print(f"Could not set analog gain: {e}")
|
||||
|
||||
# Start camera
|
||||
mvsdk.CameraPlay(self.hCamera)
|
||||
print("Camera started successfully")
|
||||
|
||||
return True
|
||||
|
||||
except mvsdk.CameraException as e:
|
||||
print(f"Camera initialization failed({e.error_code}): {e.message}")
|
||||
return False
|
||||
|
||||
def start_recording(self, output_filename=None):
|
||||
"""Start video recording"""
|
||||
if self.recording:
|
||||
print("Already recording!")
|
||||
return False
|
||||
|
||||
if not output_filename:
|
||||
timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
|
||||
output_filename = f"video_{timestamp}.avi"
|
||||
|
||||
# Create output directory if it doesn't exist
|
||||
os.makedirs(os.path.dirname(output_filename) if os.path.dirname(output_filename) else ".", exist_ok=True)
|
||||
|
||||
# Get first frame to determine video properties
|
||||
try:
|
||||
pRawData, FrameHead = mvsdk.CameraGetImageBuffer(self.hCamera, 2000)
|
||||
mvsdk.CameraImageProcess(self.hCamera, pRawData, self.pFrameBuffer, FrameHead)
|
||||
mvsdk.CameraReleaseImageBuffer(self.hCamera, pRawData)
|
||||
|
||||
# Handle Windows frame flipping
|
||||
if platform.system() == "Windows":
|
||||
mvsdk.CameraFlipFrameBuffer(self.pFrameBuffer, FrameHead, 1)
|
||||
|
||||
# Convert to numpy array
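# Note: np.frombuffer below returns a zero-copy view over the SDK-owned pFrameBuffer,
# so the frame must be consumed (reshaped/written out) before the next CameraImageProcess call reuses that buffer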
|
||||
frame_data = (mvsdk.c_ubyte * FrameHead.uBytes).from_address(self.pFrameBuffer)
|
||||
frame = np.frombuffer(frame_data, dtype=np.uint8)
|
||||
|
||||
if self.monoCamera:
|
||||
frame = frame.reshape((FrameHead.iHeight, FrameHead.iWidth))
|
||||
# Convert mono to BGR for video writer
|
||||
frame = cv2.cvtColor(frame, cv2.COLOR_GRAY2BGR)
|
||||
else:
|
||||
frame = frame.reshape((FrameHead.iHeight, FrameHead.iWidth, 3))
|
||||
|
||||
except mvsdk.CameraException as e:
|
||||
print(f"Failed to get initial frame: {e.message}")
|
||||
return False
|
||||
|
||||
# Initialize video writer
|
||||
fourcc = cv2.VideoWriter_fourcc(*"XVID")
|
||||
fps = getattr(self, "target_fps", 3.0) # Use configured FPS or default to 3.0
|
||||
frame_size = (FrameHead.iWidth, FrameHead.iHeight)
|
||||
|
||||
self.video_writer = cv2.VideoWriter(output_filename, fourcc, fps, frame_size)
|
||||
|
||||
if not self.video_writer.isOpened():
|
||||
print(f"Failed to open video writer for {output_filename}")
|
||||
return False
|
||||
|
||||
self.recording = True
|
||||
self.frame_count = 0
|
||||
self.start_time = time.time()
|
||||
self.output_filename = output_filename
|
||||
|
||||
print(f"Started recording to: {output_filename}")
|
||||
print(f"Frame size: {frame_size}, FPS: {fps}")
|
||||
print("Press 'q' to stop recording...")
|
||||
|
||||
return True
|
||||
|
||||
def stop_recording(self):
|
||||
"""Stop video recording"""
|
||||
if not self.recording:
|
||||
print("Not currently recording!")
|
||||
return False
|
||||
|
||||
self.recording = False
|
||||
|
||||
if self.video_writer:
|
||||
self.video_writer.release()
|
||||
self.video_writer = None
|
||||
|
||||
duration = time.time() - self.start_time if self.start_time else 0
|
||||
avg_fps = self.frame_count / duration if duration > 0 else 0
|
||||
|
||||
print(f"\nRecording stopped!")
|
||||
print(f"Saved: {self.output_filename}")
|
||||
print(f"Frames recorded: {self.frame_count}")
|
||||
print(f"Duration: {duration:.1f} seconds")
|
||||
print(f"Average FPS: {avg_fps:.1f}")
|
||||
|
||||
return True
|
||||
|
||||
def record_loop(self):
|
||||
"""Main recording loop"""
|
||||
if not self.recording:
|
||||
return
|
||||
|
||||
print("Recording... Press 'q' in the preview window to stop")
|
||||
|
||||
while self.recording:
|
||||
try:
|
||||
# Get frame from camera
|
||||
pRawData, FrameHead = mvsdk.CameraGetImageBuffer(self.hCamera, 200)
|
||||
mvsdk.CameraImageProcess(self.hCamera, pRawData, self.pFrameBuffer, FrameHead)
|
||||
mvsdk.CameraReleaseImageBuffer(self.hCamera, pRawData)
|
||||
|
||||
# Handle Windows frame flipping
|
||||
if platform.system() == "Windows":
|
||||
mvsdk.CameraFlipFrameBuffer(self.pFrameBuffer, FrameHead, 1)
|
||||
|
||||
# Convert to numpy array
|
||||
frame_data = (mvsdk.c_ubyte * FrameHead.uBytes).from_address(self.pFrameBuffer)
|
||||
frame = np.frombuffer(frame_data, dtype=np.uint8)
|
||||
|
||||
if self.monoCamera:
|
||||
frame = frame.reshape((FrameHead.iHeight, FrameHead.iWidth))
|
||||
frame_bgr = cv2.cvtColor(frame, cv2.COLOR_GRAY2BGR)
|
||||
else:
|
||||
frame = frame.reshape((FrameHead.iHeight, FrameHead.iWidth, 3))
|
||||
frame_bgr = frame
|
||||
|
||||
# Write every frame to video (FPS is controlled by video file playback rate)
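# (The per-iteration sleep below, 1.0 / target_fps, paces acquisition so the frame count roughly matches target_fps x elapsed seconds.)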
|
||||
if self.video_writer and self.recording:
|
||||
self.video_writer.write(frame_bgr)
|
||||
self.frame_count += 1
|
||||
|
||||
# Show preview (resized for display)
|
||||
display_frame = cv2.resize(frame_bgr, (640, 480), interpolation=cv2.INTER_LINEAR)
|
||||
|
||||
# Add small delay to control capture rate based on target FPS
|
||||
target_fps = getattr(self, "target_fps", 3.0)
|
||||
time.sleep(1.0 / target_fps)
|
||||
|
||||
# Add recording indicator
|
||||
cv2.putText(display_frame, f"REC - Frame: {self.frame_count}", (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)
|
||||
|
||||
cv2.imshow("Camera Recording - Press 'q' to stop", display_frame)
|
||||
|
||||
# Check for quit key
|
||||
if cv2.waitKey(1) & 0xFF == ord("q"):
|
||||
self.stop_recording()
|
||||
break
|
||||
|
||||
except mvsdk.CameraException as e:
|
||||
if e.error_code != mvsdk.CAMERA_STATUS_TIME_OUT:
|
||||
print(f"Camera error: {e.message}")
|
||||
break
|
||||
|
||||
def cleanup(self):
|
||||
"""Clean up resources"""
|
||||
if self.recording:
|
||||
self.stop_recording()
|
||||
|
||||
if self.video_writer:
|
||||
self.video_writer.release()
|
||||
|
||||
if self.hCamera > 0:
|
||||
mvsdk.CameraUnInit(self.hCamera)
|
||||
self.hCamera = 0
|
||||
|
||||
if self.pFrameBuffer:
|
||||
mvsdk.CameraAlignFree(self.pFrameBuffer)
|
||||
self.pFrameBuffer = 0
|
||||
|
||||
cv2.destroyAllWindows()
|
||||
|
||||
|
||||
def interactive_menu():
|
||||
"""Interactive menu for camera operations"""
|
||||
recorder = CameraVideoRecorder()
|
||||
|
||||
try:
|
||||
# List available cameras
|
||||
cameras = recorder.list_cameras()
|
||||
if not cameras:
|
||||
return
|
||||
|
||||
# Select camera
|
||||
if len(cameras) == 1:
|
||||
selected_camera = cameras[0]
|
||||
print(f"\nUsing camera: {selected_camera['name']}")
|
||||
else:
|
||||
while True:
|
||||
try:
|
||||
choice = int(input(f"\nSelect camera (0-{len(cameras)-1}): "))
|
||||
if 0 <= choice < len(cameras):
|
||||
selected_camera = cameras[choice]
|
||||
break
|
||||
else:
|
||||
print("Invalid selection!")
|
||||
except ValueError:
|
||||
print("Please enter a valid number!")
|
||||
|
||||
# Get camera settings from user
|
||||
print(f"\nCamera Settings:")
|
||||
try:
|
||||
exposure = float(input("Enter exposure time in ms (default 1.0): ") or "1.0")
|
||||
gain = float(input("Enter gain value (default 3.5): ") or "3.5")
|
||||
fps = float(input("Enter target FPS (default 3.0): ") or "3.0")
|
||||
except ValueError:
|
||||
print("Using default values: exposure=1.0ms, gain=3.5x, fps=3.0")
|
||||
exposure, gain, fps = 1.0, 3.5, 3.0
|
||||
|
||||
# Initialize camera with specified settings
|
||||
print(f"\nInitializing camera with:")
|
||||
print(f"- Exposure: {exposure}ms")
|
||||
print(f"- Gain: {gain}x")
|
||||
print(f"- Target FPS: {fps}")
|
||||
|
||||
if not recorder.initialize_camera(selected_camera["dev_info"], exposure_ms=exposure, gain=gain, target_fps=fps):
|
||||
return
|
||||
|
||||
# Menu loop
|
||||
while True:
|
||||
print(f"\n{'='*50}")
|
||||
print("Camera Video Recorder Menu")
|
||||
print(f"{'='*50}")
|
||||
print("1. Start Recording")
|
||||
print("2. List Camera Info")
|
||||
print("3. Test Camera (Live Preview)")
|
||||
print("4. Exit")
|
||||
|
||||
try:
|
||||
choice = input("\nSelect option (1-4): ").strip()
|
||||
|
||||
if choice == "1":
|
||||
# Start recording
|
||||
timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
|
||||
output_file = f"videos/camera_recording_{timestamp}.avi"
|
||||
|
||||
# Create videos directory
|
||||
os.makedirs("videos", exist_ok=True)
|
||||
|
||||
if recorder.start_recording(output_file):
|
||||
recorder.record_loop()
|
||||
|
||||
elif choice == "2":
|
||||
# Show camera info
|
||||
print(f"\nCamera Information:")
|
||||
print(f"Name: {selected_camera['name']}")
|
||||
print(f"Port Type: {selected_camera['port_type']}")
|
||||
print(f"Serial Number: {selected_camera['serial']}")
|
||||
print(f"Type: {'Monochrome' if recorder.monoCamera else 'Color'}")
|
||||
|
||||
elif choice == "3":
|
||||
# Live preview
|
||||
print("\nLive Preview - Press 'q' to stop")
|
||||
preview_loop(recorder)
|
||||
|
||||
elif choice == "4":
|
||||
print("Exiting...")
|
||||
break
|
||||
|
||||
else:
|
||||
print("Invalid option! Please select 1-4.")
|
||||
|
||||
except KeyboardInterrupt:
|
||||
print("\nReturning to menu...")
|
||||
continue
|
||||
|
||||
except KeyboardInterrupt:
|
||||
print("\nInterrupted by user")
|
||||
except Exception as e:
|
||||
print(f"Error: {e}")
|
||||
import traceback
|
||||
|
||||
traceback.print_exc()
|
||||
finally:
|
||||
recorder.cleanup()
|
||||
print("Cleanup completed")
|
||||
|
||||
|
||||
def preview_loop(recorder):
|
||||
"""Live preview without recording"""
|
||||
print("Live preview mode - Press 'q' to return to menu")
|
||||
|
||||
while True:
|
||||
try:
|
||||
# Get frame from camera
|
||||
pRawData, FrameHead = mvsdk.CameraGetImageBuffer(recorder.hCamera, 200)
|
||||
mvsdk.CameraImageProcess(recorder.hCamera, pRawData, recorder.pFrameBuffer, FrameHead)
|
||||
mvsdk.CameraReleaseImageBuffer(recorder.hCamera, pRawData)
|
||||
|
||||
# Handle Windows frame flipping
|
||||
if platform.system() == "Windows":
|
||||
mvsdk.CameraFlipFrameBuffer(recorder.pFrameBuffer, FrameHead, 1)
|
||||
|
||||
# Convert to numpy array
|
||||
frame_data = (mvsdk.c_ubyte * FrameHead.uBytes).from_address(recorder.pFrameBuffer)
|
||||
frame = np.frombuffer(frame_data, dtype=np.uint8)
|
||||
|
||||
if recorder.monoCamera:
|
||||
frame = frame.reshape((FrameHead.iHeight, FrameHead.iWidth))
|
||||
frame_bgr = cv2.cvtColor(frame, cv2.COLOR_GRAY2BGR)
|
||||
else:
|
||||
frame = frame.reshape((FrameHead.iHeight, FrameHead.iWidth, 3))
|
||||
frame_bgr = frame
|
||||
|
||||
# Show preview (resized for display)
|
||||
display_frame = cv2.resize(frame_bgr, (640, 480), interpolation=cv2.INTER_LINEAR)
|
||||
|
||||
# Add info overlay
|
||||
cv2.putText(display_frame, f"PREVIEW - {FrameHead.iWidth}x{FrameHead.iHeight}", (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 0), 2)
|
||||
cv2.putText(display_frame, "Press 'q' to return to menu", (10, display_frame.shape[0] - 20), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255), 1)
|
||||
|
||||
cv2.imshow("Camera Preview", display_frame)
|
||||
|
||||
# Check for quit key
|
||||
if cv2.waitKey(1) & 0xFF == ord("q"):
|
||||
cv2.destroyWindow("Camera Preview")
|
||||
break
|
||||
|
||||
except mvsdk.CameraException as e:
|
||||
if e.error_code != mvsdk.CAMERA_STATUS_TIME_OUT:
|
||||
print(f"Camera error: {e.message}")
|
||||
break
|
||||
|
||||
|
||||
def main():
|
||||
print("Camera Video Recorder")
|
||||
print("====================")
|
||||
print("This script allows you to:")
|
||||
print("- List all available cameras")
|
||||
print("- Record videos with custom exposure (1ms), gain (3.5x), and FPS (3.0) settings")
|
||||
print("- Save videos with timestamps")
|
||||
print("- Stop recording anytime with 'q' key")
|
||||
print()
|
||||
|
||||
interactive_menu()
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
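
# Example of driving the recorder from another script instead of the interactive menu
# (a sketch; assumes at least one camera is connected and the defaults above are acceptable):
#
#   recorder = CameraVideoRecorder()
#   cameras = recorder.list_cameras()
#   if cameras and recorder.initialize_camera(cameras[0]["dev_info"], exposure_ms=1.0, gain=3.5, target_fps=3.0):
#       recorder.start_recording("videos/scripted_test.avi")
#       recorder.record_loop()  # still opens a preview window; press 'q' to stop
#   recorder.cleanup()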
|
||||
426
old tests/exposure test.ipynb
Normal file
@@ -0,0 +1,426 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 25,
|
||||
"id": "ba958c88",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# coding=utf-8\n",
|
||||
"\"\"\"\n",
|
||||
"Test script to help find optimal exposure settings for your GigE camera.\n",
|
||||
"This script captures a single test image with different exposure settings.\n",
|
||||
"\"\"\"\n",
|
||||
"import sys\n",
|
||||
"\n",
|
||||
"sys.path.append(\"./python demo\")\n",
|
||||
"import os\n",
|
||||
"import mvsdk\n",
|
||||
"import numpy as np\n",
|
||||
"import cv2\n",
|
||||
"import platform\n",
|
||||
"from datetime import datetime\n",
|
||||
"\n",
|
||||
"# Add the python demo directory to path\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 26,
|
||||
"id": "23f1dc49",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"def test_exposure_settings():\n",
|
||||
" \"\"\"\n",
|
||||
" Test different exposure settings to find optimal values\n",
|
||||
" \"\"\"\n",
|
||||
" # Initialize SDK\n",
|
||||
" try:\n",
|
||||
" mvsdk.CameraSdkInit(1)\n",
|
||||
" print(\"SDK initialized successfully\")\n",
|
||||
" except Exception as e:\n",
|
||||
" print(f\"SDK initialization failed: {e}\")\n",
|
||||
" return False\n",
|
||||
"\n",
|
||||
" # Enumerate cameras\n",
|
||||
" DevList = mvsdk.CameraEnumerateDevice()\n",
|
||||
" nDev = len(DevList)\n",
|
||||
"\n",
|
||||
" if nDev < 1:\n",
|
||||
" print(\"No camera was found!\")\n",
|
||||
" return False\n",
|
||||
"\n",
|
||||
" print(f\"Found {nDev} camera(s):\")\n",
|
||||
" for i, DevInfo in enumerate(DevList):\n",
|
||||
" print(f\" {i}: {DevInfo.GetFriendlyName()} ({DevInfo.GetPortType()})\")\n",
|
||||
"\n",
|
||||
" # Use first camera\n",
|
||||
" DevInfo = DevList[0]\n",
|
||||
" print(f\"\\nSelected camera: {DevInfo.GetFriendlyName()}\")\n",
|
||||
"\n",
|
||||
" # Initialize camera\n",
|
||||
" try:\n",
|
||||
" hCamera = mvsdk.CameraInit(DevInfo, -1, -1)\n",
|
||||
" print(\"Camera initialized successfully\")\n",
|
||||
" except mvsdk.CameraException as e:\n",
|
||||
" print(f\"CameraInit Failed({e.error_code}): {e.message}\")\n",
|
||||
" return False\n",
|
||||
"\n",
|
||||
" try:\n",
|
||||
" # Get camera capabilities\n",
|
||||
" cap = mvsdk.CameraGetCapability(hCamera)\n",
|
||||
" monoCamera = cap.sIspCapacity.bMonoSensor != 0\n",
|
||||
" print(f\"Camera type: {'Monochrome' if monoCamera else 'Color'}\")\n",
|
||||
"\n",
|
||||
" # Get camera ranges\n",
|
||||
" try:\n",
|
||||
" exp_min, exp_max, exp_step = mvsdk.CameraGetExposureTimeRange(hCamera)\n",
|
||||
" print(f\"Exposure time range: {exp_min:.1f} - {exp_max:.1f} μs\")\n",
|
||||
"\n",
|
||||
" gain_min, gain_max, gain_step = mvsdk.CameraGetAnalogGainXRange(hCamera)\n",
|
||||
" print(f\"Analog gain range: {gain_min:.2f} - {gain_max:.2f}x\")\n",
|
||||
"\n",
|
||||
" print(\"whatever this is: \", mvsdk.CameraGetAnalogGainXRange(hCamera))\n",
|
||||
" except Exception as e:\n",
|
||||
" print(f\"Could not get camera ranges: {e}\")\n",
|
||||
" exp_min, exp_max = 100, 100000\n",
|
||||
" gain_min, gain_max = 1.0, 4.0\n",
|
||||
"\n",
|
||||
" # Set output format\n",
|
||||
" if monoCamera:\n",
|
||||
" mvsdk.CameraSetIspOutFormat(hCamera, mvsdk.CAMERA_MEDIA_TYPE_MONO8)\n",
|
||||
" else:\n",
|
||||
" mvsdk.CameraSetIspOutFormat(hCamera, mvsdk.CAMERA_MEDIA_TYPE_BGR8)\n",
|
||||
"\n",
|
||||
" # Set camera to continuous capture mode\n",
|
||||
" mvsdk.CameraSetTriggerMode(hCamera, 0)\n",
|
||||
" mvsdk.CameraSetAeState(hCamera, 0) # Disable auto exposure\n",
|
||||
"\n",
|
||||
" # Start camera\n",
|
||||
" mvsdk.CameraPlay(hCamera)\n",
|
||||
"\n",
|
||||
" # Allocate frame buffer\n",
|
||||
" FrameBufferSize = cap.sResolutionRange.iWidthMax * cap.sResolutionRange.iHeightMax * (1 if monoCamera else 3)\n",
|
||||
" pFrameBuffer = mvsdk.CameraAlignMalloc(FrameBufferSize, 16)\n",
|
||||
"\n",
|
||||
" # Create test directory\n",
|
||||
" if not os.path.exists(\"exposure_tests\"):\n",
|
||||
" os.makedirs(\"exposure_tests\")\n",
|
||||
"\n",
|
||||
" print(\"\\nTesting different exposure settings...\")\n",
|
||||
" print(\"=\" * 50)\n",
|
||||
"\n",
|
||||
" # Test different exposure times (in microseconds)\n",
|
||||
" exposure_times = [100, 200, 500, 1000, 2000, 5000, 10000, 20000] # 0.5ms to 20ms\n",
|
||||
" analog_gains = [2.5, 5.0, 10.0, 16.0] # Start with 1x gain\n",
|
||||
"\n",
|
||||
" test_count = 0\n",
|
||||
" for exp_time in exposure_times:\n",
|
||||
" for gain in analog_gains:\n",
|
||||
" # Clamp values to valid ranges\n",
|
||||
" exp_time = max(exp_min, min(exp_max, exp_time))\n",
|
||||
" gain = max(gain_min, min(gain_max, gain))\n",
|
||||
"\n",
|
||||
" print(f\"\\nTest {test_count + 1}: Exposure={exp_time/1000:.1f}ms, Gain={gain:.1f}x\")\n",
|
||||
"\n",
|
||||
" # Set camera parameters\n",
|
||||
" mvsdk.CameraSetExposureTime(hCamera, exp_time)\n",
|
||||
" try:\n",
|
||||
" mvsdk.CameraSetAnalogGainX(hCamera, gain)\n",
|
||||
" except:\n",
|
||||
" pass # Some cameras might not support this\n",
|
||||
"\n",
|
||||
" # Wait a moment for settings to take effect\n",
|
||||
" import time\n",
|
||||
"\n",
|
||||
" time.sleep(0.1)\n",
|
||||
"\n",
|
||||
" # Capture image\n",
|
||||
" try:\n",
|
||||
" pRawData, FrameHead = mvsdk.CameraGetImageBuffer(hCamera, 2000)\n",
|
||||
" mvsdk.CameraImageProcess(hCamera, pRawData, pFrameBuffer, FrameHead)\n",
|
||||
" mvsdk.CameraReleaseImageBuffer(hCamera, pRawData)\n",
|
||||
"\n",
|
||||
" # Handle Windows image flip\n",
|
||||
" if platform.system() == \"Windows\":\n",
|
||||
" mvsdk.CameraFlipFrameBuffer(pFrameBuffer, FrameHead, 1)\n",
|
||||
"\n",
|
||||
" # Convert to numpy array\n",
|
||||
" frame_data = (mvsdk.c_ubyte * FrameHead.uBytes).from_address(pFrameBuffer)\n",
|
||||
" frame = np.frombuffer(frame_data, dtype=np.uint8)\n",
|
||||
"\n",
|
||||
" if FrameHead.uiMediaType == mvsdk.CAMERA_MEDIA_TYPE_MONO8:\n",
|
||||
" frame = frame.reshape((FrameHead.iHeight, FrameHead.iWidth))\n",
|
||||
" else:\n",
|
||||
" frame = frame.reshape((FrameHead.iHeight, FrameHead.iWidth, 3))\n",
|
||||
"\n",
|
||||
" # Calculate image statistics\n",
|
||||
" mean_brightness = np.mean(frame)\n",
|
||||
" max_brightness = np.max(frame)\n",
|
||||
"\n",
|
||||
" # Save image\n",
|
||||
" filename = f\"exposure_tests/test_{test_count+1:02d}_exp{exp_time/1000:.1f}ms_gain{gain:.1f}x.jpg\"\n",
|
||||
" cv2.imwrite(filename, frame)\n",
|
||||
"\n",
|
||||
" # Provide feedback\n",
|
||||
" status = \"\"\n",
|
||||
" if mean_brightness < 50:\n",
|
||||
" status = \"TOO DARK\"\n",
|
||||
" elif mean_brightness > 200:\n",
|
||||
" status = \"TOO BRIGHT\"\n",
|
||||
" elif max_brightness >= 255:\n",
|
||||
" status = \"OVEREXPOSED\"\n",
|
||||
" else:\n",
|
||||
" status = \"GOOD\"\n",
|
||||
"\n",
|
||||
" print(f\" → Saved: {filename}\")\n",
|
||||
" print(f\" → Brightness: mean={mean_brightness:.1f}, max={max_brightness:.1f} [{status}]\")\n",
|
||||
"\n",
|
||||
" test_count += 1\n",
|
||||
"\n",
|
||||
" except mvsdk.CameraException as e:\n",
|
||||
" print(f\" → Failed to capture: {e.message}\")\n",
|
||||
"\n",
|
||||
" print(f\"\\nCompleted {test_count} test captures!\")\n",
|
||||
" print(\"Check the 'exposure_tests' directory to see the results.\")\n",
|
||||
" print(\"\\nRecommendations:\")\n",
|
||||
" print(\"- Look for images marked as 'GOOD' - these have optimal exposure\")\n",
|
||||
" print(\"- If all images are 'TOO BRIGHT', try lower exposure times or gains\")\n",
|
||||
" print(\"- If all images are 'TOO DARK', try higher exposure times or gains\")\n",
|
||||
" print(\"- Avoid 'OVEREXPOSED' images as they have clipped highlights\")\n",
|
||||
"\n",
|
||||
" # Cleanup\n",
|
||||
" mvsdk.CameraAlignFree(pFrameBuffer)\n",
|
||||
"\n",
|
||||
" finally:\n",
|
||||
" # Close camera\n",
|
||||
" mvsdk.CameraUnInit(hCamera)\n",
|
||||
" print(\"\\nCamera closed\")\n",
|
||||
"\n",
|
||||
" return True"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 27,
|
||||
"id": "2891b5bf",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"GigE Camera Exposure Test Script\n",
|
||||
"========================================\n",
|
||||
"This script will test different exposure settings and save sample images.\n",
|
||||
"Use this to find the optimal settings for your lighting conditions.\n",
|
||||
"\n",
|
||||
"SDK initialized successfully\n",
|
||||
"Found 2 camera(s):\n",
|
||||
" 0: Blower-Yield-Cam (NET-100M-192.168.1.204)\n",
|
||||
" 1: Cracker-Cam (NET-1000M-192.168.1.246)\n",
|
||||
"\n",
|
||||
"Selected camera: Blower-Yield-Cam\n",
|
||||
"Camera initialized successfully\n",
|
||||
"Camera type: Color\n",
|
||||
"Exposure time range: 8.0 - 1048568.0 μs\n",
|
||||
"Analog gain range: 2.50 - 16.50x\n",
|
||||
"whatever this is: (2.5, 16.5, 0.5)\n",
|
||||
"\n",
|
||||
"Testing different exposure settings...\n",
|
||||
"==================================================\n",
|
||||
"\n",
|
||||
"Test 1: Exposure=0.1ms, Gain=2.5x\n",
|
||||
" → Saved: exposure_tests/test_01_exp0.1ms_gain2.5x.jpg\n",
|
||||
" → Brightness: mean=94.1, max=255.0 [OVEREXPOSED]\n",
|
||||
"\n",
|
||||
"Test 2: Exposure=0.1ms, Gain=5.0x\n",
|
||||
" → Saved: exposure_tests/test_02_exp0.1ms_gain5.0x.jpg\n",
|
||||
" → Brightness: mean=13.7, max=173.0 [TOO DARK]\n",
|
||||
"\n",
|
||||
"Test 3: Exposure=0.1ms, Gain=10.0x\n",
|
||||
" → Saved: exposure_tests/test_03_exp0.1ms_gain10.0x.jpg\n",
|
||||
" → Brightness: mean=14.1, max=255.0 [TOO DARK]\n",
|
||||
"\n",
|
||||
"Test 4: Exposure=0.1ms, Gain=16.0x\n",
|
||||
" → Saved: exposure_tests/test_04_exp0.1ms_gain16.0x.jpg\n",
|
||||
" → Brightness: mean=18.2, max=255.0 [TOO DARK]\n",
|
||||
"\n",
|
||||
"Test 5: Exposure=0.2ms, Gain=2.5x\n",
|
||||
" → Saved: exposure_tests/test_05_exp0.2ms_gain2.5x.jpg\n",
|
||||
" → Brightness: mean=22.1, max=255.0 [TOO DARK]\n",
|
||||
"\n",
|
||||
"Test 6: Exposure=0.2ms, Gain=5.0x\n",
|
||||
" → Saved: exposure_tests/test_06_exp0.2ms_gain5.0x.jpg\n",
|
||||
" → Brightness: mean=19.5, max=255.0 [TOO DARK]\n",
|
||||
"\n",
|
||||
"Test 7: Exposure=0.2ms, Gain=10.0x\n",
|
||||
" → Saved: exposure_tests/test_07_exp0.2ms_gain10.0x.jpg\n",
|
||||
" → Brightness: mean=25.3, max=255.0 [TOO DARK]\n",
|
||||
"\n",
|
||||
"Test 8: Exposure=0.2ms, Gain=16.0x\n",
|
||||
" → Saved: exposure_tests/test_08_exp0.2ms_gain16.0x.jpg\n",
|
||||
" → Brightness: mean=36.6, max=255.0 [TOO DARK]\n",
|
||||
"\n",
|
||||
"Test 9: Exposure=0.5ms, Gain=2.5x\n",
|
||||
" → Saved: exposure_tests/test_09_exp0.5ms_gain2.5x.jpg\n",
|
||||
" → Brightness: mean=55.8, max=255.0 [OVEREXPOSED]\n",
|
||||
"\n",
|
||||
"Test 10: Exposure=0.5ms, Gain=5.0x\n",
|
||||
" → Saved: exposure_tests/test_10_exp0.5ms_gain5.0x.jpg\n",
|
||||
" → Brightness: mean=38.5, max=255.0 [TOO DARK]\n",
|
||||
"\n",
|
||||
"Test 11: Exposure=0.5ms, Gain=10.0x\n",
|
||||
" → Saved: exposure_tests/test_11_exp0.5ms_gain10.0x.jpg\n",
|
||||
" → Brightness: mean=60.2, max=255.0 [OVEREXPOSED]\n",
|
||||
"\n",
|
||||
"Test 12: Exposure=0.5ms, Gain=16.0x\n",
|
||||
" → Saved: exposure_tests/test_12_exp0.5ms_gain16.0x.jpg\n",
|
||||
" → Brightness: mean=99.3, max=255.0 [OVEREXPOSED]\n",
|
||||
"\n",
|
||||
"Test 13: Exposure=1.0ms, Gain=2.5x\n",
|
||||
" → Saved: exposure_tests/test_13_exp1.0ms_gain2.5x.jpg\n",
|
||||
" → Brightness: mean=121.1, max=255.0 [OVEREXPOSED]\n",
|
||||
"\n",
|
||||
"Test 14: Exposure=1.0ms, Gain=5.0x\n",
|
||||
" → Saved: exposure_tests/test_14_exp1.0ms_gain5.0x.jpg\n",
|
||||
" → Brightness: mean=68.8, max=255.0 [OVEREXPOSED]\n",
|
||||
"\n",
|
||||
"Test 15: Exposure=1.0ms, Gain=10.0x\n",
|
||||
" → Saved: exposure_tests/test_15_exp1.0ms_gain10.0x.jpg\n",
|
||||
" → Brightness: mean=109.6, max=255.0 [OVEREXPOSED]\n",
|
||||
"\n",
|
||||
"Test 16: Exposure=1.0ms, Gain=16.0x\n",
|
||||
" → Saved: exposure_tests/test_16_exp1.0ms_gain16.0x.jpg\n",
|
||||
" → Brightness: mean=148.7, max=255.0 [OVEREXPOSED]\n",
|
||||
"\n",
|
||||
"Test 17: Exposure=2.0ms, Gain=2.5x\n",
|
||||
" → Saved: exposure_tests/test_17_exp2.0ms_gain2.5x.jpg\n",
|
||||
" → Brightness: mean=171.9, max=255.0 [OVEREXPOSED]\n",
|
||||
"\n",
|
||||
"Test 18: Exposure=2.0ms, Gain=5.0x\n",
|
||||
" → Saved: exposure_tests/test_18_exp2.0ms_gain5.0x.jpg\n",
|
||||
" → Brightness: mean=117.9, max=255.0 [OVEREXPOSED]\n",
|
||||
"\n",
|
||||
"Test 19: Exposure=2.0ms, Gain=10.0x\n",
|
||||
" → Saved: exposure_tests/test_19_exp2.0ms_gain10.0x.jpg\n",
|
||||
" → Brightness: mean=159.0, max=255.0 [OVEREXPOSED]\n",
|
||||
"\n",
|
||||
"Test 20: Exposure=2.0ms, Gain=16.0x\n",
|
||||
" → Saved: exposure_tests/test_20_exp2.0ms_gain16.0x.jpg\n",
|
||||
" → Brightness: mean=195.7, max=255.0 [OVEREXPOSED]\n",
|
||||
"\n",
|
||||
"Test 21: Exposure=5.0ms, Gain=2.5x\n",
|
||||
" → Saved: exposure_tests/test_21_exp5.0ms_gain2.5x.jpg\n",
|
||||
" → Brightness: mean=214.6, max=255.0 [TOO BRIGHT]\n",
|
||||
"\n",
|
||||
"Test 22: Exposure=5.0ms, Gain=5.0x\n",
|
||||
" → Saved: exposure_tests/test_22_exp5.0ms_gain5.0x.jpg\n",
|
||||
" → Brightness: mean=180.2, max=255.0 [OVEREXPOSED]\n",
|
||||
"\n",
|
||||
"Test 23: Exposure=5.0ms, Gain=10.0x\n",
|
||||
" → Saved: exposure_tests/test_23_exp5.0ms_gain10.0x.jpg\n",
|
||||
" → Brightness: mean=214.6, max=255.0 [TOO BRIGHT]\n",
|
||||
"\n",
|
||||
"Test 24: Exposure=5.0ms, Gain=16.0x\n",
|
||||
" → Saved: exposure_tests/test_24_exp5.0ms_gain16.0x.jpg\n",
|
||||
" → Brightness: mean=239.6, max=255.0 [TOO BRIGHT]\n",
|
||||
"\n",
|
||||
"Test 25: Exposure=10.0ms, Gain=2.5x\n",
|
||||
" → Saved: exposure_tests/test_25_exp10.0ms_gain2.5x.jpg\n",
|
||||
" → Brightness: mean=247.5, max=255.0 [TOO BRIGHT]\n",
|
||||
"\n",
|
||||
"Test 26: Exposure=10.0ms, Gain=5.0x\n",
|
||||
" → Saved: exposure_tests/test_26_exp10.0ms_gain5.0x.jpg\n",
|
||||
" → Brightness: mean=252.4, max=255.0 [TOO BRIGHT]\n",
|
||||
"\n",
|
||||
"Test 27: Exposure=10.0ms, Gain=10.0x\n",
|
||||
" → Saved: exposure_tests/test_27_exp10.0ms_gain10.0x.jpg\n",
|
||||
" → Brightness: mean=218.9, max=255.0 [TOO BRIGHT]\n",
|
||||
"\n",
|
||||
"Test 28: Exposure=10.0ms, Gain=16.0x\n",
|
||||
" → Saved: exposure_tests/test_28_exp10.0ms_gain16.0x.jpg\n",
|
||||
" → Brightness: mean=250.8, max=255.0 [TOO BRIGHT]\n",
|
||||
"\n",
|
||||
"Test 29: Exposure=20.0ms, Gain=2.5x\n",
|
||||
" → Saved: exposure_tests/test_29_exp20.0ms_gain2.5x.jpg\n",
|
||||
" → Brightness: mean=252.4, max=255.0 [TOO BRIGHT]\n",
|
||||
"\n",
|
||||
"Test 30: Exposure=20.0ms, Gain=5.0x\n",
|
||||
" → Saved: exposure_tests/test_30_exp20.0ms_gain5.0x.jpg\n",
|
||||
" → Brightness: mean=244.4, max=255.0 [TOO BRIGHT]\n",
|
||||
"\n",
|
||||
"Test 31: Exposure=20.0ms, Gain=10.0x\n",
|
||||
" → Saved: exposure_tests/test_31_exp20.0ms_gain10.0x.jpg\n",
|
||||
" → Brightness: mean=251.5, max=255.0 [TOO BRIGHT]\n",
|
||||
"\n",
|
||||
"Test 32: Exposure=20.0ms, Gain=16.0x\n",
|
||||
" → Saved: exposure_tests/test_32_exp20.0ms_gain16.0x.jpg\n",
|
||||
" → Brightness: mean=253.4, max=255.0 [TOO BRIGHT]\n",
|
||||
"\n",
|
||||
"Completed 32 test captures!\n",
|
||||
"Check the 'exposure_tests' directory to see the results.\n",
|
||||
"\n",
|
||||
"Recommendations:\n",
|
||||
"- Look for images marked as 'GOOD' - these have optimal exposure\n",
|
||||
"- If all images are 'TOO BRIGHT', try lower exposure times or gains\n",
|
||||
"- If all images are 'TOO DARK', try higher exposure times or gains\n",
|
||||
"- Avoid 'OVEREXPOSED' images as they have clipped highlights\n",
|
||||
"\n",
|
||||
"Camera closed\n",
|
||||
"\n",
|
||||
"Testing completed successfully!\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"\n",
|
||||
"\n",
|
||||
"if __name__ == \"__main__\":\n",
|
||||
" print(\"GigE Camera Exposure Test Script\")\n",
|
||||
" print(\"=\" * 40)\n",
|
||||
" print(\"This script will test different exposure settings and save sample images.\")\n",
|
||||
" print(\"Use this to find the optimal settings for your lighting conditions.\")\n",
|
||||
" print()\n",
|
||||
"\n",
|
||||
" success = test_exposure_settings()\n",
|
||||
"\n",
|
||||
" if success:\n",
|
||||
" print(\"\\nTesting completed successfully!\")\n",
|
||||
" else:\n",
|
||||
" print(\"\\nTesting failed!\")\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "ead8d889",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": []
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "cc_pecan",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.13.5"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
}
|
||||
385
old tests/gige_camera_advanced.ipynb
Normal file
@@ -0,0 +1,385 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Advanced GigE Camera Configuration\n",
|
||||
"\n",
|
||||
"This notebook provides advanced testing and configuration for GigE cameras.\n",
|
||||
"\n",
|
||||
"## Features:\n",
|
||||
"- Network interface detection\n",
|
||||
"- GigE camera discovery\n",
|
||||
"- Camera parameter configuration\n",
|
||||
"- Performance testing\n",
|
||||
"- Dual camera synchronization testing"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import cv2\n",
|
||||
"import numpy as np\n",
|
||||
"import matplotlib.pyplot as plt\n",
|
||||
"import subprocess\n",
|
||||
"import socket\n",
|
||||
"import threading\n",
|
||||
"import time\n",
|
||||
"from datetime import datetime\n",
|
||||
"import os\n",
|
||||
"from pathlib import Path\n",
|
||||
"import json\n",
|
||||
"\n",
|
||||
"print(\"✅ Imports successful!\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Network Interface Detection"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"def get_network_interfaces():\n",
|
||||
" \"\"\"Get network interface information\"\"\"\n",
|
||||
" try:\n",
|
||||
" result = subprocess.run(['ip', 'addr', 'show'], capture_output=True, text=True)\n",
|
||||
" print(\"🌐 Network Interfaces:\")\n",
|
||||
" print(result.stdout)\n",
|
||||
" \n",
|
||||
" # Also check for GigE specific interfaces\n",
|
||||
" result2 = subprocess.run(['ifconfig'], capture_output=True, text=True)\n",
|
||||
" if result2.returncode == 0:\n",
|
||||
" print(\"\\n📡 Interface Configuration:\")\n",
|
||||
" print(result2.stdout)\n",
|
||||
" except Exception as e:\n",
|
||||
" print(f\"❌ Error getting network info: {e}\")\n",
|
||||
"\n",
|
||||
"get_network_interfaces()"
|
||||
]
|
||||
},
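`get_network_interfaces()` shells out to `ip addr show` and `ifconfig`, which works on most Linux hosts but fails quietly where those tools are missing. As a hedged alternative, the sketch below lists interfaces and their IPv4 addresses in pure Python; it relies on `psutil`, which is not a dependency of this repo and would need to be installed separately.

```python
import socket

import psutil  # extra dependency, not used elsewhere in this repo

# Print every interface that has at least one IPv4 address assigned
for name, addrs in psutil.net_if_addrs().items():
    ipv4 = [a.address for a in addrs if a.family == socket.AF_INET]
    if ipv4:
        print(f"{name}: {', '.join(ipv4)}")
```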
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## GigE Camera Discovery"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"def discover_gige_cameras():\n",
|
||||
" \"\"\"Attempt to discover GigE cameras on the network\"\"\"\n",
|
||||
" print(\"🔍 Discovering GigE cameras...\")\n",
|
||||
" \n",
|
||||
" # Try different methods to find GigE cameras\n",
|
||||
" methods = [\n",
|
||||
" \"OpenCV with different backends\",\n",
|
||||
" \"Network scanning\",\n",
|
||||
" \"GStreamer pipeline testing\"\n",
|
||||
" ]\n",
|
||||
" \n",
|
||||
" print(\"\\n1. Testing OpenCV backends:\")\n",
|
||||
" backends = [\n",
|
||||
" (cv2.CAP_GSTREAMER, \"GStreamer\"),\n",
|
||||
" (cv2.CAP_V4L2, \"V4L2\"),\n",
|
||||
" (cv2.CAP_FFMPEG, \"FFmpeg\"),\n",
|
||||
" (cv2.CAP_ANY, \"Default\")\n",
|
||||
" ]\n",
|
||||
" \n",
|
||||
" for backend_id, backend_name in backends:\n",
|
||||
" print(f\" Testing {backend_name}...\")\n",
|
||||
" for cam_id in range(5):\n",
|
||||
" try:\n",
|
||||
" cap = cv2.VideoCapture(cam_id, backend_id)\n",
|
||||
" if cap.isOpened():\n",
|
||||
" ret, frame = cap.read()\n",
|
||||
" if ret:\n",
|
||||
" print(f\" ✅ Camera {cam_id} accessible via {backend_name}\")\n",
|
||||
" print(f\" Resolution: {frame.shape[1]}x{frame.shape[0]}\")\n",
|
||||
" cap.release()\n",
|
||||
" except Exception as e:\n",
|
||||
" pass\n",
|
||||
" \n",
|
||||
" print(\"\\n2. Testing GStreamer pipelines:\")\n",
|
||||
" # Common GigE camera GStreamer pipelines\n",
|
||||
" gstreamer_pipelines = [\n",
|
||||
" \"v4l2src device=/dev/video0 ! videoconvert ! appsink\",\n",
|
||||
" \"v4l2src device=/dev/video1 ! videoconvert ! appsink\",\n",
|
||||
" \"tcambin ! videoconvert ! appsink\", # For TIS cameras\n",
|
||||
" \"aravis ! videoconvert ! appsink\", # For Aravis-supported cameras\n",
|
||||
" ]\n",
|
||||
" \n",
|
||||
" for pipeline in gstreamer_pipelines:\n",
|
||||
" try:\n",
|
||||
" print(f\" Testing: {pipeline}\")\n",
|
||||
" cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)\n",
|
||||
" if cap.isOpened():\n",
|
||||
" ret, frame = cap.read()\n",
|
||||
" if ret:\n",
|
||||
" print(f\" ✅ Pipeline works! Frame shape: {frame.shape}\")\n",
|
||||
" else:\n",
|
||||
" print(f\" ⚠️ Pipeline opened but no frames\")\n",
|
||||
" else:\n",
|
||||
" print(f\" ❌ Pipeline failed\")\n",
|
||||
" cap.release()\n",
|
||||
" except Exception as e:\n",
|
||||
" print(f\" ❌ Error: {e}\")\n",
|
||||
"\n",
|
||||
"discover_gige_cameras()"
|
||||
]
|
||||
},
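For Aravis-managed cameras, a fuller pipeline usually pins down the device and the output format. The following is a rough sketch of opening one through OpenCV, assuming OpenCV was built with GStreamer support and the gst-aravis plugin is installed; the `camera-name` value is a placeholder to replace with the ID reported by your discovery step.

```python
import cv2

# Assumed pipeline for an Aravis-managed GigE camera; BGR caps keep OpenCV happy
pipeline = (
    'aravissrc camera-name="Vendor-Model-Serial" ! '
    "videoconvert ! video/x-raw,format=BGR ! appsink drop=true max-buffers=1"
)

cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
if cap.isOpened():
    ok, frame = cap.read()
    print("Frame shape:", frame.shape if ok else "no frame received")
    cap.release()
else:
    print("Pipeline did not open - check gst-aravis installation and the camera name")
```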
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Camera Parameter Configuration"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"def configure_camera_parameters(camera_id, backend=cv2.CAP_ANY):\n",
|
||||
" \"\"\"Configure and test camera parameters\"\"\"\n",
|
||||
" print(f\"⚙️ Configuring camera {camera_id}...\")\n",
|
||||
" \n",
|
||||
" cap = cv2.VideoCapture(camera_id, backend)\n",
|
||||
" if not cap.isOpened():\n",
|
||||
" print(f\"❌ Cannot open camera {camera_id}\")\n",
|
||||
" return None\n",
|
||||
" \n",
|
||||
" # Get current parameters\n",
|
||||
" current_params = {\n",
|
||||
" 'width': cap.get(cv2.CAP_PROP_FRAME_WIDTH),\n",
|
||||
" 'height': cap.get(cv2.CAP_PROP_FRAME_HEIGHT),\n",
|
||||
" 'fps': cap.get(cv2.CAP_PROP_FPS),\n",
|
||||
" 'brightness': cap.get(cv2.CAP_PROP_BRIGHTNESS),\n",
|
||||
" 'contrast': cap.get(cv2.CAP_PROP_CONTRAST),\n",
|
||||
" 'saturation': cap.get(cv2.CAP_PROP_SATURATION),\n",
|
||||
" 'hue': cap.get(cv2.CAP_PROP_HUE),\n",
|
||||
" 'gain': cap.get(cv2.CAP_PROP_GAIN),\n",
|
||||
" 'exposure': cap.get(cv2.CAP_PROP_EXPOSURE),\n",
|
||||
" 'auto_exposure': cap.get(cv2.CAP_PROP_AUTO_EXPOSURE),\n",
|
||||
" 'white_balance': cap.get(cv2.CAP_PROP_WHITE_BALANCE_BLUE_U),\n",
|
||||
" }\n",
|
||||
" \n",
|
||||
" print(\"📊 Current Camera Parameters:\")\n",
|
||||
" for param, value in current_params.items():\n",
|
||||
" print(f\" {param}: {value}\")\n",
|
||||
" \n",
|
||||
" # Test setting some parameters\n",
|
||||
" print(\"\\n🔧 Testing parameter changes:\")\n",
|
||||
" \n",
|
||||
" # Try to set resolution (common GigE resolutions)\n",
|
||||
" test_resolutions = [(1920, 1080), (1280, 720), (640, 480)]\n",
|
||||
" for width, height in test_resolutions:\n",
|
||||
" if cap.set(cv2.CAP_PROP_FRAME_WIDTH, width) and cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height):\n",
|
||||
" actual_width = cap.get(cv2.CAP_PROP_FRAME_WIDTH)\n",
|
||||
" actual_height = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)\n",
|
||||
" print(f\" Resolution {width}x{height}: Set to {actual_width}x{actual_height}\")\n",
|
||||
" break\n",
|
||||
" \n",
|
||||
" # Test FPS settings\n",
|
||||
" for fps in [30, 60, 120]:\n",
|
||||
" if cap.set(cv2.CAP_PROP_FPS, fps):\n",
|
||||
" actual_fps = cap.get(cv2.CAP_PROP_FPS)\n",
|
||||
" print(f\" FPS {fps}: Set to {actual_fps}\")\n",
|
||||
" break\n",
|
||||
" \n",
|
||||
" # Capture test frame with new settings\n",
|
||||
" ret, frame = cap.read()\n",
|
||||
" if ret:\n",
|
||||
" print(f\"\\n✅ Test frame captured: {frame.shape}\")\n",
|
||||
" \n",
|
||||
" # Display frame\n",
|
||||
" plt.figure(figsize=(10, 6))\n",
|
||||
" if len(frame.shape) == 3:\n",
|
||||
" plt.imshow(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))\n",
|
||||
" else:\n",
|
||||
" plt.imshow(frame, cmap='gray')\n",
|
||||
" plt.title(f\"Camera {camera_id} - Configured\")\n",
|
||||
" plt.axis('off')\n",
|
||||
" plt.show()\n",
|
||||
" \n",
|
||||
" # Save configuration and test image\n",
|
||||
" timestamp = datetime.now().strftime(\"%Y%m%d_%H%M%S\")\n",
|
||||
" \n",
|
||||
" # Save image\n",
|
||||
" img_path = f\"/storage/camera{camera_id}/configured_test_{timestamp}.jpg\"\n",
|
||||
" cv2.imwrite(img_path, frame)\n",
|
||||
" print(f\"💾 Test image saved: {img_path}\")\n",
|
||||
" \n",
|
||||
" # Save configuration\n",
|
||||
" config_path = f\"/storage/camera{camera_id}/config_{timestamp}.json\"\n",
|
||||
" with open(config_path, 'w') as f:\n",
|
||||
" json.dump(current_params, f, indent=2)\n",
|
||||
" print(f\"💾 Configuration saved: {config_path}\")\n",
|
||||
" \n",
|
||||
" cap.release()\n",
|
||||
" return current_params\n",
|
||||
"\n",
|
||||
"# Test configuration (change camera_id as needed)\n",
|
||||
"camera_to_configure = 0\n",
|
||||
"config = configure_camera_parameters(camera_to_configure)"
|
||||
]
|
||||
},
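Since `configure_camera_parameters()` dumps the `current_params` dictionary to JSON, the same mapping can be replayed later. The sketch below is a hedged illustration of reloading such a file and pushing the values back through `cap.set()`; the config path in the usage comment is hypothetical, and backends are free to ignore properties they do not support.

```python
import json

import cv2

# Map the JSON keys written by configure_camera_parameters() back to OpenCV properties
PROP_MAP = {
    "width": cv2.CAP_PROP_FRAME_WIDTH,
    "height": cv2.CAP_PROP_FRAME_HEIGHT,
    "fps": cv2.CAP_PROP_FPS,
    "brightness": cv2.CAP_PROP_BRIGHTNESS,
    "contrast": cv2.CAP_PROP_CONTRAST,
    "saturation": cv2.CAP_PROP_SATURATION,
    "hue": cv2.CAP_PROP_HUE,
    "gain": cv2.CAP_PROP_GAIN,
    "exposure": cv2.CAP_PROP_EXPOSURE,
    "auto_exposure": cv2.CAP_PROP_AUTO_EXPOSURE,
    "white_balance": cv2.CAP_PROP_WHITE_BALANCE_BLUE_U,
}

def apply_saved_config(camera_id, config_path):
    """Reopen a camera and re-apply a previously saved parameter dump."""
    with open(config_path) as f:
        params = json.load(f)
    cap = cv2.VideoCapture(camera_id)
    for name, value in params.items():
        prop = PROP_MAP.get(name)
        if prop is not None and value != -1:  # -1 often means the property was unsupported
            cap.set(prop, value)  # silently ignored by backends that lack the property
    return cap

# Hypothetical usage:
# cap = apply_saved_config(0, "/storage/camera0/config_20250722_140530.json")
```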
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Dual Camera Testing"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"def test_dual_cameras(camera1_id=0, camera2_id=1, duration=5):\n",
|
||||
" \"\"\"Test simultaneous capture from two cameras\"\"\"\n",
|
||||
" print(f\"📷📷 Testing dual camera capture (cameras {camera1_id} and {camera2_id})...\")\n",
|
||||
" \n",
|
||||
" # Open both cameras\n",
|
||||
" cap1 = cv2.VideoCapture(camera1_id)\n",
|
||||
" cap2 = cv2.VideoCapture(camera2_id)\n",
|
||||
" \n",
|
||||
" if not cap1.isOpened():\n",
|
||||
" print(f\"❌ Cannot open camera {camera1_id}\")\n",
|
||||
" return\n",
|
||||
" \n",
|
||||
" if not cap2.isOpened():\n",
|
||||
" print(f\"❌ Cannot open camera {camera2_id}\")\n",
|
||||
" cap1.release()\n",
|
||||
" return\n",
|
||||
" \n",
|
||||
" print(\"✅ Both cameras opened successfully\")\n",
|
||||
" \n",
|
||||
" # Capture test frames\n",
|
||||
" ret1, frame1 = cap1.read()\n",
|
||||
" ret2, frame2 = cap2.read()\n",
|
||||
" \n",
|
||||
" if ret1 and ret2:\n",
|
||||
" print(f\"📊 Camera {camera1_id}: {frame1.shape}\")\n",
|
||||
" print(f\"📊 Camera {camera2_id}: {frame2.shape}\")\n",
|
||||
" \n",
|
||||
" # Display both frames side by side\n",
|
||||
" fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(15, 6))\n",
|
||||
" \n",
|
||||
" if len(frame1.shape) == 3:\n",
|
||||
" ax1.imshow(cv2.cvtColor(frame1, cv2.COLOR_BGR2RGB))\n",
|
||||
" else:\n",
|
||||
" ax1.imshow(frame1, cmap='gray')\n",
|
||||
" ax1.set_title(f\"Camera {camera1_id}\")\n",
|
||||
" ax1.axis('off')\n",
|
||||
" \n",
|
||||
" if len(frame2.shape) == 3:\n",
|
||||
" ax2.imshow(cv2.cvtColor(frame2, cv2.COLOR_BGR2RGB))\n",
|
||||
" else:\n",
|
||||
" ax2.imshow(frame2, cmap='gray')\n",
|
||||
" ax2.set_title(f\"Camera {camera2_id}\")\n",
|
||||
" ax2.axis('off')\n",
|
||||
" \n",
|
||||
" plt.tight_layout()\n",
|
||||
" plt.show()\n",
|
||||
" \n",
|
||||
" # Save test images\n",
|
||||
" timestamp = datetime.now().strftime(\"%Y%m%d_%H%M%S\")\n",
|
||||
" cv2.imwrite(f\"/storage/camera1/dual_test_{timestamp}.jpg\", frame1)\n",
|
||||
" cv2.imwrite(f\"/storage/camera2/dual_test_{timestamp}.jpg\", frame2)\n",
|
||||
" print(f\"💾 Dual camera test images saved with timestamp {timestamp}\")\n",
|
||||
" \n",
|
||||
" else:\n",
|
||||
" print(\"❌ Failed to capture from one or both cameras\")\n",
|
||||
" \n",
|
||||
" # Test synchronized recording\n",
|
||||
" print(f\"\\n🎥 Testing synchronized recording for {duration} seconds...\")\n",
|
||||
" \n",
|
||||
" # Setup video writers\n",
|
||||
" timestamp = datetime.now().strftime(\"%Y%m%d_%H%M%S\")\n",
|
||||
" \n",
|
||||
" fourcc = cv2.VideoWriter_fourcc(*'mp4v')\n",
|
||||
" fps = 30\n",
|
||||
" \n",
|
||||
" if ret1:\n",
|
||||
" h1, w1 = frame1.shape[:2]\n",
|
||||
" out1 = cv2.VideoWriter(f\"/storage/camera1/sync_test_{timestamp}.mp4\", fourcc, fps, (w1, h1))\n",
|
||||
" \n",
|
||||
" if ret2:\n",
|
||||
" h2, w2 = frame2.shape[:2]\n",
|
||||
" out2 = cv2.VideoWriter(f\"/storage/camera2/sync_test_{timestamp}.mp4\", fourcc, fps, (w2, h2))\n",
|
||||
" \n",
|
||||
" # Record synchronized video\n",
|
||||
" start_time = time.time()\n",
|
||||
" frame_count = 0\n",
|
||||
" \n",
|
||||
" while time.time() - start_time < duration:\n",
|
||||
" ret1, frame1 = cap1.read()\n",
|
||||
" ret2, frame2 = cap2.read()\n",
|
||||
" \n",
|
||||
" if ret1 and ret2:\n",
|
||||
" out1.write(frame1)\n",
|
||||
" out2.write(frame2)\n",
|
||||
" frame_count += 1\n",
|
||||
" else:\n",
|
||||
" print(f\"⚠️ Frame drop at frame {frame_count}\")\n",
|
||||
" \n",
|
||||
" # Cleanup\n",
|
||||
" cap1.release()\n",
|
||||
" cap2.release()\n",
|
||||
" if 'out1' in locals():\n",
|
||||
" out1.release()\n",
|
||||
" if 'out2' in locals():\n",
|
||||
" out2.release()\n",
|
||||
" \n",
|
||||
" elapsed = time.time() - start_time\n",
|
||||
" actual_fps = frame_count / elapsed\n",
|
||||
" \n",
|
||||
" print(f\"✅ Synchronized recording complete\")\n",
|
||||
" print(f\"📊 Recorded {frame_count} frames in {elapsed:.2f}s\")\n",
|
||||
" print(f\"📊 Actual FPS: {actual_fps:.2f}\")\n",
|
||||
" print(f\"💾 Videos saved with timestamp {timestamp}\")\n",
|
||||
"\n",
|
||||
"# Test dual cameras (adjust camera IDs as needed)\n",
|
||||
"test_dual_cameras(0, 1, duration=3)"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "usda-vision-cameras",
|
||||
"language": "python",
|
||||
"name": "usda-vision-cameras"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.11.0"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 4
|
||||
}
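One caveat worth noting: `test_dual_cameras()` reads the two cameras back to back on a single thread, so paired frames can be offset by up to a frame period plus the first read's latency. Below is a rough sketch of grabbing on one thread per camera instead, assuming both devices are ordinary `cv2.VideoCapture` sources; the timestamps kept by each loop give a feel for the residual skew.

```python
import threading
import time

import cv2

def grab_loop(cam_id, latest, stop):
    """Continuously read one camera, keeping only the newest (timestamp, frame) pair."""
    cap = cv2.VideoCapture(cam_id)
    while not stop.is_set():
        ok, frame = cap.read()
        if ok:
            latest[cam_id] = (time.time(), frame)
    cap.release()

latest, stop = {}, threading.Event()
threads = [threading.Thread(target=grab_loop, args=(i, latest, stop), daemon=True) for i in (0, 1)]
for t in threads:
    t.start()

time.sleep(3)  # let both loops run briefly
stop.set()
for t in threads:
    t.join()

if 0 in latest and 1 in latest:
    skew_ms = abs(latest[0][0] - latest[1][0]) * 1000
    print(f"Latest frames are {skew_ms:.1f} ms apart")
else:
    print("One or both cameras produced no frames")
```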
|
||||
6
old tests/main.py
Normal file
@@ -0,0 +1,6 @@
|
||||
def main():
|
||||
print("Hello from usda-vision-cameras!")
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
||||
146
old tests/mqtt test.ipynb
Normal file
@@ -0,0 +1,146 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 1,
|
||||
"id": "3b92c632",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import paho.mqtt.client as mqtt\n",
|
||||
"import time\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 2,
|
||||
"id": "a6753fb1",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stderr",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"/tmp/ipykernel_2342/243927247.py:34: DeprecationWarning: Callback API version 1 is deprecated, update to latest version\n",
|
||||
" client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION1) # Use VERSION1 for broader compatibility\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Connecting to MQTT broker at 192.168.1.110:1883...\n",
|
||||
"Successfully connected to MQTT Broker!\n",
|
||||
"Subscribed to topic: 'vision/vibratory_conveyor/state'\n",
|
||||
"Listening for messages... (Press Ctrl+C to stop)\n",
|
||||
"\n",
|
||||
"--- MQTT MESSAGE RECEIVED! ---\n",
|
||||
" Topic: vision/vibratory_conveyor/state\n",
|
||||
" Payload: on\n",
|
||||
" Time: 2025-07-25 21:03:21\n",
|
||||
"------------------------------\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"--- MQTT MESSAGE RECEIVED! ---\n",
|
||||
" Topic: vision/vibratory_conveyor/state\n",
|
||||
" Payload: off\n",
|
||||
" Time: 2025-07-25 21:05:26\n",
|
||||
"------------------------------\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"Stopping MQTT listener.\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"\n",
|
||||
"# --- MQTT Broker Configuration ---\n",
|
||||
"# Your Home Assistant's IP address (where your MQTT broker is running)\n",
|
||||
"MQTT_BROKER_HOST = \"192.168.1.110\"\n",
|
||||
"MQTT_BROKER_PORT = 1883\n",
|
||||
"# IMPORTANT: Replace with your actual MQTT broker username and password if you have one set up\n",
|
||||
"# (These are NOT your Home Assistant login credentials, but for the Mosquitto add-on, if used)\n",
|
||||
"# MQTT_BROKER_USERNAME = \"pecan\" # e.g., \"homeassistant_mqtt_user\"\n",
|
||||
"# MQTT_BROKER_PASSWORD = \"whatever\" # e.g., \"SuperSecurePassword123!\"\n",
|
||||
"\n",
|
||||
"# --- Topic to Subscribe To ---\n",
|
||||
"# This MUST exactly match the topic you set in your Home Assistant automation\n",
|
||||
"MQTT_TOPIC = \"vision/vibratory_conveyor/state\" # <<<< Make sure this is correct!\n",
|
||||
"MQTT_TOPIC = \"vision/blower_separator/state\" # <<<< Make sure this is correct!\n",
|
||||
"\n",
|
||||
"# The callback for when the client receives a CONNACK response from the server.\n",
|
||||
"def on_connect(client, userdata, flags, rc):\n",
|
||||
" if rc == 0:\n",
|
||||
" print(\"Successfully connected to MQTT Broker!\")\n",
|
||||
" client.subscribe(MQTT_TOPIC)\n",
|
||||
" print(f\"Subscribed to topic: '{MQTT_TOPIC}'\")\n",
|
||||
" print(\"Listening for messages... (Press Ctrl+C to stop)\")\n",
|
||||
" else:\n",
|
||||
" print(f\"Failed to connect, return code {rc}\\n\")\n",
|
||||
"\n",
|
||||
"# The callback for when a PUBLISH message is received from the server.\n",
|
||||
"def on_message(client, userdata, msg):\n",
|
||||
" received_payload = msg.payload.decode()\n",
|
||||
" print(f\"\\n--- MQTT MESSAGE RECEIVED! ---\")\n",
|
||||
" print(f\" Topic: {msg.topic}\")\n",
|
||||
" print(f\" Payload: {received_payload}\")\n",
|
||||
" print(f\" Time: {time.strftime('%Y-%m-%d %H:%M:%S')}\")\n",
|
||||
" print(f\"------------------------------\\n\")\n",
|
||||
"\n",
|
||||
"# Create an MQTT client instance\n",
|
||||
"client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION1) # Use VERSION1 for broader compatibility\n",
|
||||
"\n",
|
||||
"# Set callback functions\n",
|
||||
"client.on_connect = on_connect\n",
|
||||
"client.on_message = on_message\n",
|
||||
"\n",
|
||||
"# Set username and password if required\n",
|
||||
"# (Only uncomment and fill these if your MQTT broker requires authentication)\n",
|
||||
"# client.username_pw_set(MQTT_BROKER_USERNAME, MQTT_BROKER_PASSWORD)\n",
|
||||
"\n",
|
||||
"try:\n",
|
||||
" # Attempt to connect to the MQTT broker\n",
|
||||
" print(f\"Connecting to MQTT broker at {MQTT_BROKER_HOST}:{MQTT_BROKER_PORT}...\")\n",
|
||||
" client.connect(MQTT_BROKER_HOST, MQTT_BROKER_PORT, 60)\n",
|
||||
"\n",
|
||||
" # Start the MQTT loop. This runs in the background and processes messages.\n",
|
||||
" client.loop_forever()\n",
|
||||
"\n",
|
||||
"except KeyboardInterrupt:\n",
|
||||
" print(\"\\nStopping MQTT listener.\")\n",
|
||||
" client.disconnect() # Disconnect gracefully\n",
|
||||
"except Exception as e:\n",
|
||||
" print(f\"An unexpected error occurred: {e}\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "56531671",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": []
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "USDA-vision-cameras",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.11.2"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
}
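A quick way to exercise this listener without waiting for the Home Assistant automation to fire is to publish a message yourself. The one-liner below uses paho-mqtt's helper module against the same broker and topic; add an `auth={'username': ..., 'password': ...}` argument if your broker requires credentials.

```python
import paho.mqtt.publish as publish

# Publish a single test payload to the topic the notebook subscribes to
publish.single(
    "vision/vibratory_conveyor/state",
    payload="on",
    hostname="192.168.1.110",
    port=1883,
)
```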
|
||||
197
old tests/test_exposure.py
Normal file
@@ -0,0 +1,197 @@
|
||||
#coding=utf-8
|
||||
"""
|
||||
Test script to help find optimal exposure settings for your GigE camera.
|
||||
This script captures a test image at each of several exposure settings so the results can be compared.
|
||||
"""
|
||||
import os
import sys
import time

# Add the python demo directory to the module search path *before* importing mvsdk,
# since mvsdk.py ships inside "python demo/" rather than on the default path
sys.path.append('./python demo')

import mvsdk
import numpy as np
import cv2
import platform
from datetime import datetime
|
||||
|
||||
def test_exposure_settings():
|
||||
"""
|
||||
Test different exposure settings to find optimal values
|
||||
"""
|
||||
# Initialize SDK
|
||||
try:
|
||||
mvsdk.CameraSdkInit(1)
|
||||
print("SDK initialized successfully")
|
||||
except Exception as e:
|
||||
print(f"SDK initialization failed: {e}")
|
||||
return False
|
||||
|
||||
# Enumerate cameras
|
||||
DevList = mvsdk.CameraEnumerateDevice()
|
||||
nDev = len(DevList)
|
||||
|
||||
if nDev < 1:
|
||||
print("No camera was found!")
|
||||
return False
|
||||
|
||||
print(f"Found {nDev} camera(s):")
|
||||
for i, DevInfo in enumerate(DevList):
|
||||
print(f" {i}: {DevInfo.GetFriendlyName()} ({DevInfo.GetPortType()})")
|
||||
|
||||
# Use first camera
|
||||
DevInfo = DevList[0]
|
||||
print(f"\nSelected camera: {DevInfo.GetFriendlyName()}")
|
||||
|
||||
# Initialize camera
|
||||
try:
|
||||
hCamera = mvsdk.CameraInit(DevInfo, -1, -1)
|
||||
print("Camera initialized successfully")
|
||||
except mvsdk.CameraException as e:
|
||||
print(f"CameraInit Failed({e.error_code}): {e.message}")
|
||||
return False
|
||||
|
||||
try:
|
||||
# Get camera capabilities
|
||||
cap = mvsdk.CameraGetCapability(hCamera)
|
||||
monoCamera = (cap.sIspCapacity.bMonoSensor != 0)
|
||||
print(f"Camera type: {'Monochrome' if monoCamera else 'Color'}")
|
||||
|
||||
# Get camera ranges
|
||||
try:
|
||||
exp_min, exp_max, exp_step = mvsdk.CameraGetExposureTimeRange(hCamera)
|
||||
print(f"Exposure time range: {exp_min:.1f} - {exp_max:.1f} μs")
|
||||
|
||||
gain_min, gain_max, gain_step = mvsdk.CameraGetAnalogGainXRange(hCamera)
|
||||
print(f"Analog gain range: {gain_min:.2f} - {gain_max:.2f}x")
|
||||
except Exception as e:
|
||||
print(f"Could not get camera ranges: {e}")
|
||||
exp_min, exp_max = 100, 100000
|
||||
gain_min, gain_max = 1.0, 4.0
|
||||
|
||||
# Set output format
|
||||
if monoCamera:
|
||||
mvsdk.CameraSetIspOutFormat(hCamera, mvsdk.CAMERA_MEDIA_TYPE_MONO8)
|
||||
else:
|
||||
mvsdk.CameraSetIspOutFormat(hCamera, mvsdk.CAMERA_MEDIA_TYPE_BGR8)
|
||||
|
||||
# Set camera to continuous capture mode
|
||||
mvsdk.CameraSetTriggerMode(hCamera, 0)
|
||||
mvsdk.CameraSetAeState(hCamera, 0) # Disable auto exposure
|
||||
|
||||
# Start camera
|
||||
mvsdk.CameraPlay(hCamera)
|
||||
|
||||
# Allocate frame buffer
|
||||
FrameBufferSize = cap.sResolutionRange.iWidthMax * cap.sResolutionRange.iHeightMax * (1 if monoCamera else 3)
|
||||
pFrameBuffer = mvsdk.CameraAlignMalloc(FrameBufferSize, 16)
|
||||
|
||||
# Create test directory
|
||||
if not os.path.exists("exposure_tests"):
|
||||
os.makedirs("exposure_tests")
|
||||
|
||||
print("\nTesting different exposure settings...")
|
||||
print("=" * 50)
|
||||
|
||||
# Test different exposure times (in microseconds)
|
||||
exposure_times = [500, 1000, 2000, 5000, 10000, 20000] # 0.5ms to 20ms
|
||||
analog_gains = [1.0] # Start with 1x gain
|
||||
|
||||
test_count = 0
|
||||
for exp_time in exposure_times:
|
||||
for gain in analog_gains:
|
||||
# Clamp values to valid ranges
|
||||
exp_time = max(exp_min, min(exp_max, exp_time))
|
||||
gain = max(gain_min, min(gain_max, gain))
|
||||
|
||||
print(f"\nTest {test_count + 1}: Exposure={exp_time/1000:.1f}ms, Gain={gain:.1f}x")
|
||||
|
||||
# Set camera parameters
|
||||
mvsdk.CameraSetExposureTime(hCamera, exp_time)
|
||||
try:
|
||||
mvsdk.CameraSetAnalogGainX(hCamera, gain)
|
||||
except:
|
||||
pass # Some cameras might not support this
|
||||
|
||||
# Wait a moment for settings to take effect
|
||||
time.sleep(0.1)
|
||||
|
||||
# Capture image
|
||||
try:
|
||||
pRawData, FrameHead = mvsdk.CameraGetImageBuffer(hCamera, 2000)
|
||||
mvsdk.CameraImageProcess(hCamera, pRawData, pFrameBuffer, FrameHead)
|
||||
mvsdk.CameraReleaseImageBuffer(hCamera, pRawData)
|
||||
|
||||
# Handle Windows image flip
|
||||
if platform.system() == "Windows":
|
||||
mvsdk.CameraFlipFrameBuffer(pFrameBuffer, FrameHead, 1)
|
||||
|
||||
# Convert to numpy array
|
||||
frame_data = (mvsdk.c_ubyte * FrameHead.uBytes).from_address(pFrameBuffer)
|
||||
frame = np.frombuffer(frame_data, dtype=np.uint8)
|
||||
|
||||
if FrameHead.uiMediaType == mvsdk.CAMERA_MEDIA_TYPE_MONO8:
|
||||
frame = frame.reshape((FrameHead.iHeight, FrameHead.iWidth))
|
||||
else:
|
||||
frame = frame.reshape((FrameHead.iHeight, FrameHead.iWidth, 3))
|
||||
|
||||
# Calculate image statistics
|
||||
mean_brightness = np.mean(frame)
|
||||
max_brightness = np.max(frame)
|
||||
|
||||
# Save image
|
||||
filename = f"exposure_tests/test_{test_count+1:02d}_exp{exp_time/1000:.1f}ms_gain{gain:.1f}x.jpg"
|
||||
cv2.imwrite(filename, frame)
|
||||
|
||||
# Provide feedback
|
||||
status = ""
|
||||
if mean_brightness < 50:
|
||||
status = "TOO DARK"
|
||||
elif mean_brightness > 200:
|
||||
status = "TOO BRIGHT"
|
||||
elif max_brightness >= 255:
|
||||
status = "OVEREXPOSED"
|
||||
else:
|
||||
status = "GOOD"
|
||||
|
||||
print(f" → Saved: {filename}")
|
||||
print(f" → Brightness: mean={mean_brightness:.1f}, max={max_brightness:.1f} [{status}]")
|
||||
|
||||
test_count += 1
|
||||
|
||||
except mvsdk.CameraException as e:
|
||||
print(f" → Failed to capture: {e.message}")
|
||||
|
||||
print(f"\nCompleted {test_count} test captures!")
|
||||
print("Check the 'exposure_tests' directory to see the results.")
|
||||
print("\nRecommendations:")
|
||||
print("- Look for images marked as 'GOOD' - these have optimal exposure")
|
||||
print("- If all images are 'TOO BRIGHT', try lower exposure times or gains")
|
||||
print("- If all images are 'TOO DARK', try higher exposure times or gains")
|
||||
print("- Avoid 'OVEREXPOSED' images as they have clipped highlights")
|
||||
|
||||
# Cleanup
|
||||
mvsdk.CameraAlignFree(pFrameBuffer)
|
||||
|
||||
finally:
|
||||
# Close camera
|
||||
mvsdk.CameraUnInit(hCamera)
|
||||
print("\nCamera closed")
|
||||
|
||||
return True
|
||||
|
||||
if __name__ == "__main__":
|
||||
print("GigE Camera Exposure Test Script")
|
||||
print("=" * 40)
|
||||
print("This script will test different exposure settings and save sample images.")
|
||||
print("Use this to find the optimal settings for your lighting conditions.")
|
||||
print()
|
||||
|
||||
success = test_exposure_settings()
|
||||
|
||||
if success:
|
||||
print("\nTesting completed successfully!")
|
||||
else:
|
||||
print("\nTesting failed!")
|
||||
|
||||
input("Press Enter to exit...")
|
||||