Add 'web/' from commit '81828f61cf893039b89d3cf1861555f31167c37d'

git-subtree-dir: web
git-subtree-mainline: 7dbb36d619
git-subtree-split: 81828f61cf
This commit is contained in:
Alireza Vaezi
2025-08-07 20:57:47 -04:00
129 changed files with 29668 additions and 0 deletions


@@ -0,0 +1,162 @@
# 🤖 Auto-Recording Setup Guide
This guide explains how to set up and test the automatic recording functionality that triggers camera recording when machines turn on/off via MQTT.
## 📋 Overview
The auto-recording feature allows cameras to automatically start recording when their associated machine turns on and stop recording when the machine turns off. This is based on MQTT messages received from the machines.
## 🔧 Setup Steps
### 1. Configure Camera Auto-Recording
1. **Access Vision System**: Navigate to the Vision System page in the dashboard
2. **Open Camera Configuration**: Click "Configure Camera" on any camera (admin access required)
3. **Enable Auto-Recording**: In the "Auto-Recording" section, check the box "Automatically start recording when machine turns on"
4. **Save Configuration**: Click "Save Changes" to apply the setting
### 2. Machine-Camera Mapping
The system uses the `machine_topic` field in camera configuration to determine which MQTT topic to monitor:
- **Camera 1** (`camera1`) → monitors `blower_separator`
- **Camera 2** (`camera2`) → monitors `vibratory_conveyor`
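The mapping above can be sketched as a simple lookup from MQTT machine topic to camera name. This is an illustrative sketch only; the actual manager resolves the mapping from each camera's `machine_topic` configuration field, and the function name here is hypothetical:

```typescript
// Illustrative lookup from machine_topic to camera name, mirroring the
// mapping documented above. In the real system this comes from camera config.
const machineTopicToCamera: Record<string, string> = {
  blower_separator: "camera1",
  vibratory_conveyor: "camera2",
};

function cameraForTopic(machineTopic: string): string | undefined {
  return machineTopicToCamera[machineTopic];
}
```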
### 3. Start Auto-Recording Manager
1. **Navigate to Vision System**: Go to the Vision System page
2. **Find Auto-Recording Section**: Look for the "Auto-Recording" panel (admin only)
3. **Start Monitoring**: Click the "Start" button to begin monitoring MQTT events
4. **Monitor Status**: The panel will show the current state of all cameras and their auto-recording status
## 🧪 Testing the Functionality
### Test Scenario 1: Manual MQTT Message Simulation
If you have access to the MQTT broker, you can test by sending messages:
```bash
# Turn on the vibratory conveyor (should start recording on camera2)
mosquitto_pub -h 192.168.1.110 -t "vision/vibratory_conveyor/state" -m "on"
# Turn off the vibratory conveyor (should stop recording on camera2)
mosquitto_pub -h 192.168.1.110 -t "vision/vibratory_conveyor/state" -m "off"
# Turn on the blower separator (should start recording on camera1)
mosquitto_pub -h 192.168.1.110 -t "vision/blower_separator/state" -m "on"
# Turn off the blower separator (should stop recording on camera1)
mosquitto_pub -h 192.168.1.110 -t "vision/blower_separator/state" -m "off"
```
### Test Scenario 2: Physical Machine Operation
1. **Enable Auto-Recording**: Ensure auto-recording is enabled for the desired cameras
2. **Start Auto-Recording Manager**: Make sure the auto-recording manager is running
3. **Operate Machine**: Turn on the physical machine (conveyor or blower)
4. **Verify Recording**: Check that the camera starts recording automatically
5. **Stop Machine**: Turn off the machine
6. **Verify Stop**: Check that recording stops automatically
## 📊 Monitoring and Verification
### Auto-Recording Status Panel
The Vision System page includes an "Auto-Recording" status panel that shows:
- **Manager Status**: Whether the auto-recording manager is active
- **Camera States**: For each camera:
  - Machine state (ON/OFF)
  - Recording status (YES/NO)
  - Auto-record enabled status
  - Last state change timestamp
### MQTT Events Panel
Monitor the MQTT Events section to see:
- Recent machine state changes
- MQTT message timestamps
- Message payloads
### Recording Files
Check the storage section for automatically created recording files:
- Files will be named with pattern: `auto_{machine_name}_{timestamp}.avi`
- Example: `auto_vibratory_conveyor_2025-07-29T10-30-45-123Z.avi`
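The filename pattern can be reproduced with a small helper. This is a sketch of how the documented pattern could be generated, assuming the timestamp is an ISO 8601 string with `:` and `.` replaced by `-`; the helper name is illustrative, not the system's actual code:

```typescript
// Sketch: build the documented auto-recording filename pattern
// auto_{machine_name}_{timestamp}.avi (helper name is hypothetical).
function autoRecordingFilename(machineName: string, date: Date): string {
  // e.g. "2025-07-29T10:30:45.123Z" -> "2025-07-29T10-30-45-123Z"
  const timestamp = date.toISOString().replace(/[:.]/g, "-");
  return `auto_${machineName}_${timestamp}.avi`;
}
```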
## 🔍 Troubleshooting
### Auto-Recording Not Starting
1. **Check Configuration**: Verify auto-recording is enabled in camera config
2. **Check Manager Status**: Ensure auto-recording manager is running
3. **Check MQTT Connection**: Verify MQTT client is connected
4. **Check Machine Topic**: Ensure camera's machine_topic matches MQTT topic
5. **Check Permissions**: Ensure you have admin access
### Recording Not Stopping
1. **Check MQTT Messages**: Verify "off" messages are being received
2. **Check Manager Logs**: Look for error messages in browser console
3. **Manual Stop**: Use manual stop recording if needed
### Performance Issues
1. **Polling Interval**: The manager polls MQTT events every 2 seconds by default
2. **Event Processing**: Only new events since last poll are processed
3. **Error Handling**: Failed operations are logged but don't stop the manager
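The "only new events since last poll" behavior can be sketched as follows. The event shape and function names are assumptions for illustration; only the 2-second interval and new-events-only rule come from this guide:

```typescript
// Sketch of incremental event polling: each call returns only events newer
// than the last one seen (timestamps compared as ISO strings).
interface MqttEvent {
  timestamp: string;
  topic: string;
  payload: string;
}

function makePoller(fetchEvents: () => Promise<MqttEvent[]>) {
  let lastSeen = "";
  return async function pollOnce(): Promise<MqttEvent[]> {
    const events = await fetchEvents();
    const fresh = events.filter((e) => e.timestamp > lastSeen);
    if (fresh.length > 0) lastSeen = fresh[fresh.length - 1].timestamp;
    return fresh;
  };
}
// In the manager this would run on a ~2000 ms interval, with failed polls
// logged and swallowed so an error doesn't stop the loop.
```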
## 🔧 Configuration Options
### Camera Configuration Fields
```json
{
  "auto_record_on_machine_start": true,   // Enable/disable auto-recording
  "machine_topic": "vibratory_conveyor",  // MQTT topic to monitor
  // ... other camera settings
}
```
### Auto-Recording Manager Settings
- **Polling Interval**: 2000ms (configurable in code)
- **Event Batch Size**: 50 events per poll
- **Filename Pattern**: `auto_{machine_name}_{timestamp}.avi`
## 📝 API Endpoints
### Camera Configuration
- `GET /cameras/{camera_name}/config` - Get camera configuration
- `PUT /cameras/{camera_name}/config` - Update camera configuration
### Recording Control
- `POST /cameras/{camera_name}/start-recording` - Start recording
- `POST /cameras/{camera_name}/stop-recording` - Stop recording
### MQTT Monitoring
- `GET /mqtt/events?limit=50` - Get recent MQTT events
- `GET /machines` - Get machine states
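As a minimal sketch, the endpoints above can be addressed with small URL builders; the base URL is an assumption for your deployment, while the paths come from this guide:

```typescript
// Sketch: build request URLs for the endpoints documented above.
// Pair with fetch(), e.g. fetch(recordingUrl(base, "camera1", "start-recording"), { method: "POST" }).
function recordingUrl(
  base: string,
  cameraName: string,
  action: "start-recording" | "stop-recording",
): string {
  return `${base}/cameras/${cameraName}/${action}`;
}

function mqttEventsUrl(base: string, limit = 50): string {
  return `${base}/mqtt/events?limit=${limit}`;
}
```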
## 🚨 Important Notes
1. **Admin Access Required**: Auto-recording configuration requires admin privileges
2. **Backend Integration**: This frontend implementation requires corresponding backend support
3. **MQTT Dependency**: Functionality depends on stable MQTT connection
4. **Storage Space**: Monitor storage usage as auto-recording can generate many files
5. **Network Reliability**: Ensure stable network connection for MQTT messages
## 🔄 Future Enhancements
Potential improvements for the auto-recording system:
1. **Recording Schedules**: Time-based recording rules
2. **Storage Management**: Automatic cleanup of old recordings
3. **Alert System**: Notifications for recording failures
4. **Advanced Triggers**: Multiple machine dependencies
5. **Recording Profiles**: Different settings per machine state


@@ -0,0 +1,300 @@
# 🏗️ Modular Architecture Guide
This guide demonstrates the modular architecture patterns implemented in the video streaming feature and how to apply them to other parts of the project.
## 🎯 Goals
- **Separation of Concerns**: Each module has a single responsibility
- **Reusability**: Components can be used across different parts of the application
- **Maintainability**: Easy to understand, modify, and test individual pieces
- **Scalability**: Easy to add new features without affecting existing code
## 📁 Feature-Based Structure
```
src/features/video-streaming/
├── components/              # UI Components
│   ├── VideoPlayer.tsx
│   ├── VideoCard.tsx
│   ├── VideoList.tsx
│   ├── VideoModal.tsx
│   ├── VideoThumbnail.tsx
│   └── index.ts
├── hooks/                   # Custom React Hooks
│   ├── useVideoList.ts
│   ├── useVideoPlayer.ts
│   ├── useVideoInfo.ts
│   └── index.ts
├── services/                # API & Business Logic
│   └── videoApi.ts
├── types/                   # TypeScript Definitions
│   └── index.ts
├── utils/                   # Pure Utility Functions
│   └── videoUtils.ts
├── VideoStreamingPage.tsx   # Main Feature Page
└── index.ts                 # Feature Export
```
## 🧩 Layer Responsibilities
### 1. **Components Layer** (`/components`)
- **Purpose**: Pure UI components that handle rendering and user interactions
- **Rules**:
- No direct API calls
- Receive data via props
- Emit events via callbacks
- Minimal business logic
**Example:**
```tsx
// ✅ Good: Pure component with clear props
export const VideoCard: React.FC<VideoCardProps> = ({
  video,
  onClick,
  showMetadata = true,
}) => {
  return (
    <div onClick={() => onClick?.(video)}>
      {/* UI rendering */}
    </div>
  );
};

// ❌ Bad: Component with API calls
export const VideoCard = () => {
  const [video, setVideo] = useState(null);
  useEffect(() => {
    fetch('/api/videos/123').then(/* ... */); // Don't do this!
  }, []);
};
```
### 2. **Hooks Layer** (`/hooks`)
- **Purpose**: Manage state, side effects, and provide data to components
- **Rules**:
- Handle API calls and data fetching
- Manage component state
- Provide clean interfaces to components
**Example:**
```tsx
// ✅ Good: Hook handles complexity, provides simple interface
export function useVideoList(options = {}) {
  const [videos, setVideos] = useState([]);
  const [loading, setLoading] = useState(false);

  const fetchVideos = useCallback(async () => {
    setLoading(true);
    try {
      const data = await videoApiService.getVideos();
      setVideos(data.videos);
    } finally {
      setLoading(false);
    }
  }, []);

  return { videos, loading, refetch: fetchVideos };
}
```
### 3. **Services Layer** (`/services`)
- **Purpose**: Handle external dependencies (APIs, storage, etc.)
- **Rules**:
- Pure functions or classes
- No React dependencies
- Handle errors gracefully
- Provide consistent interfaces
**Example:**
```tsx
// ✅ Good: Service handles API complexity
export class VideoApiService {
  async getVideos(params = {}) {
    try {
      const response = await fetch(this.buildUrl('/videos', params));
      return await this.handleResponse(response);
    } catch (error) {
      throw new VideoApiError('FETCH_ERROR', error.message);
    }
  }
}
```
### 4. **Types Layer** (`/types`)
- **Purpose**: Centralized TypeScript definitions
- **Rules**:
- Define all interfaces and types
- Export from index.ts
- Keep types close to their usage
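**Example:**

The interface shapes below are assumptions inferred from the fields used elsewhere in this guide (`id`, `filename`, the `VideoCard` props), shown only to illustrate what the types layer might export:

```typescript
// ✅ Good: centralized, exported type definitions (illustrative shapes)
export interface Video {
  id: string;
  filename: string;
  cameraName: string;
  durationSeconds?: number;
}

export interface VideoCardProps {
  video: Video;
  onClick?: (video: Video) => void;
  showMetadata?: boolean;
}
```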
### 5. **Utils Layer** (`/utils`)
- **Purpose**: Pure utility functions
- **Rules**:
- No side effects
- Easily testable
- Single responsibility
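**Example:**

A utility of this kind might look like the sketch below: a pure function with no side effects and a single responsibility. The function name is illustrative, not necessarily what `videoUtils.ts` contains:

```typescript
// ✅ Good: pure, side-effect-free utility that's trivial to test
export function formatDuration(totalSeconds: number): string {
  const minutes = Math.floor(totalSeconds / 60);
  const seconds = Math.floor(totalSeconds % 60);
  return `${minutes}:${seconds.toString().padStart(2, "0")}`;
}
```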
## 🔄 Component Composition Patterns
### Small, Focused Components
Instead of large monolithic components, create small, focused ones:
```tsx
// ✅ Good: Small, focused components
<VideoList>
  {videos.map(video => (
    <VideoCard key={video.id} video={video} onClick={onVideoSelect} />
  ))}
</VideoList>

// ❌ Bad: Monolithic component
<VideoSystemPage>
  {/* 500+ lines of mixed concerns */}
</VideoSystemPage>
```
### Composition over Inheritance
```tsx
// ✅ Good: Compose features
export const VideoStreamingPage = () => {
  const { videos, loading } = useVideoList();
  const [selectedVideo, setSelectedVideo] = useState(null);

  return (
    <div>
      <VideoList videos={videos} onVideoSelect={setSelectedVideo} />
      <VideoModal video={selectedVideo} />
    </div>
  );
};
```
## 🎨 Applying to Existing Components
### Example: Breaking Down VisionSystem Component
**Current Structure (Monolithic):**
```tsx
// ❌ Current: One large component
export const VisionSystem = () => {
  // 900+ lines of mixed concerns
  return (
    <div>
      {/* System status */}
      {/* Camera cards */}
      {/* Storage info */}
      {/* MQTT status */}
    </div>
  );
};
```
**Proposed Modular Structure:**
```
src/features/vision-system/
├── components/
│   ├── SystemStatusCard.tsx
│   ├── CameraCard.tsx
│   ├── CameraGrid.tsx
│   ├── StorageOverview.tsx
│   ├── MqttStatus.tsx
│   └── index.ts
├── hooks/
│   ├── useSystemStatus.ts
│   ├── useCameraList.ts
│   └── index.ts
├── services/
│   └── visionApi.ts
└── VisionSystemPage.tsx
```
**Refactored Usage:**
```tsx
// ✅ Better: Composed from smaller parts
export const VisionSystemPage = () => {
  return (
    <div>
      <SystemStatusCard />
      <CameraGrid />
      <StorageOverview />
      <MqttStatus />
    </div>
  );
};

// Now you can reuse components elsewhere:
export const DashboardHome = () => {
  return (
    <div>
      <SystemStatusCard /> {/* Reused! */}
      <QuickStats />
    </div>
  );
};
```
## 📋 Migration Strategy
### Phase 1: Extract Utilities
1. Move pure functions to `/utils`
2. Move types to `/types`
3. Create service classes for API calls
### Phase 2: Extract Hooks
1. Create custom hooks for data fetching
2. Move state management to hooks
3. Simplify component logic
### Phase 3: Break Down Components
1. Identify distinct UI sections
2. Extract to separate components
3. Use composition in parent components
### Phase 4: Feature Organization
1. Group related components, hooks, and services
2. Create feature-level exports
3. Update imports across the application
## 🧪 Testing Benefits
Modular architecture makes testing much easier:
```tsx
// ✅ Easy to test individual pieces
describe('VideoCard', () => {
  it('displays video information', () => {
    render(<VideoCard video={mockVideo} />);
    expect(screen.getByText(mockVideo.filename)).toBeInTheDocument();
  });
});

describe('useVideoList', () => {
  it('fetches videos on mount', async () => {
    const { result } = renderHook(() => useVideoList());
    await waitFor(() => {
      expect(result.current.videos).toHaveLength(3);
    });
  });
});
```
## 🚀 Benefits Achieved
1. **Reusability**: `VideoCard` can be used in lists, grids, or modals
2. **Maintainability**: Each file has a single, clear purpose
3. **Testability**: Small, focused units are easy to test
4. **Developer Experience**: Clear structure makes onboarding easier
5. **Performance**: Smaller components enable better optimization
## 📝 Best Practices
1. **Start Small**: Begin with one feature and apply patterns gradually
2. **Single Responsibility**: Each file should have one clear purpose
3. **Clear Interfaces**: Use TypeScript to define clear contracts
4. **Consistent Naming**: Follow naming conventions across features
5. **Documentation**: Document complex logic and interfaces
This modular approach transforms large, hard-to-maintain components into small, reusable, and testable pieces that can be composed together to create powerful features.


@@ -0,0 +1,145 @@
# 🎥 MP4 Frontend Implementation Status
## ✅ Implementation Complete & API-Aligned
The frontend has been successfully updated to match the actual camera configuration API structure with full MP4 format support and proper field categorization.
## 🔧 Changes Made
### 1. **TypeScript Types Updated** (`src/lib/visionApi.ts`)
- **Complete API alignment** with actual camera configuration structure
- **Required video format fields**: `video_format`, `video_codec`, `video_quality`
- **Added missing fields**: `wb_red_gain`, `wb_green_gain`, `wb_blue_gain`
- **Proper field categorization**: Read-only vs real-time configurable vs restart-required
### 2. **Video File Utilities Created** (`src/utils/videoFileUtils.ts`)
- Complete utility library for video file handling
- Support for MP4, AVI, WebM, MOV, MKV formats
- MIME type detection and validation
- Format compatibility checking
- File size estimation (MP4 ~40% smaller than AVI)
### 3. **Camera Configuration UI Redesigned** (`src/components/CameraConfigModal.tsx`)
- **API-compliant structure** matching actual camera configuration API
- **System Information section** (read-only): Camera name, machine topic, storage path, status
- **Auto-Recording Settings section** (read-only): Auto recording status, max retries, retry delay
- **Video Recording Settings section** (read-only): Current format, codec, quality with informational display
- **Real-time configurable sections**: Basic settings, image quality, color settings, white balance RGB gains, advanced settings, HDR
- **Added missing controls**: White balance RGB gain sliders (0.00-3.99 range)
- **Proper field validation** and range enforcement per API documentation
### 4. **Video Player Components Improved**
- **VideoPlayer**: Dynamic MIME type detection, iOS compatibility (`playsInline`)
- **VideoModal**: Format indicators with web compatibility badges
- **VideoUtils**: Enhanced format detection and utilities
## 🚨 Current API Compatibility Issue
### Problem
The backend API is returning a validation error:
```
3 validation errors for CameraConfigResponse
video_format: Field required
video_codec: Field required
video_quality: Field required
```
### Root Cause
The backend expects the new video format fields to be required, but existing camera configurations don't have these fields yet.
### Frontend Solution ✅
The frontend now handles this gracefully:
1. **Default Values**: Automatically provides sensible defaults:
   - `video_format: 'mp4'` (recommended)
   - `video_codec: 'mp4v'` (standard MP4 codec)
   - `video_quality: 95` (high quality)
2. **Error Handling**: Shows helpful error message when API fails
3. **Fallback Configuration**: Creates a working default configuration
4. **User Guidance**: Explains the situation and next steps
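The default-filling step can be sketched as below. The config shape is simplified for illustration and the helper name is hypothetical; only the three default values come from this document:

```typescript
// Sketch: fill in missing video format fields with the documented defaults
// when the API response omits them.
interface VideoFormatFields {
  video_format?: string;
  video_codec?: string;
  video_quality?: number;
}

function withVideoFormatDefaults(config: VideoFormatFields): Required<VideoFormatFields> {
  return {
    video_format: config.video_format ?? "mp4",  // recommended
    video_codec: config.video_codec ?? "mp4v",   // standard MP4 codec
    video_quality: config.video_quality ?? 95,   // high quality
  };
}
```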
### Backend Fix Needed 🔧
The backend should be updated to:
1. Make video format fields optional in the API response
2. Provide default values when fields are missing
3. Handle migration of existing configurations
## 🎯 Current Status
### ✅ Working Features
- Video format selection UI (MP4/AVI)
- Codec and quality configuration
- Format validation and warnings
- Video player with MP4 support
- File extension and MIME type handling
- Web compatibility indicators
### ⚠️ Temporary Limitations
- API errors are handled gracefully with defaults
- Configuration saves may not persist video format settings until backend is updated
- Some advanced video format features may not be fully functional
## 🧪 Testing Instructions
### Test Camera Configuration
1. Open Vision System page
2. Click "Configure" on any camera
3. Scroll to "Video Recording Settings" section
4. Verify format/codec/quality controls work
5. Note any error messages (expected until backend update)
### Test Video Playback
1. Verify existing AVI videos still play
2. Test any new MP4 videos (if available)
3. Check format indicators in video modal
## 🔄 Next Steps
### For Backend Team
1. Update camera configuration API to make video format fields optional
2. Provide default values for missing fields
3. Implement video format persistence in database
4. Test API with updated frontend
### For Frontend Team
1. Test thoroughly once backend is updated
2. Remove temporary error handling once API is fixed
3. Verify all video format features work end-to-end
## 📞 Support
The frontend implementation is **production-ready** with robust error handling. Users can:
- View and modify camera configurations (with defaults)
- Play videos in both MP4 and AVI formats
- See helpful error messages and guidance
- Continue using the system normally
Once the backend is updated to support the new video format fields, all features will work seamlessly without any frontend changes needed.
## 🎉 Benefits Ready to Unlock
Once backend is updated:
- **40% smaller file sizes** with MP4 format
- **Better web compatibility** and mobile support
- **Improved streaming performance**
- **Professional video quality** maintained
- **Seamless format migration** for existing recordings


@@ -0,0 +1,351 @@
# 🎬 Video Streaming Integration Guide
This guide shows how to integrate the modular video streaming feature into your existing dashboard.
## 🚀 Quick Start
### 1. Add to Dashboard Navigation
Update your sidebar or navigation to include the video streaming page:
```tsx
// In src/components/Sidebar.tsx or similar
import { VideoStreamingPage } from '../features/video-streaming';

const navigationItems = [
  // ... existing items
  {
    name: 'Video Library',
    href: '/videos',
    icon: VideoCameraIcon,
    component: VideoStreamingPage,
  },
];
```
### 2. Add Route (if using React Router)
```tsx
// In your main App.tsx or router configuration
import { VideoStreamingPage } from './features/video-streaming';

function App() {
  return (
    <Routes>
      {/* ... existing routes */}
      <Route path="/videos" element={<VideoStreamingPage />} />
    </Routes>
  );
}
```
## 🧩 Using Individual Components
The beauty of the modular architecture is that you can use individual components anywhere:
### Dashboard Home - Recent Videos
```tsx
// In src/components/DashboardHome.tsx
import { VideoList } from '../features/video-streaming';

export const DashboardHome = () => {
  return (
    <div className="grid grid-cols-1 lg:grid-cols-2 gap-6">
      {/* Existing dashboard content */}
      <div className="bg-white rounded-lg shadow p-6">
        <h2 className="text-lg font-semibold mb-4">Recent Videos</h2>
        <VideoList
          limit={6}
          filters={{ /* recent videos only */ }}
          className="grid grid-cols-2 gap-4"
        />
      </div>
    </div>
  );
};
```
### Vision System - Camera Videos
```tsx
// In src/components/VisionSystem.tsx
import { VideoList, VideoCard } from '../features/video-streaming';

export const VisionSystem = () => {
  const [selectedCamera, setSelectedCamera] = useState(null);

  return (
    <div>
      {/* Existing vision system content */}

      {/* Add video section for selected camera */}
      {selectedCamera && (
        <div className="mt-8">
          <h3 className="text-lg font-semibold mb-4">
            Recent Videos - {selectedCamera}
          </h3>
          <VideoList
            filters={{ cameraName: selectedCamera }}
            limit={8}
          />
        </div>
      )}
    </div>
  );
};
```
### Experiment Data Entry - Video Evidence
```tsx
// In src/components/DataEntry.tsx
import { VideoThumbnail, VideoModal } from '../features/video-streaming';

export const DataEntry = () => {
  const [selectedVideo, setSelectedVideo] = useState(null);
  const [showVideoModal, setShowVideoModal] = useState(false);

  return (
    <form>
      {/* Existing form fields */}

      {/* Add video evidence section */}
      <div className="mb-6">
        <label className="block text-sm font-medium text-gray-700 mb-2">
          Video Evidence
        </label>
        <div className="grid grid-cols-4 gap-4">
          {experimentVideos.map(video => (
            <VideoThumbnail
              key={video.file_id}
              fileId={video.file_id}
              onClick={() => {
                setSelectedVideo(video);
                setShowVideoModal(true);
              }}
            />
          ))}
        </div>
      </div>

      <VideoModal
        video={selectedVideo}
        isOpen={showVideoModal}
        onClose={() => setShowVideoModal(false)}
      />
    </form>
  );
};
```
## 🎨 Customizing Components
### Custom Video Card for Experiments
```tsx
// Create a specialized version for your use case
import { VideoCard } from '../features/video-streaming';

export const ExperimentVideoCard = ({ video, experimentId, onAttach }) => {
  return (
    <div className="relative">
      <VideoCard video={video} showMetadata={false} />

      {/* Add experiment-specific actions */}
      <div className="absolute top-2 right-2">
        <button
          onClick={() => onAttach(video.file_id, experimentId)}
          className="bg-blue-500 text-white px-2 py-1 rounded text-xs"
        >
          Attach to Experiment
        </button>
      </div>
    </div>
  );
};
```
### Custom Video Player with Annotations
```tsx
// Extend the base video player
import { VideoPlayer } from '../features/video-streaming';

export const AnnotatedVideoPlayer = ({ fileId, annotations }) => {
  return (
    <div className="relative">
      <VideoPlayer fileId={fileId} />

      {/* Add annotation overlay */}
      <div className="absolute inset-0 pointer-events-none">
        {annotations.map(annotation => (
          <div
            key={annotation.id}
            className="absolute bg-yellow-400 bg-opacity-75 p-2 rounded"
            style={{
              left: `${annotation.x}%`,
              top: `${annotation.y}%`,
            }}
          >
            {annotation.text}
          </div>
        ))}
      </div>
    </div>
  );
};
```
## 🔧 Configuration
### API Base URL
Update the API base URL if needed:
```tsx
// In your app configuration
import { VideoApiService } from './features/video-streaming';

// Create a configured instance
export const videoApi = new VideoApiService('http://your-api-server:8000');

// Or configure at build time via an environment variable, e.g. in .env:
// REACT_APP_VIDEO_API_URL=http://your-api-server:8000
```
### Custom Styling
The components use Tailwind CSS classes. You can customize them:
```tsx
// Override default styles
<VideoList
  className="grid grid-cols-1 md:grid-cols-3 gap-8" // Custom grid
/>

<VideoCard
  className="border-2 border-blue-200 hover:border-blue-400" // Custom border
/>
```
## 🎯 Integration Examples
### 1. Camera Management Integration
```tsx
// In your camera management page
import { VideoList, useVideoList } from '../features/video-streaming';

export const CameraManagement = () => {
  const [selectedCamera, setSelectedCamera] = useState(null);
  const { videos } = useVideoList({
    initialParams: { camera_name: selectedCamera?.name }
  });

  return (
    <div className="grid grid-cols-1 lg:grid-cols-2 gap-6">
      {/* Camera controls */}
      <CameraControls onCameraSelect={setSelectedCamera} />

      {/* Videos from selected camera */}
      <div>
        <h3>Videos from {selectedCamera?.name}</h3>
        <VideoList
          filters={{ cameraName: selectedCamera?.name }}
          limit={12}
        />
      </div>
    </div>
  );
};
```
### 2. Experiment Timeline Integration
```tsx
// Show videos in experiment timeline
import { VideoThumbnail } from '../features/video-streaming';

export const ExperimentTimeline = ({ experiment }) => {
  return (
    <div className="timeline">
      {experiment.events.map(event => (
        <div key={event.id} className="timeline-item">
          <div className="timeline-content">
            <h4>{event.title}</h4>
            <p>{event.description}</p>

            {/* Show related videos */}
            {event.videos?.length > 0 && (
              <div className="flex space-x-2 mt-2">
                {event.videos.map(videoId => (
                  <VideoThumbnail
                    key={videoId}
                    fileId={videoId}
                    width={120}
                    height={80}
                  />
                ))}
              </div>
            )}
          </div>
        </div>
      ))}
    </div>
  );
};
```
## 📱 Responsive Design
The components are designed to be responsive:
```tsx
// Automatic responsive grid
<VideoList className="grid grid-cols-1 md:grid-cols-2 lg:grid-cols-3 xl:grid-cols-4 gap-4" />

// Mobile-friendly video player
<VideoPlayer
  fileId={video.file_id}
  className="w-full h-auto max-h-96"
/>
```
## 🔍 Search Integration
Add search functionality:
```tsx
import { useVideoList } from '../features/video-streaming';

export const VideoSearch = () => {
  const [searchTerm, setSearchTerm] = useState('');
  const { videos, loading } = useVideoList({
    initialParams: { search: searchTerm }
  });

  return (
    <div>
      <input
        type="text"
        value={searchTerm}
        onChange={(e) => setSearchTerm(e.target.value)}
        placeholder="Search videos..."
        className="w-full px-4 py-2 border rounded-lg"
      />
      <VideoList videos={videos} loading={loading} />
    </div>
  );
};
```
## 🚀 Next Steps
1. **Start Small**: Begin by adding the video library page
2. **Integrate Gradually**: Add individual components to existing pages
3. **Customize**: Create specialized versions for your specific needs
4. **Extend**: Add new features like annotations, bookmarks, or sharing
The modular architecture makes it easy to start simple and grow the functionality over time!


@@ -0,0 +1,175 @@
# Video Streaming Integration - Complete Implementation
This document provides a comprehensive overview of the completed video streaming integration with the USDA Vision Camera System.
## 🎯 Overview
The video streaming functionality has been successfully integrated into the Pecan Experiments React application, providing a complete video browsing and playback interface that connects to the USDA Vision Camera System API.
## ✅ Completed Features
### 1. Core Video Streaming Components
- **VideoList**: Displays filterable list of videos with pagination
- **VideoPlayer**: HTML5 video player with custom controls and range request support
- **VideoCard**: Individual video cards with thumbnails and metadata
- **VideoThumbnail**: Thumbnail component with caching and error handling
- **VideoModal**: Modal for video playback
- **Pagination**: Pagination controls for large video collections
### 2. API Integration
- **VideoApiService**: Complete API client for USDA Vision Camera System
- **Flexible Configuration**: Environment-based API URL configuration
- **Error Handling**: Comprehensive error handling with user-friendly messages
- **Performance Monitoring**: Built-in performance tracking and metrics
### 3. Performance Optimizations
- **Thumbnail Caching**: Intelligent caching system with LRU eviction
- **Performance Monitoring**: Real-time performance metrics and reporting
- **Efficient Loading**: Optimized API calls and data fetching
- **Memory Management**: Automatic cleanup and memory optimization
### 4. Error Handling & User Experience
- **Error Boundaries**: React error boundaries for graceful error handling
- **API Status Indicator**: Real-time API connectivity status
- **Loading States**: Comprehensive loading indicators
- **User Feedback**: Clear error messages and recovery options
### 5. Development Tools
- **Performance Dashboard**: Development-only performance monitoring UI
- **Debug Information**: Detailed error information in development mode
- **Cache Statistics**: Real-time cache performance metrics
## 🔧 Configuration
### Environment Variables
Create a `.env` file with the following configuration:
```bash
# USDA Vision Camera System API Configuration
# Default: http://vision:8000 (Docker container)
# For local development without Docker: http://localhost:8000
# For remote systems: http://192.168.1.100:8000
VITE_VISION_API_URL=http://vision:8000
```
### API Endpoints Used
- `GET /videos/` - List videos with filtering and pagination
- `GET /videos/{file_id}` - Get detailed video information
- `GET /videos/{file_id}/stream` - Stream video content with range requests
- `GET /videos/{file_id}/thumbnail` - Generate video thumbnails
- `GET /videos/{file_id}/info` - Get streaming technical details
- `POST /videos/{file_id}/validate` - Validate video accessibility
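As a minimal sketch, request URLs for these endpoints can be composed from a base URL and file ID; the helper names are illustrative, and only the paths come from this document. A seek would then pair the stream URL with an HTTP `Range` header:

```typescript
// Sketch: build URLs for the streaming endpoints documented above.
function videoStreamUrl(base: string, fileId: string): string {
  return `${base}/videos/${encodeURIComponent(fileId)}/stream`;
}

function videoThumbnailUrl(base: string, fileId: string): string {
  return `${base}/videos/${encodeURIComponent(fileId)}/thumbnail`;
}

// Seeking uses range requests, e.g.:
// fetch(videoStreamUrl(base, id), { headers: { Range: "bytes=1000000-" } })
```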
## 🚀 Usage
### Navigation
The video streaming functionality is accessible through:
- **Main Navigation**: "Video Library" menu item
- **Vision System**: Integrated with existing vision system dashboard
### Features Available
1. **Browse Videos**: Filter by camera, date range, and sort options
2. **View Thumbnails**: Automatic thumbnail generation with caching
3. **Play Videos**: Full-featured video player with seeking capabilities
4. **Performance Monitoring**: Real-time performance metrics (development mode)
### User Interface
- **Responsive Design**: Works on desktop and mobile devices
- **Dark/Light Theme**: Follows application theme preferences
- **Accessibility**: Keyboard navigation and screen reader support
## 🔍 Technical Implementation
### Architecture
```
src/features/video-streaming/
├── components/                  # React components
│   ├── VideoList.tsx            # Video listing with filters
│   ├── VideoPlayer.tsx          # Video playback component
│   ├── VideoCard.tsx            # Individual video cards
│   ├── VideoThumbnail.tsx       # Thumbnail component
│   ├── VideoModal.tsx           # Video playback modal
│   ├── ApiStatusIndicator.tsx   # API status display
│   ├── VideoErrorBoundary.tsx   # Error handling
│   └── PerformanceDashboard.tsx # Dev tools
├── hooks/                       # Custom React hooks
│   ├── useVideoList.ts          # Video list management
│   ├── useVideoPlayer.ts        # Video player state
│   └── useVideoInfo.ts          # Video information
├── services/                    # API services
│   └── videoApi.ts              # USDA Vision API client
├── utils/                       # Utilities
│   ├── videoUtils.ts            # Video helper functions
│   ├── thumbnailCache.ts        # Thumbnail caching
│   └── performanceMonitor.ts    # Performance tracking
├── types/                       # TypeScript types
└── VideoStreamingPage.tsx       # Main page component
```
### Key Technologies
- **React 18**: Modern React with hooks and concurrent features
- **TypeScript**: Full type safety and IntelliSense
- **Tailwind CSS**: Utility-first styling
- **HTML5 Video**: Native video playback with custom controls
- **Fetch API**: Modern HTTP client for API calls
## 📊 Performance Features
### Thumbnail Caching
- **LRU Cache**: Least Recently Used eviction policy
- **Memory Management**: Configurable memory limits
- **Automatic Cleanup**: Expired entry removal
- **Statistics**: Real-time cache performance metrics
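An LRU eviction policy of the kind described can be sketched in a few lines using a `Map`, which preserves insertion order; the real `thumbnailCache.ts` likely differs in detail (memory limits, expiry), so treat this as an illustration only:

```typescript
// Minimal LRU cache sketch: re-inserting on access keeps the Map's
// insertion order equal to recency order, so the first key is always
// the least recently used.
class LruCache<V> {
  private map = new Map<string, V>();
  constructor(private maxEntries: number) {}

  get(key: string): V | undefined {
    const value = this.map.get(key);
    if (value !== undefined) {
      this.map.delete(key);     // move to most-recent position
      this.map.set(key, value);
    }
    return value;
  }

  set(key: string, value: V): void {
    if (this.map.has(key)) this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.maxEntries) {
      const oldest = this.map.keys().next().value as string;
      this.map.delete(oldest);  // evict least recently used
    }
  }
}
```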
### Performance Monitoring
- **Operation Tracking**: Automatic timing of API calls
- **Success Rates**: Track success/failure rates
- **Memory Usage**: Monitor cache memory consumption
- **Development Dashboard**: Visual performance metrics
### Optimizations
- **Range Requests**: Efficient video seeking with HTTP range requests
- **Lazy Loading**: Thumbnails loaded on demand
- **Error Recovery**: Automatic retry mechanisms
- **Connection Pooling**: Efficient HTTP connection reuse
## 🛠️ Development
### Testing the Integration
1. **Start the Application**: `npm run dev`
2. **Navigate to Video Library**: Use the sidebar navigation
3. **Check API Status**: Look for the connection indicator
4. **Browse Videos**: Filter and sort available videos
5. **Play Videos**: Click on video cards to open the player
### Development Tools
- **Performance Dashboard**: Click the performance icon (bottom-right)
- **Browser DevTools**: Check console for performance logs
- **Network Tab**: Monitor API calls and response times
### Troubleshooting
1. **API Connection Issues**: Check VITE_VISION_API_URL environment variable
2. **Video Not Playing**: Verify video file accessibility and format
3. **Thumbnail Errors**: Check thumbnail generation API endpoint
4. **Performance Issues**: Use the performance dashboard to identify bottlenecks
## 🔮 Future Enhancements
### Potential Improvements
- **Video Upload**: Add video upload functionality
- **Live Streaming**: Integrate live camera feeds
- **Video Analytics**: Add video analysis and metadata extraction
- **Offline Support**: Cache videos for offline viewing
- **Advanced Filters**: More sophisticated filtering options
### Integration Opportunities
- **Experiment Data**: Link videos to experiment data
- **Machine Learning**: Integrate with video analysis models
- **Export Features**: Video export and sharing capabilities
- **Collaboration**: Multi-user video annotation and comments
## 📝 Conclusion
The video streaming integration provides a robust, performant, and user-friendly interface for browsing and viewing videos from the USDA Vision Camera System. The implementation includes comprehensive error handling, performance optimizations, and development tools to ensure a smooth user experience and maintainable codebase.
The modular architecture allows for easy extension and customization, while the performance monitoring and caching systems ensure optimal performance even with large video collections.