Data Management
Export and import bookmarks in JSON or CSV formats
Enterprise Data Management
A professional-grade data management system featuring comprehensive export/import capabilities, advanced validation, batch processing, and multi-format support for enterprise workflows.
📤 Export System
JSON Export Format
The JSON format provides complete data preservation with metadata:
interface JSONExportFormat {
metadata?: {
exportedAt: string; // ISO 8601 timestamp
totalRecords: number; // Count of exported bookmarks
exportOptions: ExportOptions; // Configuration used
version: string; // Schema version
};
bookmarks: BookmarkData[]; // Array of bookmark objects
}
✅ Advantages
- Complete data preservation - All fields retained
- Type safety - Maintains TypeScript interfaces
- Metadata inclusion - Export context and versioning
- Easy re-import - Native format compatibility
- Hierarchical structure - Nested data support
Example JSON Export
{
"metadata": {
"exportedAt": "2024-01-15T10:30:45.123Z",
"totalRecords": 25,
"exportOptions": {
"format": "json",
"includeMetadata": true,
"filter": {
"tags": ["favorite", "study"],
"dateRange": {
"start": "2024-01-01",
"end": "2024-01-15"
}
}
},
"version": "1.0.0"
},
"bookmarks": [
{
"id": "bookmark_1695740123456_a1b2c3",
"title": "Inception: Dream Layer Explanation",
"timestamp": 3420.8,
"filepath": "/Movies/Inception.2010.BluRay.1080p.mp4",
"description": "Multi-level dream architecture explanation scene",
"createdAt": "2024-01-10T14:22:33.456Z",
"tags": ["sci-fi", "explanation", "favorite", "plot-critical"]
}
]
}
JSON Best Practices
JSON export is recommended for backups and system migrations due to complete data preservation and easy re-import capabilities.
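As a minimal sketch of what this format implies in code (the function name is illustrative; it assumes the `BookmarkData` and `ExportOptions` types referenced in this document), a JSON backup payload can be assembled like this:

```typescript
// Minimal sketch: assemble a JSONExportFormat payload and serialize it.
// BookmarkData and ExportOptions are the types referenced in this document.
function buildJsonExport(bookmarks: BookmarkData[], options: ExportOptions): string {
  const payload: JSONExportFormat = {
    metadata: {
      exportedAt: new Date().toISOString(), // ISO 8601 timestamp
      totalRecords: bookmarks.length,
      exportOptions: options,
      version: '1.0.0',
    },
    bookmarks,
  };
  return JSON.stringify(payload, null, 2); // pretty-printed for readability
}
```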
CSV Export Format
The CSV format provides tabular data that is compatible with spreadsheet applications:
interface CSVExportOptions extends ExportOptions {
selectedFields: string[]; // Fields to include
delimiter: ',' | ';' | '\t'; // Field separator
includeHeaders: boolean; // Column headers
quoteStyle: 'minimal' | 'all'; // Quote handling
}
✅ Advantages
- Universal compatibility - Works with Excel, Google Sheets
- Compact size - Smaller file sizes
- Human readable - Easy manual inspection
- Customizable fields - Select specific columns
- Delimiter options - Locale-appropriate separators
⚠️ Limitations
- Flattened structure - Complex data simplified
- Tag serialization - Arrays become delimited strings
- No metadata - Export context is lost
- Type information lost - All values become text
- Encoding issues - Special characters may need escaping
Example CSV Export
id,title,timestamp,filepath,description,createdAt,tags
bookmark_1695740123456_a1b2c3,"Inception: Dream Layer Explanation","3420.8 (00:57:00)","/Movies/Inception.2010.BluRay.1080p.mp4","Multi-level dream architecture explanation scene","2024-01-10T14:22:33.456Z","sci-fi;explanation;favorite;plot-critical"
bookmark_1695745678901_b2c3d4,"The Matrix: Red Pill Scene","1847.2 (00:30:47)","/Movies/The.Matrix.1999.1080p.mp4","Neo's choice between reality and illusion","2024-01-11T09:15:22.789Z","sci-fi;pivotal-moment;philosophy;choice"
bookmark_1695750234567_c3d4e5,"Interstellar: Tesseract Sequence","8965.5 (02:29:25)","/Movies/Interstellar.2014.IMAX.mp4","Cooper communicates across time through gravity","2024-01-12T16:43:11.234Z","sci-fi;time;gravity;communication;climax"
CSV Configuration
CSV exports are highly customizable with field selection, delimiter choice, and encoding options to match your target application requirements.
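The plugin's exact serializer is not documented here, but a minimal sketch of the flattening rules described above (semicolon-joined tags, `minimal`-style quoting) might look like this:

```typescript
// Sketch: flatten one bookmark into a CSV row. Quoting follows the 'minimal'
// quoteStyle described above; field order mirrors selectedFields.
function toCsvRow(
  bookmark: Record<string, unknown>,
  fields: string[],
  delimiter = ','
): string {
  const quote = (value: string): string =>
    /["\n]/.test(value) || value.includes(delimiter)
      ? `"${value.replace(/"/g, '""')}"` // escape embedded quotes by doubling them
      : value;

  return fields
    .map((field) => {
      const raw = bookmark[field];
      const text = Array.isArray(raw) ? raw.join(';') : String(raw ?? '');
      return quote(text);
    })
    .join(delimiter);
}
```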
Advanced Export Configuration
Comprehensive export options for enterprise requirements:
interface AdvancedExportOptions {
// Format and structure
format: 'json' | 'csv';
includeMetadata: boolean;
compressOutput?: boolean; // GZIP compression
filePath?: string; // Custom output path
// Filtering and selection
filter?: {
tags?: string[]; // Include only specific tags
dateRange?: { // Date range filtering
start: string;
end: string;
};
mediaType?: string; // Filter by media type
bookmarkIds?: string[]; // Specific bookmark selection
};
// CSV-specific options
csvOptions?: {
selectedFields: string[]; // Column selection
delimiter: ',' | ';' | '\t'; // Field separator
includeHeaders: boolean; // Header row
quoteStyle: 'minimal' | 'all'; // Quoting strategy
nullValue: string; // Null representation
};
// Performance options
batchSize?: number; // Processing batch size
progressCallback?: (progress: number) => void;
}
🎯 Filtering Examples
// Export bookmarks from specific date range
const dateFilteredExport = await exportBookmarks({
format: 'csv',
includeMetadata: false,
filter: {
dateRange: {
start: '2024-01-01T00:00:00Z',
end: '2024-01-31T23:59:59Z'
}
}
});
// Export only movie bookmarks
const mediaFilteredExport = await exportBookmarks({
format: 'json',
includeMetadata: true,
filter: {
mediaType: 'movie' // 'movie', 'tv', 'documentary', etc.
}
});
// Complex combined filtering
const complexFilteredExport = await exportBookmarks({
format: 'csv',
includeMetadata: true,
filter: {
tags: ['educational', 'reference'],
dateRange: {
start: '2023-01-01T00:00:00Z',
end: '2024-12-31T23:59:59Z'
},
mediaType: 'documentary'
},
csvOptions: {
selectedFields: ['title', 'timestamp', 'description', 'tags'],
delimiter: '\t',
includeHeaders: true
}
});
Performance Considerations
Large exports (>1000 bookmarks) use batch processing to maintain UI responsiveness. Progress callbacks provide real-time status updates.
High-Performance Batch Export
Optimized processing for large bookmark collections:
interface BatchExportResult {
success: boolean;
totalProcessed: number;
batchCount: number;
processingTimeMs: number;
filePath?: string;
error?: string;
}
// Large-scale export with progress tracking
const batchExport = await bookmarkManager.exportBatch({
format: 'json',
batchSize: 100, // Process 100 bookmarks per batch
compressionLevel: 6, // GZIP compression level
progressCallback: (progress) => {
console.log(`Export progress: ${progress.toFixed(1)}%`);
updateProgressBar(progress);
},
errorCallback: (error, batchIndex) => {
console.error(`Batch ${batchIndex} failed:`, error);
}
});
🚀 Performance Features
- Chunked processing - Prevents memory overflow
- Progress tracking - Real-time status updates
- Error recovery - Continue on non-critical errors
- Compression - Reduce output file size
- Cancellation - User can abort long operations
📊 Performance Metrics
- Throughput: ~500 bookmarks/second
- Memory usage: <50MB for 10k bookmarks
- Compression ratio: 70-80% size reduction
- Error rate: <0.1% for valid data
- Cancellation time: <100ms response
Batch Size Optimization
Recommended batch sizes: 50-100 for JSON exports, 200-500 for CSV exports. Larger batches improve throughput but increase memory usage.
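A minimal sketch of the chunked-processing pattern (an illustrative helper, not the plugin's internal implementation):

```typescript
// Sketch: process items in fixed-size batches, yielding to the event loop
// between batches and reporting progress as a percentage.
async function processInBatches<T>(
  items: T[],
  batchSize: number,
  handleBatch: (batch: T[]) => Promise<void>,
  onProgress?: (percent: number) => void
): Promise<void> {
  for (let offset = 0; offset < items.length; offset += batchSize) {
    await handleBatch(items.slice(offset, offset + batchSize));
    onProgress?.(Math.min(100, ((offset + batchSize) / items.length) * 100));
    await new Promise((resolve) => setTimeout(resolve, 0)); // keep the UI responsive
  }
}
```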
📥 Import System
JSON Import Processing
Import bookmarks from JSON files with comprehensive validation:
interface ImportOptions {
duplicateHandling: 'skip' | 'replace' | 'merge';
validateData: boolean; // Enable comprehensive validation
preserveIds: boolean; // Keep original IDs or regenerate
batchSize?: number; // Import batch size
progressCallback?: (progress: number) => void;
}
interface ImportResult {
success: boolean;
importedCount: number; // Successfully imported bookmarks
skippedCount: number; // Skipped due to duplicates
errorCount: number; // Failed imports
errors?: string[]; // Detailed error messages
duplicates?: number; // Detected duplicates
processingTimeMs?: number; // Import duration
}
🔍 JSON Validation Pipeline
Schema Validation
Validates JSON structure and required fields:
const validationChecks = {
schemaVersion: validateSchemaVersion(data.metadata?.version),
requiredFields: validateRequiredFields(data.bookmarks),
dataTypes: validateDataTypes(data.bookmarks),
constraints: validateConstraints(data.bookmarks)
};
Data Integrity Checks
Verifies bookmark data consistency (a minimal sketch follows the list):
- Unique IDs: Ensures no duplicate identifiers
- Valid timestamps: Checks timestamp ranges and formats
- File path validation: Verifies file accessibility
- Tag consistency: Validates tag format and limits
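A minimal sketch of two of these checks (duplicate IDs and timestamp ranges), assuming the `BookmarkData` type used throughout this document:

```typescript
// Sketch: report duplicate IDs and out-of-range timestamps. mediaDuration is
// optional because the media length is not always known at import time.
function checkIntegrity(bookmarks: BookmarkData[], mediaDuration?: number): string[] {
  const problems: string[] = [];
  const seen = new Set<string>();

  for (const bookmark of bookmarks) {
    if (seen.has(bookmark.id)) problems.push(`Duplicate id: ${bookmark.id}`);
    seen.add(bookmark.id);

    if (bookmark.timestamp < 0) {
      problems.push(`Negative timestamp on ${bookmark.id}`);
    }
    if (mediaDuration !== undefined && bookmark.timestamp > mediaDuration) {
      problems.push(`Timestamp exceeds media duration on ${bookmark.id}`);
    }
  }
  return problems;
}
```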
Duplicate Detection
Sophisticated duplicate identification:
interface DuplicateDetection {
exactMatch: boolean; // Identical ID or content
similarMatch: boolean; // Similar title/timestamp
conflictResolution: 'skip' | 'replace' | 'merge';
}
✅ Successful Import Example
Import Result:
{
"success": true,
"importedCount": 23,
"skippedCount": 2,
"errorCount": 0,
"duplicates": 2,
"processingTimeMs": 245.7
}
Summary: 23 bookmarks imported successfully
- 2 duplicates skipped (based on duplicate handling setting)
- 0 errors encountered
- Processing completed in 245.7ms
Version Compatibility
The import system supports backward compatibility with older export formats and automatic schema migration for seamless upgrades.
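As an illustration only (the version numbers and compatibility policy here are assumptions, not the plugin's actual rules), a schema version gate might look like this:

```typescript
// Sketch: classify an export file before import based on its schema version.
const CURRENT_VERSION = '1.0.0';

function checkSchemaVersion(data: JSONExportFormat): 'current' | 'needs-migration' | 'unsupported' {
  const version = data.metadata?.version ?? '0.0.0';
  if (version === CURRENT_VERSION) return 'current';

  const [major] = version.split('.').map(Number);
  // Assumed policy: exports with the same major version can be migrated automatically.
  return major === 1 ? 'needs-migration' : 'unsupported';
}
```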
CSV Import Processing
Import from CSV files with intelligent field mapping:
interface CSVImportOptions extends ImportOptions {
fieldMapping?: Record<string, string>; // Map CSV columns to bookmark fields
delimiter?: ',' | ';' | '\t'; // CSV delimiter detection/override
hasHeaders?: boolean; // Header row presence
encoding?: 'utf-8' | 'latin1' | 'ascii'; // File encoding
skipEmptyRows?: boolean; // Handle empty rows
}
🗺️ Automatic Field Mapping
The system intelligently maps CSV columns to bookmark fields:
Recognized Headers
- Title: title, name, bookmark_title
- Timestamp: timestamp, time, position, seconds
- File Path: filepath, file, path, location
- Description: description, notes, comment
- Tags: tags, categories, labels
- Created: created, date, timestamp_created
Format Conversion
- Time formats: HH:MM:SS → seconds (see the conversion sketch below)
- Tag lists: "tag1;tag2;tag3" → array
- Dates: Various formats → ISO 8601
- Paths: Relative → absolute paths
- Encoding: Auto-detect and convert
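A minimal sketch of two of these conversions (time strings to seconds, semicolon-delimited tags to arrays):

```typescript
// Sketch: convert "HH:MM:SS" or "MM:SS" strings to seconds, and split
// semicolon-delimited tag strings into clean arrays.
function parseTimestamp(value: string): number {
  const parts = value.split(':').map(Number);
  if (parts.some(Number.isNaN)) return Number(value) || 0; // already numeric seconds
  return parts.reduce((total, part) => total * 60 + part, 0); // "00:57:00" -> 3420
}

function parseTags(value: string): string[] {
  return value
    .split(';')
    .map((tag) => tag.trim())
    .filter((tag) => tag.length > 0);
}
```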
CSV Import Example
Title,Time,File Location,Notes,Categories
"Inception Dream Scene","00:57:00","/Movies/Inception.mp4","Multi-layer dream explanation","sci-fi;favorite;plot"
"Matrix Red Pill","30:47","/Movies/Matrix.mp4","Choice between realities","sci-fi;philosophy;pivotal"// Automatic field mapping detection
const mapping = {
"Title": "title",
"Time": "timestamp", // Converts HH:MM:SS to seconds
"File Location": "filepath",
"Notes": "description",
"Categories": "tags" // Splits on semicolon
};
[
{
"id": "bookmark_generated_001",
"title": "Inception Dream Scene",
"timestamp": 3420.0,
"filepath": "/Movies/Inception.mp4",
"description": "Multi-layer dream explanation",
"tags": ["sci-fi", "favorite", "plot"],
"createdAt": "2024-01-15T10:30:45.123Z"
}
]
CSV Best Practices
For best results, use descriptive column headers and consistent formatting. The system can handle various delimiters and encodings automatically.
Comprehensive Data Validation
Multi-layer validation ensures data quality and system stability:
interface ValidationPipeline {
// File-level validation
validateFileFormat(content: string): ValidationResult;
validateEncoding(content: string): ValidationResult;
validateSize(content: string): ValidationResult;
// Structure validation
validateSchema(data: any): ValidationResult;
validateFields(bookmarks: any[]): ValidationResult;
validateConsistency(bookmarks: BookmarkData[]): ValidationResult;
// Business logic validation
validateDuplicates(bookmarks: BookmarkData[]): ValidationResult;
validateReferences(bookmarks: BookmarkData[]): ValidationResult;
validateConstraints(bookmarks: BookmarkData[]): ValidationResult;
}
✅ Validation Passes
- Schema compliance - Correct data structure
- Required fields - All mandatory data present
- Data types - Correct type for each field
- Value ranges - Timestamps within media duration
- File references - Valid file paths
- Uniqueness - No duplicate IDs
⚠️ Common Issues
- Missing fields - Required data not provided
- Invalid timestamps - Negative or excessive values
- Malformed tags - Invalid characters or format
- Encoding issues - Character set problems
- Path problems - Inaccessible file locations
- Duplicate conflicts - ID or content collisions
⚠️ Validation Warning Example
Validation completed with warnings:
{
"success": true,
"importedCount": 18,
"skippedCount": 3,
"errorCount": 2,
"warnings": [
"Line 5: Timestamp 7200 exceeds media duration (6800s) - adjusted",
"Line 12: Invalid characters in tag 'sci-fi!' - cleaned to 'sci-fi'",
"Line 18: Missing description field - using empty string"
],
"errors": [
"Line 8: Required field 'title' is empty - bookmark skipped",
"Line 21: Invalid timestamp '-30' - bookmark skipped"
]
}
Robust Error Handling
Comprehensive error recovery and reporting system:
interface ErrorHandling {
// Error categories
criticalErrors: string[]; // Stop import immediately
recoverableErrors: string[]; // Skip item, continue import
warnings: string[]; // Import with modifications
// Recovery strategies
autoCorrection: boolean; // Attempt automatic fixes
skipOnError: boolean; // Continue despite errors
rollbackOnFailure: boolean; // Undo partial imports
// Reporting
detailedLog: boolean; // Full error context
progressTracking: boolean; // Real-time status updates
}
🔧 Error Recovery Strategies
Automatic fixes for common issues:
const corrections = {
// Timestamp normalization
negativeTimestamp: (ts) => Math.max(0, ts),
excessiveTimestamp: (ts, max) => Math.min(ts, max),
// Title cleanup
emptyTitle: (filepath) => extractTitleFromPath(filepath),
invalidChars: (title) => title.replace(/[^\w\s-]/g, ''),
// Tag normalization
invalidTags: (tags) => tags.filter(isValidTag).map(cleanTag),
duplicateTags: (tags) => [...new Set(tags)],
// Path resolution
relativePaths: (path, basePath) => resolve(basePath, path),
missingFiles: (path) => findSimilarFile(path)
};
Continue import with reduced functionality:
const degradationStrategies = {
// Partial data acceptance
missingOptionalFields: 'useDefaults',
invalidOptionalData: 'omitField',
// Fallback behaviors
unreadableFiles: 'markAsUnavailable',
invalidTimestamps: 'useMediaStart',
malformedTags: 'convertToString',
// Quality preservation
maintainStructure: true,
preserveOriginalData: true,
logAllChanges: true
};
Interactive error resolution:
interface UserInterventionOptions {
// Prompt for decisions
duplicateResolution: 'prompt' | 'auto';
pathResolution: 'prompt' | 'skip';
validationFailures: 'prompt' | 'fix';
// Batch operations
applyToAll: boolean;
rememberChoices: boolean;
// Preview and confirmation
showPreview: boolean;
requireConfirmation: boolean;
}
❌ Critical Error Example
Import failed with critical errors:
{
"success": false,
"importedCount": 0,
"errorCount": 1,
"errors": [
"File format validation failed: Invalid JSON structure at line 15, column 32",
"Critical schema mismatch: Required field 'bookmarks' not found in root object"
],
"recommendation": "Please verify the file format and try again with a valid bookmark export file"
}
Error Recovery
The system employs progressive error handling - attempting auto-correction first, then graceful degradation, and finally user intervention for critical issues.
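A minimal sketch of that three-stage flow; the callbacks are illustrative placeholders rather than plugin APIs:

```typescript
// Sketch: try an automatic fix first, fall back to a safe default, and only
// then ask the user. Each helper is supplied by the caller.
type Resolution<T> = { value: T; note: string };

async function resolveField<T>(
  raw: unknown,
  tryAutoCorrect: (raw: unknown) => T | null,
  applyDefault: () => T | null,
  promptUser: (raw: unknown) => Promise<T>
): Promise<Resolution<T>> {
  const corrected = tryAutoCorrect(raw);
  if (corrected !== null) return { value: corrected, note: 'auto-corrected' };

  const fallback = applyDefault();
  if (fallback !== null) return { value: fallback, note: 'default used' };

  return { value: await promptUser(raw), note: 'resolved by user' };
}
```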
Export Functionality
Supported Export Formats
The plugin supports multiple export formats:
- JSON - Full data with all metadata
- CSV - Spreadsheet-compatible format
- XML - Structured markup format (planned)
Exporting Bookmarks
1. Open Export Dialog
   - Click the "Export" button in the standalone window
   - Or use the plugin menu: "Export Bookmarks"
2. Choose Format - Select your preferred export format:
   - JSON for full data preservation
   - CSV for spreadsheet applications
3. Configure Options
   - All bookmarks or selected bookmarks only
   - Include descriptions and tags
   - Date range filtering
4. Save File - Choose the location and filename for your export
JSON Export Format
{
"version": "1.0",
"exportDate": "2024-01-01T12:00:00Z",
"bookmarks": [
{
"id": "bookmark-123",
"title": "Epic Scene",
"timestamp": 1847.5,
"filePath": "/Movies/Great.Movie.2023.mp4",
"description": "Amazing action sequence",
"tags": ["action", "favorite"],
"createdAt": "2024-01-01T10:30:00Z",
"updatedAt": "2024-01-01T10:30:00Z"
}
]
}
CSV Export Format
| Title | Timestamp | File Path | Description | Tags | Created |
|---|---|---|---|---|---|
| Epic Scene | 1847.5 | /Movies/Great.Movie.2023.mp4 | Amazing action sequence | action,favorite | 2024-01-01 10:30:00 |
Import Functionality
Supported Import Sources
- JSON files exported from this plugin
- CSV files with proper column headers
- Legacy bookmark formats (planned)
- Other media players (planned)
Importing Bookmarks
1. Open Import Dialog
   - Click the "Import" button in the standalone window
   - Or use the plugin menu: "Import Bookmarks"
2. Select File - Choose the file containing your bookmark data
3. Configure Import
   - Merge with existing bookmarks or replace all
   - Duplicate handling strategy
   - Field mapping for CSV files
4. Review and Import - Preview the data and confirm the import
Import Options
Duplicate Handling
When importing bookmarks that might duplicate existing ones, choose one of these strategies (a sketch follows the list):
- Skip duplicates - Keep existing bookmarks
- Update existing - Overwrite with imported data
- Keep both - Allow duplicates with different IDs
- Ask for each - Manual decision for each duplicate
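A minimal sketch of how these strategies could be applied to a single incoming bookmark (`regenerateId` and `askUser` are illustrative placeholders, not plugin APIs):

```typescript
// Sketch: resolve one duplicate according to the chosen strategy.
// Returning null means the existing bookmark is kept unchanged.
type DuplicateStrategy = 'skip' | 'update' | 'keep-both' | 'ask';

function resolveDuplicate(
  existing: BookmarkData,
  incoming: BookmarkData,
  strategy: DuplicateStrategy,
  regenerateId: (b: BookmarkData) => BookmarkData,
  askUser: (a: BookmarkData, b: BookmarkData) => Exclude<DuplicateStrategy, 'ask'>
): BookmarkData | null {
  switch (strategy) {
    case 'skip':
      return null; // keep the existing bookmark
    case 'update':
      return { ...existing, ...incoming, id: existing.id }; // overwrite with imported data
    case 'keep-both':
      return regenerateId(incoming); // import under a new ID
    case 'ask':
      return resolveDuplicate(existing, incoming, askUser(existing, incoming), regenerateId, askUser);
  }
}
```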
Field Mapping (CSV)
Map CSV columns to bookmark fields:
CSV Column → Bookmark Field
Title → title
Time → timestamp
Path → filePath
Notes → description
Categories → tags
Data Migration
Upgrading Data Format
The plugin automatically handles data format upgrades:
- Backup creation before migration
- Format conversion to new schema
- Validation of migrated data
- Rollback option if issues occur
Cross-Platform Compatibility
Bookmarks can be transferred between different systems (a path-normalization sketch follows the list):
- Path normalization for different file systems
- Relative path conversion when possible
- Missing file detection and reporting
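A minimal sketch of the path-normalization idea (the separator handling here is simplified and not the plugin's exact logic):

```typescript
// Sketch: unify separators and resolve relative paths against the current
// media directory when transferring bookmarks between systems.
function normalizePath(stored: string, mediaDirectory: string): string {
  const unified = stored.replace(/\\/g, '/'); // Windows -> POSIX separators
  const isAbsolute = unified.startsWith('/') || /^[A-Za-z]:\//.test(unified);
  if (isAbsolute) return unified;
  return `${mediaDirectory.replace(/\/+$/, '')}/${unified}`; // resolve relative paths
}
```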
Backup and Recovery
Automatic Backups
The plugin automatically creates backups (a rotation sketch follows the list):
- Daily backups of all bookmark data
- Pre-operation backups before major changes
- Backup rotation (keeps last 7 days)
- Manual backup option available
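A minimal sketch of the keep-last-7-days rotation policy, operating on backup filenames only (actual deletion is left to the plugin's file handling):

```typescript
// Sketch: select daily backups older than the retention window for deletion,
// based on the date embedded in names like "bookmarks-2024-01-01.json".
function backupsToDelete(filenames: string[], keepDays = 7, today = new Date()): string[] {
  const cutoff = new Date(today.getTime());
  cutoff.setDate(cutoff.getDate() - keepDays);

  return filenames.filter((name) => {
    const match = name.match(/(\d{4}-\d{2}-\d{2})/);
    if (!match) return false; // leave unrecognized files alone
    return new Date(`${match[1]}T00:00:00Z`) < cutoff;
  });
}
```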
Backup Location
Backups are stored in:
~/.config/iina/plugins/bookmarks/backups/
├── daily/
│ ├── bookmarks-2024-01-01.json
│ ├── bookmarks-2024-01-02.json
│ └── ...
└── manual/
└── user-backup-2024-01-01.json
Recovery Process
To restore from backup:
1. Access Recovery
   - Settings → Data Management → Restore from Backup
   - Or manually copy a backup file into the data directory
2. Select Backup - Choose from the available automatic or manual backups
3. Confirm Restore - Review the backup contents and confirm the restoration
Restoring from backup will replace all current bookmark data. Create a manual backup first if you want to preserve recent changes.
Data Security
Privacy Protection
- Local storage only - No cloud synchronization by default
- No external services - All processing happens locally
- File permissions - Restricted access to bookmark files
Data Validation
All imported data is validated (a sanitization sketch follows the list):
- Schema validation - Ensures proper data structure
- Type checking - Validates data types
- Sanitization - Removes potentially harmful content
- Integrity checks - Verifies data consistency
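A minimal sketch of the type-checking and sanitization steps (the cleaning rules shown are illustrative, not the plugin's exact rules):

```typescript
// Sketch: coerce imported values to the expected types, strip markup-like
// content from free-text fields, and clamp the timestamp to a sane range.
function sanitizeBookmark(raw: any): Partial<BookmarkData> {
  const clean = (text: unknown): string =>
    String(text ?? '').replace(/<[^>]*>/g, '').trim(); // drop HTML-like tags

  return {
    title: clean(raw.title),
    description: clean(raw.description),
    timestamp: Math.max(0, Number(raw.timestamp) || 0), // no negative positions
    tags: Array.isArray(raw.tags) ? raw.tags.map(clean).filter(Boolean) : [],
  };
}
```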
Advanced Data Operations
Bulk Operations
Perform operations on multiple bookmarks:
// Example: Export bookmarks by tag
const actionBookmarks = bookmarks.filter(b =>
b.tags.includes('action')
);
await exportBookmarks(actionBookmarks, 'json');
Data Analysis
The plugin can provide insights about your bookmarks (a tag-statistics sketch follows the list):
- Most bookmarked media files
- Tag usage statistics
- Bookmark frequency over time
- Average timestamp positions
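A minimal sketch of one such insight, tag usage counts, assuming the `BookmarkData` type used earlier in this document:

```typescript
// Sketch: count how often each tag is used and sort from most to least used.
function tagUsage(bookmarks: BookmarkData[]): Array<[string, number]> {
  const counts = new Map<string, number>();
  for (const bookmark of bookmarks) {
    for (const tag of bookmark.tags) {
      counts.set(tag, (counts.get(tag) ?? 0) + 1);
    }
  }
  return [...counts.entries()].sort((a, b) => b[1] - a[1]);
}
```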
Scripting Support
For advanced users, the plugin supports scripting:
// Export all bookmarks from last month
const lastMonth = new Date();
lastMonth.setMonth(lastMonth.getMonth() - 1);
const recentBookmarks = bookmarks.filter(b =>
new Date(b.createdAt) > lastMonth
);
await exportBookmarks(recentBookmarks, 'csv');
Troubleshooting
Common Import Issues
| Issue | Solution |
|---|---|
| "Invalid file format" | Ensure file is valid JSON/CSV |
| "Missing required fields" | Check column headers in CSV |
| "Duplicate bookmarks" | Choose duplicate handling strategy |
| "Large file timeout" | Split large files into smaller chunks |
Export Problems
| Issue | Solution |
|---|---|
| "Export failed" | Check available disk space |
| "Permission denied" | Verify write permissions |
| "Empty export" | Ensure bookmarks are selected |
| "Corrupted file" | Try different export format |
For persistent data issues, check the plugin logs in the IINA console or create a support ticket with relevant error messages.
Best Practices
Regular Backups
- Enable automatic backups in settings
- Create manual backups before major changes
- Test restore process periodically
- Store backups in multiple locations
Data Organization
- Use consistent tagging for better organization
- Clean up unused tags periodically
- Review and update bookmark descriptions
- Remove outdated bookmarks regularly
Performance Tips
- Export in batches for large collections
- Use appropriate formats for your use case
- Monitor file sizes to prevent performance issues
- Clean up backup files when no longer needed