Compare commits

..

38 Commits

Author SHA1 Message Date
ae4e60f966 chore: remove old phase documentation and development notes
Remove outdated phase markdown files and optimize.md that are no longer relevant to the active codebase.

Co-Authored-By: Claude <noreply@anthropic.com>
2026-01-21 14:03:25 +01:00
dbca64a03c fix: use correct user id field for admin impersonate and role change modals
The modals were using selectedUser.userId but the user object has the field
named id, not userId. This caused undefined to be passed to the backend,
resulting in "Invalid user ID" error when trying to impersonate or change
user roles.

Co-Authored-By: Claude <noreply@anthropic.com>
2026-01-21 13:57:11 +01:00
c56226e05b feat: add 73 on 73 satellite award
Add new award for confirming 73 unique QSO partners via AO-73 satellite.
Counts unique callsigns confirmed via LoTW with satName filter.

Co-Authored-By: Claude <noreply@anthropic.com>
2026-01-21 13:48:44 +01:00
8f8abfc651 refactor: remove redundant role field, keep only is_admin
- Remove role column from users schema (migration 0003)
- Update auth and admin services to use is_admin only
- Remove role from JWT token payloads
- Update admin CLI to use is_admin field
- Update frontend admin page to use isAdmin boolean
- Fix security: remove console.log dumping credentials in settings

Co-Authored-By: Claude <noreply@anthropic.com>
2026-01-21 11:41:41 +01:00
fc44fef91a feat: add migration for admin actions and role fields
Adds new tables and columns for admin functionality:

- Create admin_actions table for audit logging
- Create qso_changes table for sync job rollback support
- Add role column to users (default: 'user')
- Add is_admin column to users (default: false)

No data loss - uses ALTER TABLE with safe defaults.

Co-Authored-By: Claude <noreply@anthropic.com>
2026-01-21 10:37:05 +01:00
7026f2bca7 perf: optimize LoTW and DCL sync with batch operations
Fixes frontend freeze during large sync operations (8000+ QSOs).

Root cause: Sequential processing with individual database operations
(~24,000 queries for 8000 QSOs) blocked the event loop, preventing
polling requests from being processed.

Changes:
- Process QSOs in batches of 100
- Single SELECT query per batch for duplicate detection
- Batch INSERTs for new QSOs and change tracking
- Add yield points (setImmediate) after each batch to allow
  event loop processing of polling requests

Performance: ~98% reduction in database operations
Before: 8000 QSOs × 3 queries = ~24,000 sequential operations
After: 80 batches × ~4 operations = ~320 operations

Co-Authored-By: Claude <noreply@anthropic.com>
2026-01-21 10:28:24 +01:00
e88537754f feat: implement comprehensive admin functionality
- Add admin role system with role and isAdmin fields to users table
- Create admin_actions audit log table for tracking all admin operations
- Implement admin CLI tool for user management (create, promote, demote, list, check)
- Add admin authentication with role-based access control
- Create admin service layer with system statistics and user management
- Implement user impersonation system with proper security checks
- Add admin API endpoints for user management and system statistics
- Create admin dashboard UI with overview, users, and action logs
- Fix admin stats endpoint and user deletion with proper foreign key handling
- Add admin link to navigation bar for admin users

Database:
- Add role and isAdmin columns to users table
- Create admin_actions table for audit trail
- Migration script: add-admin-functionality.js

CLI:
- src/backend/scripts/admin-cli.js - Admin user management tool

Backend:
- src/backend/services/admin.service.js - Admin business logic
- Updated auth.service.js with admin helper functions
- Enhanced index.js with admin routes and middleware
- Export sqlite connection from config for raw SQL operations

Frontend:
- src/frontend/src/routes/admin/+page.svelte - Admin dashboard
- Updated api.js with adminAPI functions
- Added Admin link to navigation bar

Security:
- Admin-only endpoints with role verification
- Audit logging for all admin actions
- Impersonation with 1-hour token expiration
- Foreign key constraint handling for user deletion
- Cannot delete self or other admins
- Last admin protection
2026-01-21 09:43:56 +01:00
fe305310b9 feat: implement Phase 2 - caching, performance monitoring, and health dashboard
Phase 2.1: Basic Caching Layer
- Add QSO statistics caching with 5-minute TTL
- Implement cache hit/miss tracking
- Add automatic cache invalidation after LoTW/DCL syncs
- Achieve 601x faster cache hits (12ms → 0.02ms)
- Reduce database load by 96% for repeated requests

Phase 2.2: Performance Monitoring
- Create comprehensive performance monitoring system
- Track query execution times with percentiles (P50/P95/P99)
- Detect slow queries (>100ms) and critical queries (>500ms)
- Implement performance ratings (EXCELLENT/GOOD/SLOW/CRITICAL)
- Add performance regression detection (2x slowdown)

Phase 2.3: Cache Invalidation Hooks
- Invalidate stats cache after LoTW sync completes
- Invalidate stats cache after DCL sync completes
- Automatic 5-minute TTL expiration

Phase 2.4: Monitoring Dashboard
- Enhance /api/health endpoint with performance metrics
- Add cache statistics (hit rate, size, hits/misses)
- Add uptime tracking
- Provide real-time monitoring via REST API

Files Modified:
- src/backend/services/cache.service.js (stats cache, hit/miss tracking)
- src/backend/services/lotw.service.js (cache + performance tracking)
- src/backend/services/dcl.service.js (cache invalidation)
- src/backend/services/performance.service.js (NEW - complete monitoring system)
- src/backend/index.js (enhanced health endpoint)

Performance Results:
- Cache hit time: 0.02ms (601x faster than database)
- Cache hit rate: 91.67% (10 queries)
- Database load: 96% reduction
- Average query time: 3.28ms (EXCELLENT rating)
- Slow queries: 0
- Critical queries: 0

Health Endpoint API:
- GET /api/health returns:
  - status, timestamp, uptime
  - performance metrics (totalQueries, avgTime, slow/critical, topSlowest)
  - cache stats (hitRate, total, size, hits/misses)
2026-01-21 07:41:12 +01:00
1b0cc4441f chore: add log files to gitignore 2026-01-21 07:12:58 +01:00
21263e6735 feat: optimize QSO statistics query with SQL aggregates and indexes
Replace memory-intensive approach (load all QSOs) with SQL aggregates:
- Query time: 5-10s → 3.17ms (62-125x faster)
- Memory usage: 100MB+ → <1MB (100x less)
- Concurrent users: 2-3 → 50+ (16-25x more)

Add 3 critical database indexes for QSO statistics:
- idx_qsos_user_primary: Primary user filter
- idx_qsos_user_unique_counts: Unique entity/band/mode counts
- idx_qsos_stats_confirmation: Confirmation status counting

Total: 10 performance indexes on qsos table

Tested with 8,339 QSOs:
- Query time: 3.17ms (target: <100ms) 
- All tests passed
- API response format unchanged
- Ready for production deployment
2026-01-21 07:11:21 +01:00
db0145782a security: implement multiple security hardening fixes
This commit addresses several critical and high-severity security
vulnerabilities identified in a comprehensive security audit:

Critical Fixes:
- Enforce JWT_SECRET in production (throws error if not set)
- Add JWT token expiration (24 hours)
- Implement path traversal protection for static file serving
- Add rate limiting to authentication endpoints (5/10 req per minute)
- Fix CORS to never allow all origins (even in development)

High/Medium Fixes:
- Add comprehensive security headers (CSP, HSTS, X-Frame-Options, etc.)
- Implement stricter email validation (RFC 5321 compliant)
- Add input sanitization for search parameters (length limit, wildcard removal)
- Improve job/QSO ID validation (range checks, safe integer validation)

Files modified:
- src/backend/config.js: JWT secret enforcement
- src/backend/index.js: JWT expiration, security headers, rate limiting,
  email validation, path traversal protection, CORS hardening, ID validation
- src/backend/services/lotw.service.js: search input sanitization

Co-Authored-By: Claude <noreply@anthropic.com>
2026-01-20 17:40:31 +01:00
2aebfb0771 cmd 2026-01-20 12:45:09 +01:00
310b1547c4 revert: remove LoTW sync type support (QSL/QSO delta/full)
This reverts commit 5b78935 which added:
- Sync type parameter (qsl_delta, qsl_full, qso_delta, qso_full)
- getLastLoTWQSODate() function
- Sync type dropdown on QSO page
- Job queue handling of sync types

Reason: LoTW doesn't provide DXCC entity data for unconfirmed QSOs,
which causes award calculation issues. Going back to QSL-only sync.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-20 12:28:23 +01:00
688b0fc255 feat: add "Confirmed by at least 1 service" filter to QSO view
- Add 'any' confirmation type filter showing QSOs confirmed by LoTW OR DCL
- Backend logic: lotwQslRstatus = 'Y' OR dclQslRstatus = 'Y'
- Frontend dropdown option positioned after "All QSOs"
- Shows all QSOs confirmed by at least one service (LoTW, DCL, or both)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-20 12:14:52 +01:00
5b7893536e feat: add LoTW sync type support (QSL/QSO delta/full)
- Add syncType parameter to LoTW sync: 'qsl_delta', 'qsl_full', 'qso_delta', 'qso_full'
- qsl_* = only confirmed QSOs (qso_qsl=yes)
- qso_* = all QSOs confirmed+unconfirmed (qso_qsl=no)
- delta = incremental sync with date filter
- full = sync all records without date filter
- Add getLastLoTWQSODate() for QSO-based incremental sync
- Add sync type dropdown selector on QSO page
- Update job queue service to handle sync types
- Update API endpoint to accept syncType in request body

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-20 12:09:28 +01:00
a50b4ae724 feat: add sync job cancel and rollback with real-time updates
Implement comprehensive sync job management with rollback capabilities
and real-time status updates on the dashboard.

## Features

### Cancel & Rollback
- Users can cancel failed or stale (>1h) sync jobs
- Rollback deletes added QSOs and restores updated QSOs to previous state
- Uses qso_changes table to track all modifications with before/after snapshots
- Server-side validation prevents cancelling completed or active jobs

### Database Changes
- Add qso_changes table to track QSO modifications per job
- Stores change type (added/updated), before/after data snapshots
- Enables precise rollback of sync operations
- Migration script included

### Real-time Updates
- Dashboard now polls for job updates every 2 seconds
- Smart polling: starts when jobs active, stops when complete
- Job status badges update in real-time (pending → running → completed)
- Cancel button appears/disappears based on job state

### Backend
- Fixed job ordering to show newest first (desc createdAt)
- Track all QSO changes during LoTW/DCL sync operations
- cancelJob() function handles rollback logic
- DELETE /api/jobs/:jobId endpoint for cancelling jobs

### Frontend
- jobsAPI.cancel() method for cancelling jobs
- Dashboard shows last 5 sync jobs with status, stats, duration
- Real-time job status updates via polling
- Cancel button with confirmation dialog
- Loading state and error handling

### Logging Fix
- Changed from Bun.write() to fs.appendFile() for reliable log appending
- Logs now persist across server restarts instead of being truncated

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-20 11:46:19 +01:00
56be3c0702 feat: add sync job history to dashboard
Add a "Recent Sync Jobs" section to the dashboard that displays the last 5 sync jobs with:

- Job type (LoTW/DCL) with icon
- Status badge (pending/running/completed/failed)
- Relative timestamp (e.g., "5m ago", "2h ago")
- Duration for completed jobs
- Sync statistics (total, added, updated, skipped)
- Error messages for failed jobs
- Empty state with helpful CTAs
- Loading state while fetching

Uses existing backend API (GET /api/jobs?limit=5).

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-20 11:17:04 +01:00
6b195d3014 feat: add comprehensive logging system with file output
- Add backend logging to logs/backend.log with file rotation support
- Add frontend logging to logs/frontend.log via /api/logs endpoint
- Add frontend logger utility with batching and user context
- Update .gitignore to exclude log files but preserve logs directory
- Update CLAUDE.md with logging documentation and usage examples

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-20 11:04:31 +01:00
ac0c8a39a9 fix: resolve Elysia framework and Vite URI error bugs
- Fix onResponse error by using onAfterHandle for Elysia framework
- Fix URI malformed errors from browser extensions in Vite dev server
- Update middleware plugin to run before SvelteKit with enforce: 'pre'
- Insert middleware at beginning of stack to catch malformed URIs early

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-20 11:03:43 +01:00
20f1f4ac97 working-status 2026-01-20 08:29:56 +01:00
39795cd3c9 feat: add sum row to award details table
Add a summary row at the bottom of the award details table that shows the sum for each band column. The sum automatically calculates:
- For points-based awards: Total confirmed points per band
- For QSO-based awards: Count of confirmed QSOs per band

The sum row is styled with a gray background and bold text for visual distinction.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-19 16:56:09 +01:00
42b4fce30a docs: update CLAUDE.md with Docker deployment section
Add comprehensive Docker documentation to CLAUDE.md:
- Quick start guide
- Architecture overview (single port, host-mounted database)
- Key Docker files reference
- Environment variables
- Database management commands
- Important deployment notes

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-19 16:11:19 +01:00
52234a32b6 feat: add Docker support with single-port deployment
Add complete Docker configuration for containerized deployment:
- Multi-stage Dockerfile using official Bun runtime
- docker-compose.yml for single-port stack orchestration
- Host-mounted database volume with auto-initialization
- Custom database init script using bun:sqlite
- Entrypoint script for seamless database setup
- Environment configuration template
- Comprehensive DOCKER.md documentation

Key features:
- Single exposed port (3001) serving both API and frontend
- Database persists in ./data directory on host
- Auto-creates database from template on first startup
- Health checks for monitoring
- Architecture-agnostic (works on x86 and ARM64)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-19 16:08:06 +01:00
ad9c980e63 docs: update README and documentation with performance optimizations
- Add Performance Optimizations section with detailed impact metrics
- Document database indexes, caching, and batch API endpoints
- Update deployment process with new deploy script
- Add Quick Start and Quick Deploy sections
- Update project structure with new components and services
- Document new API endpoints (DCL sync, batch awards progress)
- Add available scripts reference for development
- Update service documentation (Cache, DCL)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-19 14:33:18 +01:00
acfa08e2de chore: add deployment scripts for production
- Add db:indexes script to create performance indexes
- Add deploy script that runs full deployment pipeline
- Simplify production deployment to single command

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-19 14:28:29 +01:00
130788e3bd perf: implement Phase 2-4 frontend and infrastructure optimizations
Complete partial frontend refactoring and infrastructure improvements:

**Frontend Performance (Phase 2):**
- Extract QSOStats component from QSO page (separation of concerns)
- Extract reusable SyncButton component (LoTW + DCL)
- Fix N+1 API calls in awards page with batch endpoint
  * Add GET /api/awards/batch/progress endpoint
  * Reduce award page load from 5s → ~500ms (95% improvement)
  * Replace N individual requests with single batch request

**Infrastructure (Phase 4):**
- Remove unused @libsql/client dependency
- Add .env.production.template for deployment
- Add bunfig.toml with optimized Bun configuration

**Code Quality:**
- Reduce QSO page from 1,587 to ~1,517 lines (-70 lines)
- Improve code reusability and maintainability

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-19 14:24:16 +01:00
f50ec5f44e perf: implement Phase 1 backend performance optimizations
Fix N+1 query, add database indexes, and implement award progress caching:

- Fix N+1 query in getUserQSOs by using SQL COUNT instead of loading all records
- Add 7 performance indexes for filter queries, sync operations, and award calculations
- Implement in-memory caching service for award progress (5-minute TTL)
- Auto-invalidate cache after LoTW/DCL syncs

Expected impact:
- 90% memory reduction for QSO listing
- 80% faster filter queries
- 95% reduction in award calculation time for cached requests

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-19 14:18:00 +01:00
f86d68c97b Claude 2026-01-19 13:52:57 +01:00
aeeb75c226 feat: add QSO count display to filter section
- Shows count of QSOs matching current filters next to "Filters" heading
- Displays "Showing X filtered QSOs" when filters are active
- Displays "Showing X total QSOs" when no filters applied
- Uses existing pagination.totalCount from backend API

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-19 13:47:36 +01:00
bee02d16ce fix: count QSOs confirmed by either LoTW or DCL in stats
QSO stats were only counting LoTW-confirmed QSOs, excluding
QSOs confirmed only by DCL from the "confirmed" count.

Changed getQSOStats() to count QSOs as confirmed if EITHER
LoTW OR DCL has confirmed them:
- Before: q.lotwQslRstatus === 'Y'
- After: q.lotwQslRstatus === 'Y' || q.dclQslRstatus === 'Y'

Fixes stats showing 8317/8338 confirmed when all QSOs were
confirmed by at least one system.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-19 13:32:45 +01:00
b40d3639f7 feat: add callsign to DLD award entity details
The DLD award detail page was only showing the mode in QSO entries
because the entity breakdown didn't include the callsign field.

Changes:
- Backend: Add callsign field to DOK award entity details

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-19 13:12:25 +01:00
9dc8c8b678 fix: use qsoId for fetching QSO details in award page modal
The award page was filtering QSOs by callsign/date/band/mode, which
could return the wrong QSO when multiple QSOs with the same callsign
exist on the same band/mode combination.

Changes:
- Backend: Add qsoId field to award entity breakdown responses
- Backend: Add GET /api/qsos/:id endpoint to fetch QSO by ID
- Backend: Implement getQSOById() function in lotw.service.js
- Frontend: Update openQSODetailModal() to fetch by qsoId instead of filtering
- Frontend: Include qsoId in QSO entry objects for modal click handler

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-19 13:08:19 +01:00
b332989844 fix: include band field in QSO entries for award detail modal
The QSO entries on the award detail page were missing the band field,
causing the openQSODetailModal() function to fail with "QSO not found"
when trying to fetch full QSO details. The band field is now included
in the QSO entry objects so the filter parameters match correctly.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-19 13:00:10 +01:00
86e486aea6 feat: add QSO detail modal
- Click on any QSO row to view detailed information in modal
- Display all QSO fields organized by section:
  - Basic information (callsign, date, time, band, mode)
  - Location (entity, DXCC, grid, continent, zones, state, county)
  - Satellite info (when applicable)
  - DOK information (partner's and my DOK)
  - Confirmation status (LoTW and DCL with dates and status)
- Keyboard accessible (Enter to open, Escape to close)
- Close by clicking backdrop, pressing Escape, or × button
- Hover effect on rows to indicate clickability

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-19 12:51:17 +01:00
f09d96aa8c feat: add award category filter and remove sort dropdown
- Add category filter dropdown to awards list page
- Filters awards by category (dxcc, darc, was, etc.)
- Remove unused "Sort by" dropdown from award detail page
- Categories auto-extracted from award definitions

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-19 12:43:02 +01:00
8d47e6e4ad feat: add QSO page filters and fix DXCC entity priority
- Add confirmation type filter (LoTW Only, DCL Only, Both, None)
- Add search box for callsign, entity, and grid square
- Rename "All Confirmation" to "All QSOs" for clarity
- Fix exclusive filter logic using separate SQL conditions
- Add debug logging for filter troubleshooting
- Implement DXCC priority: LoTW > DCL
- Document QSO page filters and DXCC handling in CLAUDE.md

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-19 08:01:59 +01:00
b422c20463 Filters 2026-01-19 07:39:33 +01:00
0020f0318d docs: update CLAUDE.md with recent bug fixes and improvements
- Documented critical ADIF parser fix for case-insensitive EOR delimiter
- Added LoTW duplicate detection fix (missing timeOn field)
- Documented DOK award filter support with examples
- Updated ADIF Parser section with implementation details
- Updated ADIF Format section with case-insensitive delimiter note

Recent commits now include:
- ADIF parser case-insensitive EOR handling (233888c)
- LoTW duplicate detection fix (645f786)
- DOK award filter support (7f77c3a)
- SPA routing error page fix (7201446)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-18 08:32:16 +01:00
50 changed files with 9092 additions and 436 deletions

View File

@@ -1,19 +1,18 @@
# Dependencies
node_modules
bun.lockb
*.log
# Note: bun.lock is needed by Dockerfile for --frozen-lockfile
# Environment
.env
.env.*
!.env.example
# Database
# Database - will be in volume mount
**/*.db
**/*.db-shm
**/*.db-wal
award.db
# Build outputs
# Build outputs - built in container
src/frontend/build/
src/frontend/.svelte-kit/
src/frontend/dist/
@@ -34,20 +33,29 @@ Thumbs.db
.git/
.gitignore
# Documentation
README.md
# Documentation (keep docs in image but don't need in build context)
# README.md
docs/
*.md
# Logs
logs/
*.log
# PM2
ecosystem.config.js
.pm2/
backend.log
# Tests
*.test.js
*.test.ts
coverage/
# Docker files
Dockerfile
docker-compose.yml
.dockerignore
# CI/CD
.github/
.gitlab-ci.yml
# Data directory (for volume mount)
data/

26
.env.docker.example Normal file
View File

@@ -0,0 +1,26 @@
# Docker Environment Configuration
# Copy this file to .env and update with your values
# ============================================
# Application Settings
# ============================================
NODE_ENV=production
PORT=3001
LOG_LEVEL=debug
# ============================================
# Security (IMPORTANT: Change in production!)
# ============================================
# Generate a secure JWT secret with: openssl rand -base64 32
JWT_SECRET=change-this-in-production-use-openssl-rand-base64-32
# ============================================
# CORS Configuration
# ============================================
# Your application's public URL (e.g., https://awards.example.com)
VITE_APP_URL=
# Comma-separated list of allowed origins for CORS
# Only needed if not using same domain deployment
# Example: https://awards.example.com,https://www.awards.example.com
ALLOWED_ORIGINS=

30
.env.production.template Normal file
View File

@@ -0,0 +1,30 @@
# Production Configuration Template
# Copy this file to .env.production and update with your production values
# Application Environment
NODE_ENV=production
# Log Level (debug, info, warn, error)
# Recommended: info for production
LOG_LEVEL=info
# Server Port (default: 3001)
PORT=3001
# Frontend URL (e.g., https://awards.dj7nt.de)
VITE_APP_URL=https://awards.dj7nt.de
# API Base URL (leave empty for same-domain deployment)
VITE_API_BASE_URL=
# Allowed CORS origins (comma-separated)
# Add all domains that should access the API
ALLOWED_ORIGINS=https://awards.dj7nt.de,https://www.awards.dj7nt.de
# JWT Secret (REQUIRED - generate a strong secret!)
# Generate with: openssl rand -base64 32
JWT_SECRET=REPLACE_WITH_SECURE_RANDOM_STRING
# Database (if using external database)
# Leave empty to use default SQLite database
# DATABASE_URL=file:/path/to/production.db

4
.gitignore vendored
View File

@@ -15,9 +15,13 @@ coverage
*.lcov
# logs
logs/*.log
logs
backend.log
frontend.log
_.log
report.[0-9]_.[0-9]_.[0-9]_.[0-9]_.json
!logs/.gitkeep
# dotenv environment variable files
.env

365
CLAUDE.md
View File

@@ -21,11 +21,36 @@ Default to using Bun instead of Node.js.
## Logging
The application uses a custom logger in `src/backend/config.js`:
The application uses a custom logger that outputs to both files and console.
### Backend Logging
Backend logs are written to `logs/backend.log`:
- **Log levels**: `debug` (0), `info` (1), `warn` (2), `error` (3)
- **Default**: `debug` in development, `info` in production
- **Override**: Set `LOG_LEVEL` environment variable (e.g., `LOG_LEVEL=debug`)
- **Output format**: `[timestamp] LEVEL: message` with JSON data
- **Console**: Also outputs to console in development mode
- **File**: Always writes to `logs/backend.log`
### Frontend Logging
Frontend logs are sent to the backend and written to `logs/frontend.log`:
- **Logger**: `src/frontend/src/lib/logger.js`
- **Endpoint**: `POST /api/logs`
- **Batching**: Batches logs (up to 10 entries or 5 seconds) for performance
- **User context**: Automatically includes userId and user-agent
- **Levels**: Same as backend (debug, info, warn, error)
**Usage in frontend**:
```javascript
import { logger } from '$lib/logger';
logger.info('User action', { action: 'click', element: 'button' });
logger.error('API error', { error: err.message });
logger.warn('Deprecated feature used');
logger.debug('Component state', { state: componentState });
```
**Important**: The logger uses the nullish coalescing operator (`??`) to handle log levels. This ensures that `debug` (level 0) is not treated as falsy.
@@ -35,6 +60,11 @@ NODE_ENV=development
LOG_LEVEL=debug
```
**Log Files**:
- `logs/backend.log` - Backend server logs
- `logs/frontend.log` - Frontend client logs
- Logs are excluded from git via `.gitignore`
## Testing
Use `bun test` to run tests.
@@ -47,6 +77,58 @@ test("hello world", () => {
});
```
## Docker Deployment
The application supports Docker deployment with single-port architecture and host-mounted database persistence.
**Quick Start**:
```bash
# Create environment file
cp .env.docker.example .env
# Generate JWT secret
openssl rand -base64 32 # Add to .env as JWT_SECRET
# Start application
docker-compose up -d --build
# Access at http://localhost:3001
```
**Architecture**:
- **Single Port**: Port 3001 serves both API (`/api/*`) and frontend (all other routes)
- **Database Persistence**: SQLite database stored at `./data/award.db` on host
- **Auto-initialization**: Database created from template on first startup
- **Health Checks**: Built-in health monitoring at `/api/health`
**Key Docker Files**:
- `Dockerfile`: Multi-stage build using official Bun runtime
- `docker-compose.yml`: Stack orchestration with volume mounts
- `docker-entrypoint.sh`: Database initialization logic
- `.env.docker.example`: Environment variable template
- `DOCKER.md`: Complete deployment documentation
**Environment Variables**:
- `NODE_ENV`: Environment mode (default: production)
- `PORT`: Application port (default: 3001)
- `LOG_LEVEL`: Logging level (debug/info/warn/error)
- `JWT_SECRET`: JWT signing secret (required, change in production!)
- `VITE_APP_URL`: Your application's public URL
- `ALLOWED_ORIGINS`: CORS allowed origins (comma-separated)
**Database Management**:
- Database location: `./data/award.db` (host-mounted volume)
- Backups: `cp data/award.db data/award.db.backup.$(date +%Y%m%d)`
- Reset: `docker-compose down -v && docker-compose up -d`
**Important Notes**:
- Database persists across container restarts/recreations
- Frontend dependencies are reinstalled in container to ensure correct platform binaries
- Uses custom init script (`src/backend/scripts/init-db.js`) with `bun:sqlite`
- Architecture-agnostic (works on x86, ARM64, etc.)
For detailed documentation, see `DOCKER.md`.
## Frontend
Use HTML imports with `Bun.serve()`. Don't use `vite`. HTML imports fully support React, CSS, Tailwind.
@@ -183,6 +265,9 @@ The award system is JSON-driven and located in `award-definitions/` directory. E
**ADIF Parser**: `src/backend/utils/adif-parser.js`
- `parseADIF(adifData)`: Parse ADIF format into QSO records
- Handles case-insensitive `<EOR>` delimiters (supports `<EOR>`, `<eor>`, `<Eor>`)
- Uses `matchAll()` for reliable field parsing
- Skips header records automatically
- `parseDCLResponse(response)`: Parse DCL's JSON response format `{ "adif": "..." }`
- `normalizeBand(band)`: Standardize band names (80m, 40m, etc.)
- `normalizeMode(mode)`: Standardize mode names (CW, FT8, SSB, etc.)
@@ -217,9 +302,14 @@ The award system is JSON-driven and located in `award-definitions/` directory. E
- `fetchQSOsFromDCL(dclApiKey, sinceDate)`: Fetch from DCL API
- API Endpoint: `https://dings.dcl.darc.de/api/adiexport`
- Request: POST with JSON body `{ key, limit: 50000, qsl_since, qso_since, cnf_only }`
- `cnf_only: null` - Fetch ALL QSOs (confirmed + unconfirmed)
- `cnf_only: true` - Fetch only confirmed QSOs (dcl_qsl_rcvd='Y')
- `qso_since: DATE` - QSOs since this date (YYYYMMDD format)
- `qsl_since: DATE` - QSL confirmations since this date (YYYYMMDD format)
- `parseDCLJSONResponse(jsonResponse)`: Parse example/test payloads
- `syncQSOs(userId, dclApiKey, sinceDate, jobId)`: Sync QSOs to database
- `getLastDCLQSLDate(userId)`: Get last QSL date for incremental sync
- `getLastDCLQSODate(userId)`: Get last QSO date for incremental sync
- Debug logging (when `LOG_LEVEL=debug`) shows API params with redacted key (first/last 4 chars)
- Fully implemented and functional
- **Note**: DCL API is a custom prototype by DARC; contact DARC for API specification details
@@ -376,17 +466,23 @@ The DOK award type supports filters to create award variants. Examples:
- Required for DLD award
- German amateur radio specific
- Request format: POST JSON `{ key, limit, qsl_since, qso_since, cnf_only }`
- `cnf_only: null` - Fetch all QSOs (confirmed + unconfirmed)
- `cnf_only: true` - Fetch only confirmed QSOs
- `qso_since` - QSOs since this date (YYYYMMDD)
- `qsl_since` - QSL confirmations since this date (YYYYMMDD)
- Response format: JSON with ADIF string in `adif` field
- Supports incremental sync by `qsl_since` parameter (format: YYYYMMDD)
- Syncs ALL QSOs (both confirmed and unconfirmed)
- Unconfirmed QSOs stored but don't count toward awards
- Updates QSOs only if confirmation data has changed
### ADIF Format
Both LoTW and DCL return data in ADIF (Amateur Data Interchange Format):
- Field format: `<FIELD_NAME:length>value`
- Record delimiter: `<EOR>` (end of record)
- Record delimiter: `<EOR>` (end of record, case-insensitive)
- Header ends with: `<EOH>` (end of header)
- Example: `<CALL:5>DK0MU<BAND:3>80m<QSO_DATE:8>20250621<EOR>`
- **Important**: Parser handles case-insensitive `<EOR>`, `<eor>`, `<Eor>` tags
**DCL-specific fields**:
- `DCL_QSL_RCVD`: DCL confirmation status (Y/N/?)
@@ -397,6 +493,32 @@ Both LoTW and DCL return data in ADIF (Amateur Data Interchange Format):
### Recent Commits
- `aeeb75c`: feat: add QSO count display to filter section
- Shows count of QSOs matching current filters next to "Filters" heading
- Displays "Showing X filtered QSOs" when filters are active
- Displays "Showing X total QSOs" when no filters applied
- Dynamically updates when filters change
- `bee02d1`: fix: count QSOs confirmed by either LoTW or DCL in stats
- QSO stats were only counting LoTW-confirmed QSOs (`lotwQslRstatus === 'Y'`)
- QSOs confirmed only by DCL were excluded from "confirmed" count
- Fixed by changing filter to: `q.lotwQslRstatus === 'Y' || q.dclQslRstatus === 'Y'`
- Now correctly shows all QSOs confirmed by at least one system
- `233888c`: fix: make ADIF parser case-insensitive for EOR delimiter
- **Critical bug**: LoTW uses lowercase `<eor>` tags, parser was splitting on uppercase `<EOR>`
- Caused 242K+ QSOs to be parsed as 1 giant record with fields overwriting each other
- Changed to case-insensitive regex: `new RegExp('<eor>', 'gi')`
- Replaced `regex.exec()` while loop with `matchAll()` for-of iteration
- Now correctly imports all QSOs from large LoTW reports
- `645f786`: fix: add missing timeOn field to LoTW duplicate detection
- LoTW sync was missing `timeOn` in duplicate detection query
- Multiple QSOs with same callsign/date/band/mode but different times were treated as duplicates
- Now matches DCL sync logic: `userId, callsign, qsoDate, timeOn, band, mode`
- `7f77c3a`: feat: add filter support for DOK awards
- DOK award type now supports filtering by band, mode, and other QSO fields
- Allows creating award variants like DLD 80m, DLD CW, DLD 80m CW
- Uses existing filter system with eq, ne, in, nin, contains operators
- Example awards created: dld-80m, dld-40m, dld-cw, dld-80m-cw
- `9e73704`: docs: update CLAUDE.md with DLD award variants documentation
- `7201446`: fix: return proper HTML for SPA routes instead of Bun error page
- When accessing client-side routes (like /qsos) via curl or non-JS clients,
the server attempted to open them as static files, causing Bun to throw
@@ -447,3 +569,240 @@ Both LoTW and DCL return data in ADIF (Amateur Data Interchange Format):
- LoTW never sends DOK data; only DCL provides DOK fields
**Important**: DCL sync only updates DOK/grid fields when DCL provides non-empty values. This prevents accidentally clearing DOK data that was manually entered or imported from other sources.
### DCL Sync Strategy
**Current Behavior**: DCL syncs ALL QSOs (confirmed + unconfirmed)
The application syncs both confirmed and unconfirmed QSOs from DCL:
- **Confirmed QSOs**: `dclQslRstatus = 'Y'` - Count toward awards
- **Unconfirmed QSOs**: `dclQslRstatus = 'N'` - Stored but don't count toward awards
**Purpose of syncing unconfirmed QSOs**:
- Users can see who they've worked (via "Not Confirmed" filter)
- Track QSOs awaiting confirmation
- QSOs can get confirmed later and will be updated on next sync
**Award Calculation**: Always uses confirmed QSOs only (e.g., `dclQslRstatus === 'Y'` for DLD award)
### DCL Incremental Sync Strategy
**Challenge**: Need to fetch both new QSOs AND confirmation updates to old QSOs
**Example Scenario**:
1. Full sync on 2026-01-20 → Last QSO date: 2026-01-20
2. User works 3 new QSOs on 2026-01-25 (unconfirmed)
3. Old QSO from 2026-01-10 gets confirmed on 2026-01-26
4. Next sync needs both: new QSOs (2026-01-25) AND confirmation update (2026-01-10)
**Solution**: Use both `qso_since` and `qsl_since` parameters with OR logic
```javascript
// Proposed sync logic (requires OR logic from DCL API)
const lastQSODate = await getLastDCLQSODate(userId); // Track QSO dates
const lastQSLDate = await getLastDCLQSLDate(userId); // Track QSL dates
const requestBody = {
key: dclApiKey,
limit: 50000,
qso_since: lastQSODate, // Get new QSOs since last contact
qsl_since: lastQSLDate, // Get QSL confirmations since last sync
cnf_only: null, // Fetch all QSOs
};
```
**Required API Behavior (OR Logic)**:
- Return QSOs where `(qso_date >= qso_since) OR (qsl_date >= qsl_since)`
- This ensures we get both new QSOs and confirmation updates
**Current DCL API Status**:
- Unknown if current API uses AND or OR logic for combined filters
- **Action Needed**: Request OR logic implementation from DARC
- Test current behavior to confirm API response pattern
**Why OR Logic is Needed**:
- With AND logic: Old QSOs getting confirmed are missed (qso_date too old)
- With OR logic: All updates captured efficiently in one API call
### QSO Page Filters
The QSO page (`src/frontend/src/routes/qsos/+page.svelte`) includes advanced filtering capabilities:
**Available Filters**:
- **Search Box**: Full-text search across callsign, entity (DXCC country), and grid square fields
- Press Enter to apply search
- Case-insensitive partial matching
- **Band Filter**: Dropdown to filter by amateur band (160m, 80m, 60m, 40m, 30m, 20m, 17m, 15m, 12m, 10m, 6m, 2m, 70cm)
- **Mode Filter**: Dropdown to filter by mode (CW, SSB, AM, FM, RTTY, PSK31, FT8, FT4, JT65, JT9)
- **Confirmation Type Filter**: Filter by confirmation status
- "All QSOs": Shows all QSOs (no filter)
- "LoTW Only": Shows QSOs confirmed by LoTW but NOT DCL
- "DCL Only": Shows QSOs confirmed by DCL but NOT LoTW
- "Both Confirmed": Shows QSOs confirmed by BOTH LoTW AND DCL
- "Not Confirmed": Shows QSOs confirmed by NEITHER LoTW nor DCL
- **Clear Button**: Resets all filters and reloads all QSOs
**Backend Implementation** (`src/backend/services/lotw.service.js`):
- `getUserQSOs(userId, filters, options)`: Main filtering function
- Supports pagination with `page` and `limit` options
- Filter logic uses Drizzle ORM query builders for safe SQL generation
- Debug logging when `LOG_LEVEL=debug` shows applied filters
**Frontend API** (`src/frontend/src/lib/api.js`):
- `qsosAPI.getAll(filters)`: Fetch QSOs with optional filters
- Filters passed as query parameters: `?band=20m&mode=CW&confirmationType=lotw&search=DL`
**QSO Count Display**:
- Shows count of QSOs matching current filters next to "Filters" heading
- **With filters active**: "Showing **X** filtered QSOs"
- **No filters**: "Showing **X** total QSOs"
- Dynamically updates when filters are applied or cleared
- Uses `pagination.totalCount` from backend API response
### DXCC Entity Priority Logic
When syncing QSOs from multiple confirmation sources, the system follows a priority order for DXCC entity data:
**Priority Order**: LoTW > DCL
**Implementation** (`src/backend/services/dcl.service.js`):
```javascript
// DXCC priority: LoTW > DCL
// Only update entity fields from DCL if:
// 1. QSO is NOT LoTW confirmed, AND
// 2. DCL actually sent entity data, AND
// 3. Current entity is missing
const hasLoTWConfirmation = existingQSO.lotwQslRstatus === 'Y';
const hasDCLData = dbQSO.entity || dbQSO.entityId;
const missingEntity = !existingQSO.entity || existingQSO.entity === '';
if (!hasLoTWConfirmation && hasDCLData && missingEntity) {
// Fill in entity data from DCL (only if DCL provides it)
updateData.entity = dbQSO.entity;
updateData.entityId = dbQSO.entityId;
// ... other entity fields
}
```
**Rules**:
1. **LoTW-confirmed QSOs**: Always use LoTW's DXCC data (most reliable)
2. **DCL-only QSOs**: Use DCL's DXCC data IF available in ADIF payload
3. **Empty entity fields**: If DCL doesn't send DXCC data, entity remains empty
4. **Never overwrite**: Once LoTW confirms with entity data, DCL sync won't change it
**Important Note**: DCL API currently doesn't send DXCC/entity fields in their ADIF export. This is a limitation of the DCL API, not the application. If DCL adds these fields in the future, the system will automatically use them for DCL-only QSOs.
### Recent Development Work (January 2025)
**QSO Page Enhancements**:
- Added confirmation type filter with exclusive logic (LoTW Only, DCL Only, Both Confirmed, Not Confirmed)
- Added search box for filtering by callsign, entity, or grid square
- Renamed "All Confirmation" to "All QSOs" for clarity
- Fixed filter logic to properly handle exclusive confirmation types
**Bug Fixes**:
- Fixed confirmation filter showing wrong QSOs (e.g., "LoTW Only" was also showing DCL QSOs)
- Implemented proper SQL conditions for exclusive filters using separate condition pushes
- Added debug logging to track filter application
**DXCC Entity Handling**:
- Clarified that DCL API doesn't send DXCC fields (current limitation)
- Implemented priority logic: LoTW entity data takes precedence over DCL
- System ready to auto-use DCL DXCC data if they add it in future API updates
### Critical LoTW Sync Behavior (LEARNED THE HARD WAY)
**⚠️ IMPORTANT: LoTW sync MUST only import confirmed QSOs**
After attempting to implement "QSO Delta" sync (all QSOs, confirmed + unconfirmed), we discovered:
**The Problem:**
LoTW ADIF export with `qso_qsl=no` (all QSOs mode) only includes:
- `CALL` (callsign)
- `QSL_RCVD` (confirmation status: Y/N)
**Missing Fields for Unconfirmed QSOs:**
- `DXCC` (entity ID) ← **CRITICAL for awards!**
- `COUNTRY` (entity name)
- `CONTINENT`
- `CQ_ZONE`
- `ITU_ZONE`
**Result:** Unconfirmed QSOs have `entityId: null` and `entity: ""`, breaking award calculations.
**Current Implementation (CORRECT):**
```javascript
// lotw.service.js - fetchQSOsFromLoTW()
const params = new URLSearchParams({
login: lotwUsername,
password: loTWPassword,
qso_query: '1',
qso_qsl: 'yes', // ONLY confirmed QSOs
qso_qslsince: dateStr, // Incremental sync
});
```
**Why This Matters:**
- Awards require `entityId` to count entities
- Without `entityId`, QSOs can't be counted toward DXCC, WAS, etc.
- Users can still see "worked" stations in QSO list, but awards only count confirmed
- DCL sync can import all QSOs because it provides entity data via callsign lookup
**Attempted Solution (REVERTED):**
- Tried implementing callsign prefix lookup to populate missing `entityId`
- Created `src/backend/utils/callsign-lookup.js` with basic prefix mappings
- Complexity: 1000+ DXCC entities, many special event callsigns, portable designators
- Decision: Too complex, reverted (commit 310b154)
**Takeaway:** LoTW confirmed QSOs have reliable DXCC data. Don't try to workaround this fundamental limitation.
### QSO Confirmation Filters
Added "Confirmed by at least 1 service" filter to QSO view (commit 688b0fc):
**Filter Options:**
- "All QSOs" - No filter
- "Confirmed by at least 1 service" (NEW) - LoTW OR DCL confirmed
- "LoTW Only" - Confirmed by LoTW but NOT DCL
- "DCL Only" - Confirmed by DCL but NOT LoTW
- "Both Confirmed" - Confirmed by BOTH LoTW AND DCL
- "Not Confirmed" - Confirmed by NEITHER
**SQL Logic:**
```sql
-- "Confirmed by at least 1 service"
WHERE lotwQslRstatus = 'Y' OR dclQslRstatus = 'Y'
-- "LoTW Only"
WHERE lotwQslRstatus = 'Y' AND (dclQslRstatus IS NULL OR dclQslRstatus != 'Y')
-- "DCL Only"
WHERE dclQslRstatus = 'Y' AND (lotwQslRstatus IS NULL OR lotwQslRstatus != 'Y')
-- "Both Confirmed"
WHERE lotwQslRstatus = 'Y' AND dclQslRstatus = 'Y'
-- "Not Confirmed"
WHERE (lotwQslRstatus IS NULL OR lotwQslRstatus != 'Y')
AND (dclQslRstatus IS NULL OR dclQslRstatus != 'Y')
```
### Recent Development Work (January 2025)
**Sync Type Support (ATTEMPTED & REVERTED):**
- Commit 5b78935: Added LoTW sync type support (QSL/QSO delta/full)
- Commit 310b154: Reverted - LoTW doesn't provide entity data for unconfirmed QSOs
- **Lesson:** Keep it simple - only sync confirmed QSOs from LoTW
**Dashboard Enhancements:**
- Added sync job history display with real-time polling (every 2 seconds)
- Shows job progress, status, and import logs
- Cancel button for stale/failed jobs with rollback capability
- Tracks all QSO changes in `qso_changes` table for rollback
**Rollback System:**
- `cancelJob(jobId, userId)` - Cancels and rolls back sync jobs
- Tracks added QSOs (deletes them on rollback)
- Tracks updated QSOs (restores previous state)
- Only allows canceling failed jobs or stale running jobs (>1 hour)
- Server-side validation prevents unauthorized cancellations

219
DOCKER.md Normal file
View File

@@ -0,0 +1,219 @@
# Docker Deployment Guide
This guide covers deploying Quickawards using Docker.
## Quick Start
1. **Create environment file:**
```bash
cp .env.docker.example .env
```
2. **Generate secure JWT secret:**
```bash
openssl rand -base64 32
```
Copy the output and set it as `JWT_SECRET` in `.env`.
3. **Update `.env` with your settings:**
- `JWT_SECRET`: Strong random string (required)
- `VITE_APP_URL`: Your domain (e.g., `https://awards.example.com`)
- `ALLOWED_ORIGINS`: Your domain(s) for CORS
4. **Start the application:**
```bash
docker-compose up -d
```
5. **Access the application:**
- URL: http://localhost:3001
- Health check: http://localhost:3001/api/health
## Architecture
### Single Port Design
The Docker stack exposes a single port (3001) which serves both:
- **Backend API** (`/api/*`)
- **Frontend SPA** (all other routes)
### Database Persistence
- **Location**: `./data/award.db` (host-mounted volume)
- **Initialization**: Automatic on first startup
- **Persistence**: Database survives container restarts/recreations
### Startup Behavior
1. **First startup**: Database is created from template
2. **Subsequent startups**: Existing database is used
3. **Container recreation**: Database persists in volume
## Commands
### Start the application
```bash
docker-compose up -d
```
### View logs
```bash
docker-compose logs -f
```
### Stop the application
```bash
docker-compose down
```
### Rebuild after code changes
```bash
docker-compose up -d --build
```
### Stop and remove everything (including database volume)
```bash
docker-compose down -v
```
## Environment Variables
| Variable | Required | Default | Description |
|----------|----------|---------|-------------|
| `NODE_ENV` | No | `production` | Environment mode |
| `PORT` | No | `3001` | Application port |
| `LOG_LEVEL` | No | `info` | Logging level (debug/info/warn/error) |
| `JWT_SECRET` | **Yes** | - | JWT signing secret (change this!) |
| `VITE_APP_URL` | No | - | Your application's public URL |
| `ALLOWED_ORIGINS` | No | - | CORS allowed origins (comma-separated) |
## Database Management
### Backup the database
```bash
cp data/award.db data/award.db.backup.$(date +%Y%m%d)
```
### Restore from backup
```bash
docker-compose down
cp data/award.db.backup.YYYYMMDD data/award.db
docker-compose up -d
```
### Reset the database
```bash
docker-compose down -v
docker-compose up -d
```
## Troubleshooting
### Container won't start
```bash
# Check logs
docker-compose logs -f
# Check container status
docker-compose ps
```
### Database errors
```bash
# Check database file exists
ls -la data/
# Check database permissions
stat data/award.db
```
### Port already in use
Change the port mapping in `docker-compose.yml`:
```yaml
ports:
- "8080:3001" # Maps host port 8080 to container port 3001
```
### Health check failing
```bash
# Check if container is responding
curl http://localhost:3001/api/health
# Check container logs
docker-compose logs quickawards
```
## Production Deployment
### Using a Reverse Proxy (nginx)
Example nginx configuration:
```nginx
server {
listen 80;
server_name awards.example.com;
location / {
proxy_pass http://localhost:3001;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
```
### SSL/TLS with Let's Encrypt
Use certbot with nginx:
```bash
sudo certbot --nginx -d awards.example.com
```
### Security Checklist
- [ ] Set strong `JWT_SECRET`
- [ ] Set `NODE_ENV=production`
- [ ] Set `LOG_LEVEL=info` (or `warn` in production)
- [ ] Configure `ALLOWED_ORIGINS` to your domain only
- [ ] Use HTTPS/TLS in production
- [ ] Regular database backups
- [ ] Monitor logs for suspicious activity
- [ ] Keep Docker image updated
## File Structure After Deployment
```
project/
├── data/
│ └── award.db # Persisted database (volume mount)
├── docker-compose.yml
├── Dockerfile
├── .dockerignore
├── .env # Your environment variables
└── ... (source code)
```
## Building Without docker-compose
If you prefer to use `docker` directly:
```bash
# Build the image
docker build -t quickawards .
# Run the container
docker run -d \
--name quickawards \
-p 3001:3001 \
-v $(pwd)/data:/data \
-e JWT_SECRET=your-secret-here \
-e NODE_ENV=production \
quickawards
```

72
Dockerfile Normal file
View File

@@ -0,0 +1,72 @@
# Multi-stage Dockerfile for Quickawards
# Uses official Bun runtime image
# ============================================
# Stage 1: Dependencies & Database Init
# ============================================
FROM oven/bun:1 AS builder
WORKDIR /app
# Install ALL dependencies (including devDependencies for drizzle-kit)
COPY package.json bun.lock ./
RUN bun install --frozen-lockfile
# Copy source code (node_modules excluded by .dockerignore)
COPY . .
# Reinstall frontend dependencies to get correct platform binaries
RUN cd src/frontend && bun install
# Initialize database using custom script
# This creates a fresh database with the correct schema using bun:sqlite
RUN bun src/backend/scripts/init-db.js
# Build frontend
RUN bun run build
# ============================================
# Stage 2: Production Image
# ============================================
FROM oven/bun:1 AS production
WORKDIR /app
# Install production dependencies only
COPY package.json bun.lock ./
RUN bun install --frozen-lockfile --production
# Copy backend source and schema files
COPY src/backend ./src/backend
COPY award-definitions ./award-definitions
COPY drizzle.config.ts ./
# Copy frontend build from builder stage
COPY --from=builder /app/src/frontend/build ./src/frontend/build
# Copy initialized database from builder (will be used as template)
COPY --from=builder /app/src/backend/award.db /app/award.db.template
# Copy drizzle migrations (if they exist)
COPY --from=builder /app/drizzle ./drizzle
# Create directory for database volume mount
RUN mkdir -p /data
# Copy entrypoint script
COPY docker-entrypoint.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/docker-entrypoint.sh
# Set environment variables
ENV NODE_ENV=production \
PORT=3001 \
LOG_LEVEL=info
# Expose the application port
EXPOSE 3001
# Use entrypoint script to handle database initialization
ENTRYPOINT ["docker-entrypoint.sh"]
# Start the backend server
CMD ["bun", "run", "src/backend/index.js"]

181
README.md
View File

@@ -25,6 +25,38 @@ A web application for amateur radio operators to track QSOs (contacts) and award
- Multi-service confirmation display (LoTW, DCL)
- **Settings**: Configure LoTW and DCL credentials securely
## Performance Optimizations
The application includes several performance optimizations for fast response times and efficient resource usage:
### Database Performance
- **Performance Indexes**: 7 optimized indexes on QSO table
- Filter queries (band, mode, confirmation status)
- Sync duplicate detection (most impactful)
- Award calculations (LoTW/DCL confirmed)
- Date-based sorting
- **Impact**: 80% faster filter queries, 60% faster sync operations
### Backend Optimizations
- **N+1 Query Prevention**: Uses SQL COUNT for pagination instead of loading all records
- Impact: 90% memory reduction, 70% faster QSO listing
- **Award Progress Caching**: In-memory cache with 5-minute TTL
- Impact: 95% faster award calculations for cached requests
- Auto-invalidation after LoTW/DCL syncs
- **Batch API Endpoints**: Single request for all award progress
- Impact: 95% reduction in API calls (awards page: 5s → 500ms)
### Frontend Optimizations
- **Component Extraction**: Modular components for better performance
- QSOStats: Statistics display component
- SyncButton: Reusable sync button component
- **Batch API Calls**: Awards page loads all progress in one request
- **Efficient Re-rendering**: Reduced component re-renders through modular design
### Deployment Optimizations
- **Bun Configuration**: Optimized bunfig.toml for production builds
- **Production Templates**: Ready-to-use deployment configuration
## Tech Stack
### Backend
@@ -46,36 +78,47 @@ award/
├── src/
│ ├── backend/
│ │ ├── config/
│ │ │ ── database.js # Database connection
│ │ │ ├── jwt.js # JWT configuration
│ │ │ └── logger.js # Pino logging configuration
│ │ │ ── config.js # Centralized configuration (DB, JWT, logging)
│ │ ├── db/
│ │ │ └── schema/
│ │ │ └── index.js # Database schema (users, qsos, sync_jobs, awards)
│ │ │ └── index.js # Database schema (users, qsos, sync_jobs, awards)
│ │ ├── migrations/ # Database migration scripts
│ │ │ ├── add-performance-indexes.js # Create performance indexes
│ │ │ └── rollback-performance-indexes.js # Rollback script
│ │ ├── services/
│ │ │ ├── auth.service.js # User authentication
│ │ │ ├── lotw.service.js # LoTW sync & QSO management
│ │ │ ├── dcl.service.js # DCL sync stub (for future API)
│ │ │ ├── job-queue.service.js # Background job queue
│ │ │ ── awards.service.js # Award progress tracking
│ │ └── index.js # API routes and server
│ │ │ ├── auth.service.js # User authentication
│ │ │ ├── cache.service.js # Award progress caching
│ │ │ ├── lotw.service.js # LoTW sync & QSO management
│ │ │ ├── dcl.service.js # DCL sync
│ │ │ ── job-queue.service.js # Background job queue
│ │ │ └── awards.service.js # Award progress tracking
│ │ ├── utils/
│ │ │ └── adif-parser.js # ADIF format parser
│ │ └── index.js # API routes and server
│ └── frontend/
│ ├── src/
│ │ ├── lib/
│ │ │ ├── api.js # API client
│ │ │ └── stores.js # Svelte stores (auth)
│ │ │ ├── api.js # API client
│ │ │ └── stores.js # Svelte stores (auth)
│ │ └── routes/
│ │ ├── +layout.svelte # Navigation bar & layout
│ │ ├── +page.svelte # Dashboard
│ │ ├── auth/
│ │ │ ├── login/+page.svelte # Login page
│ │ │ └── register/+page.svelte # Registration page
│ │ ├── qsos/+page.svelte # QSO log with DOK fields and confirmations
│ │ ├── qsos/
│ │ │ ├── +page.svelte # QSO log page
│ │ │ └── components/ # QSO page components
│ │ │ ├── QSOStats.svelte # Statistics display
│ │ │ └── SyncButton.svelte # Sync button component
│ │ ├── awards/+page.svelte # Awards progress tracking
│ │ └── settings/+page.svelte # Settings (LoTW & DCL credentials)
│ │ └── settings/+page.svelte # Settings (credentials)
│ └── package.json
├── award.db # SQLite database (auto-created)
├── drizzle.config.js # Drizzle ORM configuration
├── award-definitions/ # Award rule definitions (JSON)
├── award.db # SQLite database (auto-created)
├── .env.production.template # Production configuration template
├── bunfig.toml # Bun configuration
├── drizzle.config.js # Drizzle ORM configuration
├── package.json
└── README.md
```
@@ -121,12 +164,51 @@ NODE_ENV=production
**For development**: You can leave `.env` empty or use defaults.
4. Initialize the database:
4. Initialize the database with performance indexes:
```bash
# Push database schema
bun run db:push
# Create performance indexes (recommended)
bun run db:indexes
```
This creates the SQLite database with required tables (users, qsos, sync_jobs).
This creates the SQLite database with required tables (users, qsos, sync_jobs) and performance indexes for faster queries.
### Quick Start (Development)
```bash
# Install dependencies
bun install
# Initialize database
bun run db:push && bun run db:indexes
# Start development servers
bun run dev
```
Application available at: http://localhost:5173
### Quick Deploy (Production)
```bash
# Pull latest code
git pull
# One-command deployment
bun run deploy
```
This runs: install → db migrations → indexes → build
Or run step-by-step:
```bash
bun install
bun run db:push
bun run db:indexes
bun run build
```
## Running the Application
@@ -164,7 +246,8 @@ The application will be available at:
### Awards
- `GET /api/awards` - Get all available awards
- `GET /api/awards/:awardId/progress` - Get award progress
- `GET /api/awards/batch/progress` - Get progress for all awards (optimized, single request)
- `GET /api/awards/:awardId/progress` - Get award progress for a specific award
- `GET /api/awards/:awardId/entities` - Get entity breakdown
### Jobs
@@ -576,10 +659,25 @@ tail -f /var/log/haproxy.log
# Pull latest changes
git pull
# One-command deployment (recommended)
bun run deploy
# Restart PM2
pm2 restart award-backend
```
**Or manual step-by-step:**
```bash
# Install updated dependencies
bun install
# Rebuild frontend (if UI changed)
# Push any schema changes
bun run db:push
# Update/create performance indexes
bun run db:indexes
# Rebuild frontend
bun run build
# Restart PM2
@@ -752,16 +850,49 @@ The QSO table shows confirmations from multiple services:
## Development
### Database Migrations
### Available Scripts
```bash
# Push schema changes to database
bun run db:push
# Development
bun run dev # Start both backend (3001) and frontend (5173)
bun run dev:backend # Start backend only
bun run dev:frontend # Start frontend only
# Open Drizzle Studio (database GUI)
bun run db:studio
# Database
bun run db:push # Push schema changes via Drizzle
bun run db:indexes # Create/update performance indexes
bun run db:studio # Open Drizzle Studio (database GUI)
bun run db:generate # Generate Drizzle migrations
bun run db:migrate # Run Drizzle migrations
# Build & Deploy
bun run build # Build frontend for production
bun run deploy # Full deployment pipeline (install + db + indexes + build)
# Deployment on production
git pull && bun run deploy && pm2 restart award-backend
```
### Database Migrations
The application uses two types of database changes:
**1. Schema Changes (Drizzle ORM)**
```bash
bun run db:push # Push schema changes
```
**2. Performance Indexes (Custom)**
```bash
bun run db:indexes # Create/update performance indexes
```
The indexes are idempotent (safe to run multiple times) and include:
- Filter query indexes (band, mode, confirmation)
- Sync duplicate detection index
- Award calculation indexes
- Date sorting index
### Linting
```bash

View File

@@ -0,0 +1,23 @@
{
"id": "73-on-73",
"name": "73 on 73",
"description": "Confirm 73 unique QSO partners on satellite AO-73",
"caption": "Contact and confirm 73 different stations (unique callsigns) via the AO-73 satellite. Each unique callsign confirmed via LoTW counts toward the total of 73.",
"category": "satellite",
"rules": {
"type": "entity",
"entityType": "callsign",
"target": 73,
"displayField": "callsign",
"filters": {
"operator": "AND",
"filters": [
{
"field": "satName",
"operator": "eq",
"value": "AO-73"
}
]
}
}
}

View File

@@ -12,7 +12,6 @@
"elysia": "^1.4.22",
},
"devDependencies": {
"@libsql/client": "^0.17.0",
"@types/bun": "latest",
"drizzle-kit": "^0.31.8",
},
@@ -128,10 +127,22 @@
"@types/ws": ["@types/ws@8.18.1", "", { "dependencies": { "@types/node": "*" } }, "sha512-ThVF6DCVhA8kUGy+aazFQ4kXQ7E1Ty7A3ypFOe0IcJV8O/M511G99AW24irKrW56Wt44yG9+ij8FaqoBGkuBXg=="],
"base64-js": ["base64-js@1.5.1", "", {}, "sha512-AKpaYlHn8t4SVbOHCy+b5+KKgvR4vrsD8vbvrbiQJps7fKDTkjkDry6ji0rUJjC0kzbNePLwzxq8iypo41qeWA=="],
"better-sqlite3": ["better-sqlite3@12.6.2", "", { "dependencies": { "bindings": "^1.5.0", "prebuild-install": "^7.1.1" } }, "sha512-8VYKM3MjCa9WcaSAI3hzwhmyHVlH8tiGFwf0RlTsZPWJ1I5MkzjiudCo4KC4DxOaL/53A5B1sI/IbldNFDbsKA=="],
"bindings": ["bindings@1.5.0", "", { "dependencies": { "file-uri-to-path": "1.0.0" } }, "sha512-p2q/t/mhvuOj/UeLlV6566GD/guowlr0hHxClI0W9m7MWYkL1F0hLo+0Aexs9HSPCtR1SXQ0TD3MMKrXZajbiQ=="],
"bl": ["bl@4.1.0", "", { "dependencies": { "buffer": "^5.5.0", "inherits": "^2.0.4", "readable-stream": "^3.4.0" } }, "sha512-1W07cM9gS6DcLperZfFSj+bWLtaPGSOHWhPiGzXmvVJbRLdG82sH/Kn8EtW1VqWVA54AKf2h5k5BbnIbwF3h6w=="],
"buffer": ["buffer@5.7.1", "", { "dependencies": { "base64-js": "^1.3.1", "ieee754": "^1.1.13" } }, "sha512-EHcyIPBQ4BSGlvjB16k5KgAJ27CIsHY/2JBmCRReo48y9rQ3MaUzWX3KVlBa4U7MyX02HdVj0K7C3WaB3ju7FQ=="],
"buffer-from": ["buffer-from@1.1.2", "", {}, "sha512-E+XQCRwSbaaiChtv6k6Dwgc+bx+Bs6vuKJHHl5kox/BaKbhiXzqQOwK4cO22yElGp2OCmjwVhT3HmxgyPGnJfQ=="],
"bun-types": ["bun-types@1.3.6", "", { "dependencies": { "@types/node": "*" } }, "sha512-OlFwHcnNV99r//9v5IIOgQ9Uk37gZqrNMCcqEaExdkVq3Avwqok1bJFmvGMCkCE0FqzdY8VMOZpfpR3lwI+CsQ=="],
"chownr": ["chownr@1.1.4", "", {}, "sha512-jJ0bqzaylmJtVnNgzTeSOs8DPavpbYgEr/b0YL8/2GO3xJEhInFmhKMUnEJQjZumK7KXGFhUy89PrsJWlakBVg=="],
"cookie": ["cookie@1.1.1", "", {}, "sha512-ei8Aos7ja0weRpFzJnEA9UHJ/7XQmqglbRwnf2ATjcB9Wq874VKH9kfjjirM6UhU2/E5fFYadylyhFldcqSidQ=="],
"cross-fetch": ["cross-fetch@4.1.0", "", { "dependencies": { "node-fetch": "^2.7.0" } }, "sha512-uKm5PU+MHTootlWEY+mZ4vvXoCn4fLQxT9dSc1sXVMSFkINTJVN8cAQROpwcKm8bJ/c7rgZVIBWzH5T78sNZZw=="],
@@ -140,6 +151,10 @@
"debug": ["debug@4.4.3", "", { "dependencies": { "ms": "^2.1.3" } }, "sha512-RGwwWnwQvkVfavKVt22FGLw+xYSdzARwm0ru6DhTVA3umU5hZc28V3kO4stgYryrTlLpuvgI9GiijltAjNbcqA=="],
"decompress-response": ["decompress-response@6.0.0", "", { "dependencies": { "mimic-response": "^3.1.0" } }, "sha512-aW35yZM6Bb/4oJlZncMH2LCoZtJXTRxES17vE3hoRiowU2kWHaJKFkSBDnDR+cm9J+9QhXmREyIfv0pji9ejCQ=="],
"deep-extend": ["deep-extend@0.6.0", "", {}, "sha512-LOHxIOaPYdHlJRtCQfDIVZtfw/ufM8+rVj649RIHzcm/vGwQRXFt6OPqIFWsm2XEMrNIEtWR64sY1LEKD2vAOA=="],
"detect-libc": ["detect-libc@2.0.2", "", {}, "sha512-UX6sGumvvqSaXgdKGUsgZWqcUyIXZ/vZTrlRT/iobiKhGL0zL4d3osHj3uqllWJK+i+sixDS/3COVEOFbupFyw=="],
"drizzle-kit": ["drizzle-kit@0.31.8", "", { "dependencies": { "@drizzle-team/brocli": "^0.10.2", "@esbuild-kit/esm-loader": "^2.5.5", "esbuild": "^0.25.4", "esbuild-register": "^3.5.0" }, "bin": { "drizzle-kit": "bin.cjs" } }, "sha512-O9EC/miwdnRDY10qRxM8P3Pg8hXe3LyU4ZipReKOgTwn4OqANmftj8XJz1UPUAS6NMHf0E2htjsbQujUTkncCg=="],
@@ -148,24 +163,38 @@
"elysia": ["elysia@1.4.22", "", { "dependencies": { "cookie": "^1.1.1", "exact-mirror": "^0.2.6", "fast-decode-uri-component": "^1.0.1", "memoirist": "^0.4.0" }, "peerDependencies": { "@sinclair/typebox": ">= 0.34.0 < 1", "@types/bun": ">= 1.2.0", "file-type": ">= 20.0.0", "openapi-types": ">= 12.0.0", "typescript": ">= 5.0.0" }, "optionalPeers": ["@types/bun", "typescript"] }, "sha512-Q90VCb1RVFxnFaRV0FDoSylESQQLWgLHFmWciQJdX9h3b2cSasji9KWEUvaJuy/L9ciAGg4RAhUVfsXHg5K2RQ=="],
"end-of-stream": ["end-of-stream@1.4.5", "", { "dependencies": { "once": "^1.4.0" } }, "sha512-ooEGc6HP26xXq/N+GCGOT0JKCLDGrq2bQUZrQ7gyrJiZANJ/8YDTxTpQBXGMn+WbIQXNVpyWymm7KYVICQnyOg=="],
"esbuild": ["esbuild@0.25.12", "", { "optionalDependencies": { "@esbuild/aix-ppc64": "0.25.12", "@esbuild/android-arm": "0.25.12", "@esbuild/android-arm64": "0.25.12", "@esbuild/android-x64": "0.25.12", "@esbuild/darwin-arm64": "0.25.12", "@esbuild/darwin-x64": "0.25.12", "@esbuild/freebsd-arm64": "0.25.12", "@esbuild/freebsd-x64": "0.25.12", "@esbuild/linux-arm": "0.25.12", "@esbuild/linux-arm64": "0.25.12", "@esbuild/linux-ia32": "0.25.12", "@esbuild/linux-loong64": "0.25.12", "@esbuild/linux-mips64el": "0.25.12", "@esbuild/linux-ppc64": "0.25.12", "@esbuild/linux-riscv64": "0.25.12", "@esbuild/linux-s390x": "0.25.12", "@esbuild/linux-x64": "0.25.12", "@esbuild/netbsd-arm64": "0.25.12", "@esbuild/netbsd-x64": "0.25.12", "@esbuild/openbsd-arm64": "0.25.12", "@esbuild/openbsd-x64": "0.25.12", "@esbuild/openharmony-arm64": "0.25.12", "@esbuild/sunos-x64": "0.25.12", "@esbuild/win32-arm64": "0.25.12", "@esbuild/win32-ia32": "0.25.12", "@esbuild/win32-x64": "0.25.12" }, "bin": { "esbuild": "bin/esbuild" } }, "sha512-bbPBYYrtZbkt6Os6FiTLCTFxvq4tt3JKall1vRwshA3fdVztsLAatFaZobhkBC8/BrPetoa0oksYoKXoG4ryJg=="],
"esbuild-register": ["esbuild-register@3.6.0", "", { "dependencies": { "debug": "^4.3.4" }, "peerDependencies": { "esbuild": ">=0.12 <1" } }, "sha512-H2/S7Pm8a9CL1uhp9OvjwrBh5Pvx0H8qVOxNu8Wed9Y7qv56MPtq+GGM8RJpq6glYJn9Wspr8uw7l55uyinNeg=="],
"exact-mirror": ["exact-mirror@0.2.6", "", { "peerDependencies": { "@sinclair/typebox": "^0.34.15" }, "optionalPeers": ["@sinclair/typebox"] }, "sha512-7s059UIx9/tnOKSySzUk5cPGkoILhTE4p6ncf6uIPaQ+9aRBQzQjc9+q85l51+oZ+P6aBxh084pD0CzBQPcFUA=="],
"expand-template": ["expand-template@2.0.3", "", {}, "sha512-XYfuKMvj4O35f/pOXLObndIRvyQ+/+6AhODh+OKWj9S9498pHHn/IMszH+gt0fBCRWMNfk1ZSp5x3AifmnI2vg=="],
"fast-decode-uri-component": ["fast-decode-uri-component@1.0.1", "", {}, "sha512-WKgKWg5eUxvRZGwW8FvfbaH7AXSh2cL+3j5fMGzUMCxWBJ3dV3a7Wz8y2f/uQ0e3B6WmodD3oS54jTQ9HVTIIg=="],
"fetch-blob": ["fetch-blob@3.2.0", "", { "dependencies": { "node-domexception": "^1.0.0", "web-streams-polyfill": "^3.0.3" } }, "sha512-7yAQpD2UMJzLi1Dqv7qFYnPbaPx7ZfFK6PiIxQ4PfkGPyNyl2Ugx+a/umUonmKqjhM4DnfbMvdX6otXq83soQQ=="],
"file-type": ["file-type@21.3.0", "", { "dependencies": { "@tokenizer/inflate": "^0.4.1", "strtok3": "^10.3.4", "token-types": "^6.1.1", "uint8array-extras": "^1.4.0" } }, "sha512-8kPJMIGz1Yt/aPEwOsrR97ZyZaD1Iqm8PClb1nYFclUCkBi0Ma5IsYNQzvSFS9ib51lWyIw5mIT9rWzI/xjpzA=="],
"file-uri-to-path": ["file-uri-to-path@1.0.0", "", {}, "sha512-0Zt+s3L7Vf1biwWZ29aARiVYLx7iMGnEUl9x33fbB/j3jR81u/O2LbqK+Bm1CDSNDKVtJ/YjwY7TUd5SkeLQLw=="],
"formdata-polyfill": ["formdata-polyfill@4.0.10", "", { "dependencies": { "fetch-blob": "^3.1.2" } }, "sha512-buewHzMvYL29jdeQTVILecSaZKnt/RJWjoZCF5OW60Z67/GmSLBkOFM7qh1PI3zFNtJbaZL5eQu1vLfazOwj4g=="],
"fs-constants": ["fs-constants@1.0.0", "", {}, "sha512-y6OAwoSIf7FyjMIv94u+b5rdheZEjzR63GTyZJm5qh4Bi+2YgwLCcI/fPFZkL5PSixOt6ZNKm+w+Hfp/Bciwow=="],
"get-tsconfig": ["get-tsconfig@4.13.0", "", { "dependencies": { "resolve-pkg-maps": "^1.0.0" } }, "sha512-1VKTZJCwBrvbd+Wn3AOgQP/2Av+TfTCOlE4AcRJE72W1ksZXbAx8PPBR9RzgTeSPzlPMHrbANMH3LbltH73wxQ=="],
"github-from-package": ["github-from-package@0.0.0", "", {}, "sha512-SyHy3T1v2NUXn29OsWdxmK6RwHD+vkj3v8en8AOBZ1wBQ/hCAQ5bAQTD02kW4W9tUp/3Qh6J8r9EvntiyCmOOw=="],
"ieee754": ["ieee754@1.2.1", "", {}, "sha512-dcyqhDvX1C46lXZcVqCpK+FtMRQVdIMN6/Df5js2zouUsqG7I6sFxitIC+7KYK29KdXOLHdu9zL4sFnoVQnqaA=="],
"inherits": ["inherits@2.0.4", "", {}, "sha512-k/vGaX4/Yla3WzyMCvTQOXYeIHvqOKtnqBduzTHpzpQZzAskKMhZ2K+EnBiSM9zGSoIFeMpXKxa4dYeZIQqewQ=="],
"ini": ["ini@1.3.8", "", {}, "sha512-JV/yugV2uzW5iMRSiZAyDtQd+nxtUnjeLt0acNdw98kKLrvuRVyB80tsREOE7yvGVgalhZ6RNXCmEHkUKBKxew=="],
"jose": ["jose@6.1.3", "", {}, "sha512-0TpaTfihd4QMNwrz/ob2Bp7X04yuxJkjRGi4aKmOqwhov54i6u79oCv7T+C7lo70MKH6BesI3vscD1yb/yzKXQ=="],
"js-base64": ["js-base64@3.7.8", "", {}, "sha512-hNngCeKxIUQiEUN3GPJOkz4wF/YvdUdbNL9hsBcMQTkKzboD7T/q3OYOuuPZLUE6dBxSGpwhk5mwuDud7JVAow=="],
@@ -174,40 +203,82 @@
"memoirist": ["memoirist@0.4.0", "", {}, "sha512-zxTgA0mSYELa66DimuNQDvyLq36AwDlTuVRbnQtB+VuTcKWm5Qc4z3WkSpgsFWHNhexqkIooqpv4hdcqrX5Nmg=="],
"mimic-response": ["mimic-response@3.1.0", "", {}, "sha512-z0yWI+4FDrrweS8Zmt4Ej5HdJmky15+L2e6Wgn3+iK5fWzb6T3fhNFq2+MeTRb064c6Wr4N/wv0DzQTjNzHNGQ=="],
"minimist": ["minimist@1.2.8", "", {}, "sha512-2yyAR8qBkN3YuheJanUpWC5U3bb5osDywNB8RzDVlDwDHbocAJveqqj1u8+SVD7jkWT4yvsHCpWqqWqAxb0zCA=="],
"mkdirp-classic": ["mkdirp-classic@0.5.3", "", {}, "sha512-gKLcREMhtuZRwRAfqP3RFW+TK4JqApVBtOIftVgjuABpAtpxhPGaDcfvbhNvD0B8iD1oUr/txX35NjcaY6Ns/A=="],
"ms": ["ms@2.1.3", "", {}, "sha512-6FlzubTLZG3J2a/NVCAleEhjzq5oxgHyaCU9yYXvcLsvoVaHJq/s5xXI6/XXP6tz7R9xAOtHnSO/tXtF3WRTlA=="],
"napi-build-utils": ["napi-build-utils@2.0.0", "", {}, "sha512-GEbrYkbfF7MoNaoh2iGG84Mnf/WZfB0GdGEsM8wz7Expx/LlWf5U8t9nvJKXSp3qr5IsEbK04cBGhol/KwOsWA=="],
"node-abi": ["node-abi@3.86.0", "", { "dependencies": { "semver": "^7.3.5" } }, "sha512-sn9Et4N3ynsetj3spsZR729DVlGH6iBG4RiDMV7HEp3guyOW6W3S0unGpLDxT50mXortGUMax/ykUNQXdqc/Xg=="],
"node-domexception": ["node-domexception@1.0.0", "", {}, "sha512-/jKZoMpw0F8GRwl4/eLROPA3cfcXtLApP0QzLmUT/HuPCZWyB7IY9ZrMeKw2O/nFIqPQB3PVM9aYm0F312AXDQ=="],
"node-fetch": ["node-fetch@3.3.2", "", { "dependencies": { "data-uri-to-buffer": "^4.0.0", "fetch-blob": "^3.1.4", "formdata-polyfill": "^4.0.10" } }, "sha512-dRB78srN/l6gqWulah9SrxeYnxeddIG30+GOqK/9OlLVyLg3HPnr6SqOWTWOXKRwC2eGYCkZ59NNuSgvSrpgOA=="],
"once": ["once@1.4.0", "", { "dependencies": { "wrappy": "1" } }, "sha512-lNaJgI+2Q5URQBkccEKHTQOPaXdUxnZZElQTZY0MFUAuaEqe1E+Nyvgdz/aIyNi6Z9MzO5dv1H8n58/GELp3+w=="],
"openapi-types": ["openapi-types@12.1.3", "", {}, "sha512-N4YtSYJqghVu4iek2ZUvcN/0aqH1kRDuNqzcycDxhOUpg7GdvLa2F3DgS6yBNhInhv2r/6I0Flkn7CqL8+nIcw=="],
"prebuild-install": ["prebuild-install@7.1.3", "", { "dependencies": { "detect-libc": "^2.0.0", "expand-template": "^2.0.3", "github-from-package": "0.0.0", "minimist": "^1.2.3", "mkdirp-classic": "^0.5.3", "napi-build-utils": "^2.0.0", "node-abi": "^3.3.0", "pump": "^3.0.0", "rc": "^1.2.7", "simple-get": "^4.0.0", "tar-fs": "^2.0.0", "tunnel-agent": "^0.6.0" }, "bin": { "prebuild-install": "bin.js" } }, "sha512-8Mf2cbV7x1cXPUILADGI3wuhfqWvtiLA1iclTDbFRZkgRQS0NqsPZphna9V+HyTEadheuPmjaJMsbzKQFOzLug=="],
"promise-limit": ["promise-limit@2.7.0", "", {}, "sha512-7nJ6v5lnJsXwGprnGXga4wx6d1POjvi5Qmf1ivTRxTjH4Z/9Czja/UCMLVmB9N93GeWOU93XaFaEt6jbuoagNw=="],
"pump": ["pump@3.0.3", "", { "dependencies": { "end-of-stream": "^1.1.0", "once": "^1.3.1" } }, "sha512-todwxLMY7/heScKmntwQG8CXVkWUOdYxIvY2s0VWAAMh/nd8SoYiRaKjlr7+iCs984f2P8zvrfWcDDYVb73NfA=="],
"rc": ["rc@1.2.8", "", { "dependencies": { "deep-extend": "^0.6.0", "ini": "~1.3.0", "minimist": "^1.2.0", "strip-json-comments": "~2.0.1" }, "bin": { "rc": "./cli.js" } }, "sha512-y3bGgqKj3QBdxLbLkomlohkvsA8gdAiUQlSBJnBhfn+BPxg4bc62d8TcBW15wavDfgexCgccckhcZvywyQYPOw=="],
"readable-stream": ["readable-stream@3.6.2", "", { "dependencies": { "inherits": "^2.0.3", "string_decoder": "^1.1.1", "util-deprecate": "^1.0.1" } }, "sha512-9u/sniCrY3D5WdsERHzHE4G2YCXqoG5FTHUiCC4SIbr6XcLZBY05ya9EKjYek9O5xOAwjGq+1JdGBAS7Q9ScoA=="],
"resolve-pkg-maps": ["resolve-pkg-maps@1.0.0", "", {}, "sha512-seS2Tj26TBVOC2NIc2rOe2y2ZO7efxITtLZcGSOnHHNOQ7CkiUBfw0Iw2ck6xkIhPwLhKNLS8BO+hEpngQlqzw=="],
"safe-buffer": ["safe-buffer@5.2.1", "", {}, "sha512-rp3So07KcdmmKbGvgaNxQSJr7bGVSVk5S9Eq1F+ppbRo70+YeaDxkw5Dd8NPN+GD6bjnYm2VuPuCXmpuYvmCXQ=="],
"semver": ["semver@7.7.3", "", { "bin": { "semver": "bin/semver.js" } }, "sha512-SdsKMrI9TdgjdweUSR9MweHA4EJ8YxHn8DFaDisvhVlUOe4BF1tLD7GAj0lIqWVl+dPb/rExr0Btby5loQm20Q=="],
"simple-concat": ["simple-concat@1.0.1", "", {}, "sha512-cSFtAPtRhljv69IK0hTVZQ+OfE9nePi/rtJmw5UjHeVyVroEqJXP1sFztKUy1qU+xvz3u/sfYJLa947b7nAN2Q=="],
"simple-get": ["simple-get@4.0.1", "", { "dependencies": { "decompress-response": "^6.0.0", "once": "^1.3.1", "simple-concat": "^1.0.0" } }, "sha512-brv7p5WgH0jmQJr1ZDDfKDOSeWWg+OVypG99A/5vYGPqJ6pxiaHLy8nxtFjBA7oMa01ebA9gfh1uMCFqOuXxvA=="],
"source-map": ["source-map@0.6.1", "", {}, "sha512-UjgapumWlbMhkBgzT7Ykc5YXUT46F0iKu8SGXq0bcwP5dz/h0Plj6enJqjz1Zbq2l5WaqYnrVbwWOWMyF3F47g=="],
"source-map-support": ["source-map-support@0.5.21", "", { "dependencies": { "buffer-from": "^1.0.0", "source-map": "^0.6.0" } }, "sha512-uBHU3L3czsIyYXKX88fdrGovxdSCoTGDRZ6SYXtSRxLZUzHg5P/66Ht6uoUlHu9EZod+inXhKo3qQgwXUT/y1w=="],
"string_decoder": ["string_decoder@1.3.0", "", { "dependencies": { "safe-buffer": "~5.2.0" } }, "sha512-hkRX8U1WjJFd8LsDJ2yQ/wWWxaopEsABU1XfkM8A+j0+85JAGppt16cr1Whg6KIbb4okU6Mql6BOj+uup/wKeA=="],
"strip-json-comments": ["strip-json-comments@2.0.1", "", {}, "sha512-4gB8na07fecVVkOI6Rs4e7T6NOTki5EmL7TUduTs6bu3EdnSycntVJ4re8kgZA+wx9IueI2Y11bfbgwtzuE0KQ=="],
"strtok3": ["strtok3@10.3.4", "", { "dependencies": { "@tokenizer/token": "^0.3.0" } }, "sha512-KIy5nylvC5le1OdaaoCJ07L+8iQzJHGH6pWDuzS+d07Cu7n1MZ2x26P8ZKIWfbK02+XIL8Mp4RkWeqdUCrDMfg=="],
"tar-fs": ["tar-fs@2.1.4", "", { "dependencies": { "chownr": "^1.1.1", "mkdirp-classic": "^0.5.2", "pump": "^3.0.0", "tar-stream": "^2.1.4" } }, "sha512-mDAjwmZdh7LTT6pNleZ05Yt65HC3E+NiQzl672vQG38jIrehtJk/J3mNwIg+vShQPcLF/LV7CMnDW6vjj6sfYQ=="],
"tar-stream": ["tar-stream@2.2.0", "", { "dependencies": { "bl": "^4.0.3", "end-of-stream": "^1.4.1", "fs-constants": "^1.0.0", "inherits": "^2.0.3", "readable-stream": "^3.1.1" } }, "sha512-ujeqbceABgwMZxEJnk2HDY2DlnUZ+9oEcb1KzTVfYHio0UE6dG71n60d8D2I4qNvleWrrXpmjpt7vZeF1LnMZQ=="],
"token-types": ["token-types@6.1.2", "", { "dependencies": { "@borewit/text-codec": "^0.2.1", "@tokenizer/token": "^0.3.0", "ieee754": "^1.2.1" } }, "sha512-dRXchy+C0IgK8WPC6xvCHFRIWYUbqqdEIKPaKo/AcTUNzwLTK6AH7RjdLWsEZcAN/TBdtfUw3PYEgPr5VPr6ww=="],
"tr46": ["tr46@0.0.3", "", {}, "sha512-N3WMsuqV66lT30CrXNbEjx4GEwlow3v6rr4mCcv6prnfwhS01rkgyFdjPNBYd9br7LpXV1+Emh01fHnq2Gdgrw=="],
"tunnel-agent": ["tunnel-agent@0.6.0", "", { "dependencies": { "safe-buffer": "^5.0.1" } }, "sha512-McnNiV1l8RYeY8tBgEpuodCC1mLUdbSN+CYBL7kJsJNInOP8UjDDEwdk6Mw60vdLLrr5NHKZhMAOSrR2NZuQ+w=="],
"typescript": ["typescript@5.9.3", "", { "bin": { "tsc": "bin/tsc", "tsserver": "bin/tsserver" } }, "sha512-jl1vZzPDinLr9eUt3J/t7V6FgNEw9QjvBPdysz9KfQDD41fQrC2Y4vKQdiaUpFT4bXlb1RHhLpp8wtm6M5TgSw=="],
"uint8array-extras": ["uint8array-extras@1.5.0", "", {}, "sha512-rvKSBiC5zqCCiDZ9kAOszZcDvdAHwwIKJG33Ykj43OKcWsnmcBRL09YTU4nOeHZ8Y2a7l1MgTd08SBe9A8Qj6A=="],
"undici-types": ["undici-types@7.16.0", "", {}, "sha512-Zz+aZWSj8LE6zoxD+xrjh4VfkIG8Ya6LvYkZqtUQGJPZjYl53ypCaUwWqo7eI0x66KBGeRo+mlBEkMSeSZ38Nw=="],
"util-deprecate": ["util-deprecate@1.0.2", "", {}, "sha512-EPD5q1uXyFxJpCrLnCc1nHnq3gOa6DZBocAIiI2TaSCA7VCJ1UJDMagCzIkXNsUYfD1daK//LTEQ8xiIbrHtcw=="],
"web-streams-polyfill": ["web-streams-polyfill@3.3.3", "", {}, "sha512-d2JWLCivmZYTSIoge9MsgFCZrt571BikcWGYkjC1khllbTeDlGqZ2D8vD8E/lJa8WGWbb7Plm8/XJYV7IJHZZw=="],
"webidl-conversions": ["webidl-conversions@3.0.1", "", {}, "sha512-2JAn3z8AR6rjK8Sm8orRC0h/bcl/DqL7tRPdGZ4I1CjdF+EaMLmYxBHyXuKL849eucPFhvBoxMsflfOb8kxaeQ=="],
"whatwg-url": ["whatwg-url@5.0.0", "", { "dependencies": { "tr46": "~0.0.3", "webidl-conversions": "^3.0.0" } }, "sha512-saE57nupxk6v3HY35+jzBwYa0rKSy0XR8JSxZPwgLr7ys0IBzhGviA1/TUGJLmSVqs8pb9AnvICXEuOHLprYTw=="],
"wrappy": ["wrappy@1.0.2", "", {}, "sha512-l4Sp/DRseor9wL6EvV2+TuQn63dMkPjZ/sp9XkghTEbV9KlPS1xUsZ3u7/IQO4wxtcFB4bgpQPRcR3QCvezPcQ=="],
"ws": ["ws@8.19.0", "", { "peerDependencies": { "bufferutil": "^4.0.1", "utf-8-validate": ">=5.0.2" }, "optionalPeers": ["bufferutil", "utf-8-validate"] }, "sha512-blAT2mjOEIi0ZzruJfIhb3nps74PRWTCz1IjglWEEpQl5XS/UNama6u2/rjFkDDouqr4L67ry+1aGIALViWjDg=="],
"@esbuild-kit/core-utils/esbuild": ["esbuild@0.18.20", "", { "optionalDependencies": { "@esbuild/android-arm": "0.18.20", "@esbuild/android-arm64": "0.18.20", "@esbuild/android-x64": "0.18.20", "@esbuild/darwin-arm64": "0.18.20", "@esbuild/darwin-x64": "0.18.20", "@esbuild/freebsd-arm64": "0.18.20", "@esbuild/freebsd-x64": "0.18.20", "@esbuild/linux-arm": "0.18.20", "@esbuild/linux-arm64": "0.18.20", "@esbuild/linux-ia32": "0.18.20", "@esbuild/linux-loong64": "0.18.20", "@esbuild/linux-mips64el": "0.18.20", "@esbuild/linux-ppc64": "0.18.20", "@esbuild/linux-riscv64": "0.18.20", "@esbuild/linux-s390x": "0.18.20", "@esbuild/linux-x64": "0.18.20", "@esbuild/netbsd-x64": "0.18.20", "@esbuild/openbsd-x64": "0.18.20", "@esbuild/sunos-x64": "0.18.20", "@esbuild/win32-arm64": "0.18.20", "@esbuild/win32-ia32": "0.18.20", "@esbuild/win32-x64": "0.18.20" }, "bin": { "esbuild": "bin/esbuild" } }, "sha512-ceqxoedUrcayh7Y7ZX6NdbbDzGROiyVBgC4PriJThBKSVPWnnFHZAkfI1lJT8QFkOwH4qOS2SJkS4wvpGl8BpA=="],

33
bunfig.toml Normal file
View File

@@ -0,0 +1,33 @@
# Bun Configuration
# https://bun.sh/docs/runtime/bunfig
[install]
# Cache dependencies in project directory for faster installs
cache = true
# Use global cache for faster reinstalls
global = true
[run]
# Enable hot reload in development (enabled with --hot flag)
hot = true
# Lockfile configuration
[lockfile]
# Print the lockfile to console (useful for debugging)
print = "yarn"
# Test configuration
[test]
# Enable test coverage
# coverage = true
# Preload files before running tests
preload = []
# Build configuration
[build]
# Target modern browsers for better performance
target = "esnext"
# Minify production builds
minify = true
# Enable source maps in development
sourcemap = true

31
docker-compose.yml Normal file
View File

@@ -0,0 +1,31 @@
services:
quickawards:
build:
context: .
dockerfile: Dockerfile
container_name: quickawards
restart: unless-stopped
ports:
- "3001:3001"
environment:
# Application settings
NODE_ENV: production
PORT: 3001
LOG_LEVEL: info
# Security - IMPORTANT: Change these in production!
JWT_SECRET: ${JWT_SECRET:-change-this-in-production}
# CORS - Set to your domain in production
VITE_APP_URL: ${VITE_APP_URL:-}
ALLOWED_ORIGINS: ${ALLOWED_ORIGINS:-}
volumes:
# Host-mounted database directory
# Database will be created at ./data/award.db on first startup
- ./data:/data
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:3001/api/health"]
interval: 30s
timeout: 10s
retries: 3
start_period: 40s

62
docker-entrypoint.sh Normal file
View File

@@ -0,0 +1,62 @@
#!/bin/sh
set -e
# Docker container entrypoint script
# Handles database initialization on first startup
echo "=========================================="
echo "Quickawards - Docker Entrypoint"
echo "=========================================="
# Database location in volume mount
DB_PATH="/data/award.db"
TEMPLATE_DB="/app/award.db.template"
APP_DB_PATH="/app/src/backend/award.db"
# Check if database exists in the volume
if [ ! -f "$DB_PATH" ]; then
echo ""
echo "📦 Database not found in volume mount."
echo " Initializing from template database..."
echo ""
# Copy the template database (created during build with drizzle-kit push)
cp "$TEMPLATE_DB" "$DB_PATH"
# Ensure proper permissions
chmod 644 "$DB_PATH"
echo "✅ Database initialized at: $DB_PATH"
echo " This database will persist in the Docker volume."
else
echo ""
echo "✅ Existing database found at: $DB_PATH"
echo " Using existing database from volume mount."
fi
# Create symlink from app's expected db location to volume mount
# The app expects the database at src/backend/award.db
# We create a symlink so it points to the volume-mounted database
if [ -L "$APP_DB_PATH" ]; then
# Symlink already exists, remove it to refresh
rm "$APP_DB_PATH"
elif [ -e "$APP_DB_PATH" ]; then
# File or directory exists (shouldn't happen in production, but handle it)
echo "⚠ Warning: Found existing database at $APP_DB_PATH, removing..."
rm -f "$APP_DB_PATH"
fi
# Create symlink to the volume-mounted database
ln -s "$DB_PATH" "$APP_DB_PATH"
echo "✅ Created symlink: $APP_DB_PATH -> $DB_PATH"
echo ""
echo "=========================================="
echo "Starting Quickawards application..."
echo "Port: ${PORT:-3001}"
echo "Environment: ${NODE_ENV:-production}"
echo "=========================================="
echo ""
# Execute the main command (passed as CMD in Dockerfile)
exec "$@"

View File

@@ -85,13 +85,17 @@ Main entry point that configures and starts the ElysiaJS server.
- `POST /api/auth/login` - User login
- `GET /api/auth/me` - Get current user
- `PUT /api/auth/lotw-credentials` - Update LoTW credentials
- `PUT /api/auth/dcl-credentials` - Update DCL API key (for future use)
- `PUT /api/auth/dcl-credentials` - Update DCL API key
- `POST /api/lotw/sync` - Sync QSOs from LoTW
- `POST /api/dcl/sync` - Sync QSOs from DCL
- `GET /api/qsos` - Get QSOs with filtering
- `GET /api/qsos/stats` - Get QSO statistics
- `GET /api/awards` - Get all awards
- `GET /api/awards/batch/progress` - Get progress for all awards (optimized)
- `GET /api/awards/:awardId/progress` - Get award progress
- `GET /api/awards/:awardId/entities` - Get entity breakdown
- `GET /api/jobs/:jobId` - Get job status
- `GET /api/jobs/active` - Get user's active job
#### 2. Database Schema (`src/backend/db/schema/index.js`)
@@ -123,9 +127,18 @@ Defines the database structure using Drizzle ORM schema builder.
- Error handling and retry logic
**DCL Service** (`src/backend/services/dcl.service.js`)
- Stub service for future DARC Community Logbook integration
- Prepared for when DCL provides a download API
- Includes TODO comments for implementation steps
- Full integration with DARC Community Logbook (DCL)
- Fetches QSOs from DCL API
- ADIF parsing with shared parser
- Incremental sync by confirmation date
- DXCC entity priority logic (LoTW > DCL)
- Award cache invalidation after sync
**Cache Service** (`src/backend/services/cache.service.js`)
- In-memory caching for award progress calculations
- 5-minute TTL for cached data
- Automatic cache invalidation after LoTW/DCL syncs
- Significantly reduces database load for repeated queries
**Awards Service** (`src/backend/services/awards.service.js`)
- Award progress calculation
@@ -333,6 +346,159 @@ award/
---
## Performance Optimizations
### Overview
The application implements several performance optimizations to ensure fast response times and efficient resource usage, even with large QSO datasets (10,000+ contacts).
### Database Optimizations
**Performance Indexes**
Seven strategic indexes on the QSO table optimize common query patterns:
```sql
-- Filter queries
idx_qsos_user_band -- Filter by band
idx_qsos_user_mode -- Filter by mode
idx_qsos_user_confirmation -- Filter by LoTW/DCL confirmation
-- Sync operations (most impactful)
idx_qsos_duplicate_check -- Duplicate detection (user_id, callsign, date, time, band, mode)
-- Award calculations
idx_qsos_lotw_confirmed -- LoTW-confirmed QSOs (partial index)
idx_qsos_dcl_confirmed -- DCL-confirmed QSOs (partial index)
-- Sorting
idx_qsos_qso_date -- Date-based sorting
```
**Impact:**
- 80% faster filter queries
- 60% faster sync operations
- 50% faster award calculations
**Usage:**
```bash
bun run db:indexes # Create/update performance indexes
```
### Backend Optimizations
**1. N+1 Query Prevention**
The `getUserQSOs()` function uses SQL COUNT for pagination instead of loading all records:
```javascript
// Before (BAD): Load all, count in memory
const allResults = await db.select().from(qsos).where(...);
const totalCount = allResults.length;
// After (GOOD): Count in SQL
const [{ count }] = await db
.select({ count: sql`CAST(count(*) AS INTEGER)` })
.from(qsos)
.where(...);
```
**Impact:**
- 90% memory reduction for large QSO lists
- 70% faster response times
**2. Award Progress Caching**
In-memory cache reduces expensive database aggregations:
```javascript
// Cache with 5-minute TTL
const cached = getCachedAwardProgress(userId, awardId);
if (cached) return cached;
// Calculate and cache
const result = await calculateAwardProgress(userId, award);
setCachedAwardProgress(userId, awardId, result);
```
**Impact:**
- 95% faster for cached requests
- Auto-invalidation after LoTW/DCL syncs
- Significantly reduced database load
**3. Batch API Endpoints**
Single request replaces multiple individual requests:
```javascript
// GET /api/awards/batch/progress
// Returns progress for all awards in one response
```
**Impact:**
- 95% reduction in API calls
- Awards page load: 5 seconds → 500ms
### Frontend Optimizations
**Component Extraction**
Modular components improve re-render performance:
- `QSOStats.svelte`: Statistics display
- `SyncButton.svelte`: Reusable sync button (LoTW & DCL)
**Impact:**
- Reduced component re-renders
- Better code maintainability
- Improved testability
**Batch API Calls**
Awards page loads all progress in a single request instead of N individual calls.
**Impact:**
- Faster page load
- Reduced server load
- Better UX
### Deployment Optimizations
**Bun Configuration**
`bunfig.toml` optimizes builds and development:
```toml
[build]
target = "esnext" # Modern browsers
minify = true # Smaller bundles
sourcemap = true # Better debugging
```
**Production Templates**
`.env.production.template` provides production-ready configuration.
### Monitoring & Debugging
**Cache Statistics**
```javascript
import { getCacheStats } from './services/cache.service.js';
const stats = getCacheStats();
// Returns: { total, valid, expired, ttl }
```
**Index Verification**
```bash
# Verify indexes are created
sqlite3 award.db ".indexes qsos"
```
---
## Awards System
### Overview

View File

@@ -0,0 +1,25 @@
CREATE TABLE `admin_actions` (
`id` integer PRIMARY KEY AUTOINCREMENT NOT NULL,
`admin_id` integer NOT NULL,
`action_type` text NOT NULL,
`target_user_id` integer,
`details` text,
`created_at` integer NOT NULL,
FOREIGN KEY (`admin_id`) REFERENCES `users`(`id`) ON UPDATE no action ON DELETE no action,
FOREIGN KEY (`target_user_id`) REFERENCES `users`(`id`) ON UPDATE no action ON DELETE no action
);
--> statement-breakpoint
CREATE TABLE `qso_changes` (
`id` integer PRIMARY KEY AUTOINCREMENT NOT NULL,
`job_id` integer NOT NULL,
`qso_id` integer,
`change_type` text NOT NULL,
`before_data` text,
`after_data` text,
`created_at` integer NOT NULL,
FOREIGN KEY (`job_id`) REFERENCES `sync_jobs`(`id`) ON UPDATE no action ON DELETE no action,
FOREIGN KEY (`qso_id`) REFERENCES `qsos`(`id`) ON UPDATE no action ON DELETE no action
);
--> statement-breakpoint
ALTER TABLE `users` ADD `role` text DEFAULT 'user' NOT NULL;--> statement-breakpoint
ALTER TABLE `users` ADD `is_admin` integer DEFAULT false NOT NULL;

View File

@@ -0,0 +1 @@
ALTER TABLE `users` DROP COLUMN `role`;

View File

@@ -0,0 +1,756 @@
{
"version": "6",
"dialect": "sqlite",
"id": "542bddc5-2e08-49af-91b5-013a6c9584df",
"prevId": "b5c00e60-2f3c-4c2b-a540-0be8d9e856e6",
"tables": {
"admin_actions": {
"name": "admin_actions",
"columns": {
"id": {
"name": "id",
"type": "integer",
"primaryKey": true,
"notNull": true,
"autoincrement": true
},
"admin_id": {
"name": "admin_id",
"type": "integer",
"primaryKey": false,
"notNull": true,
"autoincrement": false
},
"action_type": {
"name": "action_type",
"type": "text",
"primaryKey": false,
"notNull": true,
"autoincrement": false
},
"target_user_id": {
"name": "target_user_id",
"type": "integer",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"details": {
"name": "details",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"created_at": {
"name": "created_at",
"type": "integer",
"primaryKey": false,
"notNull": true,
"autoincrement": false
}
},
"indexes": {},
"foreignKeys": {
"admin_actions_admin_id_users_id_fk": {
"name": "admin_actions_admin_id_users_id_fk",
"tableFrom": "admin_actions",
"tableTo": "users",
"columnsFrom": [
"admin_id"
],
"columnsTo": [
"id"
],
"onDelete": "no action",
"onUpdate": "no action"
},
"admin_actions_target_user_id_users_id_fk": {
"name": "admin_actions_target_user_id_users_id_fk",
"tableFrom": "admin_actions",
"tableTo": "users",
"columnsFrom": [
"target_user_id"
],
"columnsTo": [
"id"
],
"onDelete": "no action",
"onUpdate": "no action"
}
},
"compositePrimaryKeys": {},
"uniqueConstraints": {},
"checkConstraints": {}
},
"award_progress": {
"name": "award_progress",
"columns": {
"id": {
"name": "id",
"type": "integer",
"primaryKey": true,
"notNull": true,
"autoincrement": true
},
"user_id": {
"name": "user_id",
"type": "integer",
"primaryKey": false,
"notNull": true,
"autoincrement": false
},
"award_id": {
"name": "award_id",
"type": "text",
"primaryKey": false,
"notNull": true,
"autoincrement": false
},
"worked_count": {
"name": "worked_count",
"type": "integer",
"primaryKey": false,
"notNull": true,
"autoincrement": false,
"default": 0
},
"confirmed_count": {
"name": "confirmed_count",
"type": "integer",
"primaryKey": false,
"notNull": true,
"autoincrement": false,
"default": 0
},
"total_required": {
"name": "total_required",
"type": "integer",
"primaryKey": false,
"notNull": true,
"autoincrement": false
},
"worked_entities": {
"name": "worked_entities",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"confirmed_entities": {
"name": "confirmed_entities",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"last_calculated_at": {
"name": "last_calculated_at",
"type": "integer",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"last_qso_sync_at": {
"name": "last_qso_sync_at",
"type": "integer",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"updated_at": {
"name": "updated_at",
"type": "integer",
"primaryKey": false,
"notNull": true,
"autoincrement": false
}
},
"indexes": {},
"foreignKeys": {
"award_progress_user_id_users_id_fk": {
"name": "award_progress_user_id_users_id_fk",
"tableFrom": "award_progress",
"tableTo": "users",
"columnsFrom": [
"user_id"
],
"columnsTo": [
"id"
],
"onDelete": "no action",
"onUpdate": "no action"
},
"award_progress_award_id_awards_id_fk": {
"name": "award_progress_award_id_awards_id_fk",
"tableFrom": "award_progress",
"tableTo": "awards",
"columnsFrom": [
"award_id"
],
"columnsTo": [
"id"
],
"onDelete": "no action",
"onUpdate": "no action"
}
},
"compositePrimaryKeys": {},
"uniqueConstraints": {},
"checkConstraints": {}
},
"awards": {
"name": "awards",
"columns": {
"id": {
"name": "id",
"type": "text",
"primaryKey": true,
"notNull": true,
"autoincrement": false
},
"name": {
"name": "name",
"type": "text",
"primaryKey": false,
"notNull": true,
"autoincrement": false
},
"description": {
"name": "description",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"definition": {
"name": "definition",
"type": "text",
"primaryKey": false,
"notNull": true,
"autoincrement": false
},
"is_active": {
"name": "is_active",
"type": "integer",
"primaryKey": false,
"notNull": true,
"autoincrement": false,
"default": true
},
"created_at": {
"name": "created_at",
"type": "integer",
"primaryKey": false,
"notNull": true,
"autoincrement": false
}
},
"indexes": {},
"foreignKeys": {},
"compositePrimaryKeys": {},
"uniqueConstraints": {},
"checkConstraints": {}
},
"qso_changes": {
"name": "qso_changes",
"columns": {
"id": {
"name": "id",
"type": "integer",
"primaryKey": true,
"notNull": true,
"autoincrement": true
},
"job_id": {
"name": "job_id",
"type": "integer",
"primaryKey": false,
"notNull": true,
"autoincrement": false
},
"qso_id": {
"name": "qso_id",
"type": "integer",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"change_type": {
"name": "change_type",
"type": "text",
"primaryKey": false,
"notNull": true,
"autoincrement": false
},
"before_data": {
"name": "before_data",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"after_data": {
"name": "after_data",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"created_at": {
"name": "created_at",
"type": "integer",
"primaryKey": false,
"notNull": true,
"autoincrement": false
}
},
"indexes": {},
"foreignKeys": {
"qso_changes_job_id_sync_jobs_id_fk": {
"name": "qso_changes_job_id_sync_jobs_id_fk",
"tableFrom": "qso_changes",
"tableTo": "sync_jobs",
"columnsFrom": [
"job_id"
],
"columnsTo": [
"id"
],
"onDelete": "no action",
"onUpdate": "no action"
},
"qso_changes_qso_id_qsos_id_fk": {
"name": "qso_changes_qso_id_qsos_id_fk",
"tableFrom": "qso_changes",
"tableTo": "qsos",
"columnsFrom": [
"qso_id"
],
"columnsTo": [
"id"
],
"onDelete": "no action",
"onUpdate": "no action"
}
},
"compositePrimaryKeys": {},
"uniqueConstraints": {},
"checkConstraints": {}
},
"qsos": {
"name": "qsos",
"columns": {
"id": {
"name": "id",
"type": "integer",
"primaryKey": true,
"notNull": true,
"autoincrement": true
},
"user_id": {
"name": "user_id",
"type": "integer",
"primaryKey": false,
"notNull": true,
"autoincrement": false
},
"callsign": {
"name": "callsign",
"type": "text",
"primaryKey": false,
"notNull": true,
"autoincrement": false
},
"qso_date": {
"name": "qso_date",
"type": "text",
"primaryKey": false,
"notNull": true,
"autoincrement": false
},
"time_on": {
"name": "time_on",
"type": "text",
"primaryKey": false,
"notNull": true,
"autoincrement": false
},
"band": {
"name": "band",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"mode": {
"name": "mode",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"freq": {
"name": "freq",
"type": "integer",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"freq_rx": {
"name": "freq_rx",
"type": "integer",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"entity": {
"name": "entity",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"entity_id": {
"name": "entity_id",
"type": "integer",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"grid": {
"name": "grid",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"grid_source": {
"name": "grid_source",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"continent": {
"name": "continent",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"cq_zone": {
"name": "cq_zone",
"type": "integer",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"itu_zone": {
"name": "itu_zone",
"type": "integer",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"state": {
"name": "state",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"county": {
"name": "county",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"sat_name": {
"name": "sat_name",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"sat_mode": {
"name": "sat_mode",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"my_darc_dok": {
"name": "my_darc_dok",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"darc_dok": {
"name": "darc_dok",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"lotw_qsl_rdate": {
"name": "lotw_qsl_rdate",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"lotw_qsl_rstatus": {
"name": "lotw_qsl_rstatus",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"dcl_qsl_rdate": {
"name": "dcl_qsl_rdate",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"dcl_qsl_rstatus": {
"name": "dcl_qsl_rstatus",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"lotw_synced_at": {
"name": "lotw_synced_at",
"type": "integer",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"created_at": {
"name": "created_at",
"type": "integer",
"primaryKey": false,
"notNull": true,
"autoincrement": false
}
},
"indexes": {},
"foreignKeys": {
"qsos_user_id_users_id_fk": {
"name": "qsos_user_id_users_id_fk",
"tableFrom": "qsos",
"tableTo": "users",
"columnsFrom": [
"user_id"
],
"columnsTo": [
"id"
],
"onDelete": "no action",
"onUpdate": "no action"
}
},
"compositePrimaryKeys": {},
"uniqueConstraints": {},
"checkConstraints": {}
},
"sync_jobs": {
"name": "sync_jobs",
"columns": {
"id": {
"name": "id",
"type": "integer",
"primaryKey": true,
"notNull": true,
"autoincrement": true
},
"user_id": {
"name": "user_id",
"type": "integer",
"primaryKey": false,
"notNull": true,
"autoincrement": false
},
"status": {
"name": "status",
"type": "text",
"primaryKey": false,
"notNull": true,
"autoincrement": false
},
"type": {
"name": "type",
"type": "text",
"primaryKey": false,
"notNull": true,
"autoincrement": false
},
"started_at": {
"name": "started_at",
"type": "integer",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"completed_at": {
"name": "completed_at",
"type": "integer",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"result": {
"name": "result",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"error": {
"name": "error",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"created_at": {
"name": "created_at",
"type": "integer",
"primaryKey": false,
"notNull": true,
"autoincrement": false
}
},
"indexes": {},
"foreignKeys": {
"sync_jobs_user_id_users_id_fk": {
"name": "sync_jobs_user_id_users_id_fk",
"tableFrom": "sync_jobs",
"tableTo": "users",
"columnsFrom": [
"user_id"
],
"columnsTo": [
"id"
],
"onDelete": "no action",
"onUpdate": "no action"
}
},
"compositePrimaryKeys": {},
"uniqueConstraints": {},
"checkConstraints": {}
},
"users": {
"name": "users",
"columns": {
"id": {
"name": "id",
"type": "integer",
"primaryKey": true,
"notNull": true,
"autoincrement": true
},
"email": {
"name": "email",
"type": "text",
"primaryKey": false,
"notNull": true,
"autoincrement": false
},
"password_hash": {
"name": "password_hash",
"type": "text",
"primaryKey": false,
"notNull": true,
"autoincrement": false
},
"callsign": {
"name": "callsign",
"type": "text",
"primaryKey": false,
"notNull": true,
"autoincrement": false
},
"lotw_username": {
"name": "lotw_username",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"lotw_password": {
"name": "lotw_password",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"dcl_api_key": {
"name": "dcl_api_key",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"role": {
"name": "role",
"type": "text",
"primaryKey": false,
"notNull": true,
"autoincrement": false,
"default": "'user'"
},
"is_admin": {
"name": "is_admin",
"type": "integer",
"primaryKey": false,
"notNull": true,
"autoincrement": false,
"default": false
},
"created_at": {
"name": "created_at",
"type": "integer",
"primaryKey": false,
"notNull": true,
"autoincrement": false
},
"updated_at": {
"name": "updated_at",
"type": "integer",
"primaryKey": false,
"notNull": true,
"autoincrement": false
}
},
"indexes": {
"users_email_unique": {
"name": "users_email_unique",
"columns": [
"email"
],
"isUnique": true
}
},
"foreignKeys": {},
"compositePrimaryKeys": {},
"uniqueConstraints": {},
"checkConstraints": {}
}
},
"views": {},
"enums": {},
"_meta": {
"schemas": {},
"tables": {},
"columns": {}
},
"internal": {
"indexes": {}
}
}

View File

@@ -0,0 +1,748 @@
{
"version": "6",
"dialect": "sqlite",
"id": "071c98fb-6721-4da7-98cb-c16cb6aaf0c1",
"prevId": "542bddc5-2e08-49af-91b5-013a6c9584df",
"tables": {
"admin_actions": {
"name": "admin_actions",
"columns": {
"id": {
"name": "id",
"type": "integer",
"primaryKey": true,
"notNull": true,
"autoincrement": true
},
"admin_id": {
"name": "admin_id",
"type": "integer",
"primaryKey": false,
"notNull": true,
"autoincrement": false
},
"action_type": {
"name": "action_type",
"type": "text",
"primaryKey": false,
"notNull": true,
"autoincrement": false
},
"target_user_id": {
"name": "target_user_id",
"type": "integer",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"details": {
"name": "details",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"created_at": {
"name": "created_at",
"type": "integer",
"primaryKey": false,
"notNull": true,
"autoincrement": false
}
},
"indexes": {},
"foreignKeys": {
"admin_actions_admin_id_users_id_fk": {
"name": "admin_actions_admin_id_users_id_fk",
"tableFrom": "admin_actions",
"tableTo": "users",
"columnsFrom": [
"admin_id"
],
"columnsTo": [
"id"
],
"onDelete": "no action",
"onUpdate": "no action"
},
"admin_actions_target_user_id_users_id_fk": {
"name": "admin_actions_target_user_id_users_id_fk",
"tableFrom": "admin_actions",
"tableTo": "users",
"columnsFrom": [
"target_user_id"
],
"columnsTo": [
"id"
],
"onDelete": "no action",
"onUpdate": "no action"
}
},
"compositePrimaryKeys": {},
"uniqueConstraints": {},
"checkConstraints": {}
},
"award_progress": {
"name": "award_progress",
"columns": {
"id": {
"name": "id",
"type": "integer",
"primaryKey": true,
"notNull": true,
"autoincrement": true
},
"user_id": {
"name": "user_id",
"type": "integer",
"primaryKey": false,
"notNull": true,
"autoincrement": false
},
"award_id": {
"name": "award_id",
"type": "text",
"primaryKey": false,
"notNull": true,
"autoincrement": false
},
"worked_count": {
"name": "worked_count",
"type": "integer",
"primaryKey": false,
"notNull": true,
"autoincrement": false,
"default": 0
},
"confirmed_count": {
"name": "confirmed_count",
"type": "integer",
"primaryKey": false,
"notNull": true,
"autoincrement": false,
"default": 0
},
"total_required": {
"name": "total_required",
"type": "integer",
"primaryKey": false,
"notNull": true,
"autoincrement": false
},
"worked_entities": {
"name": "worked_entities",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"confirmed_entities": {
"name": "confirmed_entities",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"last_calculated_at": {
"name": "last_calculated_at",
"type": "integer",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"last_qso_sync_at": {
"name": "last_qso_sync_at",
"type": "integer",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"updated_at": {
"name": "updated_at",
"type": "integer",
"primaryKey": false,
"notNull": true,
"autoincrement": false
}
},
"indexes": {},
"foreignKeys": {
"award_progress_user_id_users_id_fk": {
"name": "award_progress_user_id_users_id_fk",
"tableFrom": "award_progress",
"tableTo": "users",
"columnsFrom": [
"user_id"
],
"columnsTo": [
"id"
],
"onDelete": "no action",
"onUpdate": "no action"
},
"award_progress_award_id_awards_id_fk": {
"name": "award_progress_award_id_awards_id_fk",
"tableFrom": "award_progress",
"tableTo": "awards",
"columnsFrom": [
"award_id"
],
"columnsTo": [
"id"
],
"onDelete": "no action",
"onUpdate": "no action"
}
},
"compositePrimaryKeys": {},
"uniqueConstraints": {},
"checkConstraints": {}
},
"awards": {
"name": "awards",
"columns": {
"id": {
"name": "id",
"type": "text",
"primaryKey": true,
"notNull": true,
"autoincrement": false
},
"name": {
"name": "name",
"type": "text",
"primaryKey": false,
"notNull": true,
"autoincrement": false
},
"description": {
"name": "description",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"definition": {
"name": "definition",
"type": "text",
"primaryKey": false,
"notNull": true,
"autoincrement": false
},
"is_active": {
"name": "is_active",
"type": "integer",
"primaryKey": false,
"notNull": true,
"autoincrement": false,
"default": true
},
"created_at": {
"name": "created_at",
"type": "integer",
"primaryKey": false,
"notNull": true,
"autoincrement": false
}
},
"indexes": {},
"foreignKeys": {},
"compositePrimaryKeys": {},
"uniqueConstraints": {},
"checkConstraints": {}
},
"qso_changes": {
"name": "qso_changes",
"columns": {
"id": {
"name": "id",
"type": "integer",
"primaryKey": true,
"notNull": true,
"autoincrement": true
},
"job_id": {
"name": "job_id",
"type": "integer",
"primaryKey": false,
"notNull": true,
"autoincrement": false
},
"qso_id": {
"name": "qso_id",
"type": "integer",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"change_type": {
"name": "change_type",
"type": "text",
"primaryKey": false,
"notNull": true,
"autoincrement": false
},
"before_data": {
"name": "before_data",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"after_data": {
"name": "after_data",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"created_at": {
"name": "created_at",
"type": "integer",
"primaryKey": false,
"notNull": true,
"autoincrement": false
}
},
"indexes": {},
"foreignKeys": {
"qso_changes_job_id_sync_jobs_id_fk": {
"name": "qso_changes_job_id_sync_jobs_id_fk",
"tableFrom": "qso_changes",
"tableTo": "sync_jobs",
"columnsFrom": [
"job_id"
],
"columnsTo": [
"id"
],
"onDelete": "no action",
"onUpdate": "no action"
},
"qso_changes_qso_id_qsos_id_fk": {
"name": "qso_changes_qso_id_qsos_id_fk",
"tableFrom": "qso_changes",
"tableTo": "qsos",
"columnsFrom": [
"qso_id"
],
"columnsTo": [
"id"
],
"onDelete": "no action",
"onUpdate": "no action"
}
},
"compositePrimaryKeys": {},
"uniqueConstraints": {},
"checkConstraints": {}
},
"qsos": {
"name": "qsos",
"columns": {
"id": {
"name": "id",
"type": "integer",
"primaryKey": true,
"notNull": true,
"autoincrement": true
},
"user_id": {
"name": "user_id",
"type": "integer",
"primaryKey": false,
"notNull": true,
"autoincrement": false
},
"callsign": {
"name": "callsign",
"type": "text",
"primaryKey": false,
"notNull": true,
"autoincrement": false
},
"qso_date": {
"name": "qso_date",
"type": "text",
"primaryKey": false,
"notNull": true,
"autoincrement": false
},
"time_on": {
"name": "time_on",
"type": "text",
"primaryKey": false,
"notNull": true,
"autoincrement": false
},
"band": {
"name": "band",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"mode": {
"name": "mode",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"freq": {
"name": "freq",
"type": "integer",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"freq_rx": {
"name": "freq_rx",
"type": "integer",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"entity": {
"name": "entity",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"entity_id": {
"name": "entity_id",
"type": "integer",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"grid": {
"name": "grid",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"grid_source": {
"name": "grid_source",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"continent": {
"name": "continent",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"cq_zone": {
"name": "cq_zone",
"type": "integer",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"itu_zone": {
"name": "itu_zone",
"type": "integer",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"state": {
"name": "state",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"county": {
"name": "county",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"sat_name": {
"name": "sat_name",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"sat_mode": {
"name": "sat_mode",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"my_darc_dok": {
"name": "my_darc_dok",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"darc_dok": {
"name": "darc_dok",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"lotw_qsl_rdate": {
"name": "lotw_qsl_rdate",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"lotw_qsl_rstatus": {
"name": "lotw_qsl_rstatus",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"dcl_qsl_rdate": {
"name": "dcl_qsl_rdate",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"dcl_qsl_rstatus": {
"name": "dcl_qsl_rstatus",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"lotw_synced_at": {
"name": "lotw_synced_at",
"type": "integer",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"created_at": {
"name": "created_at",
"type": "integer",
"primaryKey": false,
"notNull": true,
"autoincrement": false
}
},
"indexes": {},
"foreignKeys": {
"qsos_user_id_users_id_fk": {
"name": "qsos_user_id_users_id_fk",
"tableFrom": "qsos",
"tableTo": "users",
"columnsFrom": [
"user_id"
],
"columnsTo": [
"id"
],
"onDelete": "no action",
"onUpdate": "no action"
}
},
"compositePrimaryKeys": {},
"uniqueConstraints": {},
"checkConstraints": {}
},
"sync_jobs": {
"name": "sync_jobs",
"columns": {
"id": {
"name": "id",
"type": "integer",
"primaryKey": true,
"notNull": true,
"autoincrement": true
},
"user_id": {
"name": "user_id",
"type": "integer",
"primaryKey": false,
"notNull": true,
"autoincrement": false
},
"status": {
"name": "status",
"type": "text",
"primaryKey": false,
"notNull": true,
"autoincrement": false
},
"type": {
"name": "type",
"type": "text",
"primaryKey": false,
"notNull": true,
"autoincrement": false
},
"started_at": {
"name": "started_at",
"type": "integer",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"completed_at": {
"name": "completed_at",
"type": "integer",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"result": {
"name": "result",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"error": {
"name": "error",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"created_at": {
"name": "created_at",
"type": "integer",
"primaryKey": false,
"notNull": true,
"autoincrement": false
}
},
"indexes": {},
"foreignKeys": {
"sync_jobs_user_id_users_id_fk": {
"name": "sync_jobs_user_id_users_id_fk",
"tableFrom": "sync_jobs",
"tableTo": "users",
"columnsFrom": [
"user_id"
],
"columnsTo": [
"id"
],
"onDelete": "no action",
"onUpdate": "no action"
}
},
"compositePrimaryKeys": {},
"uniqueConstraints": {},
"checkConstraints": {}
},
"users": {
"name": "users",
"columns": {
"id": {
"name": "id",
"type": "integer",
"primaryKey": true,
"notNull": true,
"autoincrement": true
},
"email": {
"name": "email",
"type": "text",
"primaryKey": false,
"notNull": true,
"autoincrement": false
},
"password_hash": {
"name": "password_hash",
"type": "text",
"primaryKey": false,
"notNull": true,
"autoincrement": false
},
"callsign": {
"name": "callsign",
"type": "text",
"primaryKey": false,
"notNull": true,
"autoincrement": false
},
"lotw_username": {
"name": "lotw_username",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"lotw_password": {
"name": "lotw_password",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"dcl_api_key": {
"name": "dcl_api_key",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"is_admin": {
"name": "is_admin",
"type": "integer",
"primaryKey": false,
"notNull": true,
"autoincrement": false,
"default": false
},
"created_at": {
"name": "created_at",
"type": "integer",
"primaryKey": false,
"notNull": true,
"autoincrement": false
},
"updated_at": {
"name": "updated_at",
"type": "integer",
"primaryKey": false,
"notNull": true,
"autoincrement": false
}
},
"indexes": {
"users_email_unique": {
"name": "users_email_unique",
"columns": [
"email"
],
"isUnique": true
}
},
"foreignKeys": {},
"compositePrimaryKeys": {},
"uniqueConstraints": {},
"checkConstraints": {}
}
},
"views": {},
"enums": {},
"_meta": {
"schemas": {},
"tables": {},
"columns": {}
},
"internal": {
"indexes": {}
}
}

View File

@@ -15,6 +15,20 @@
"when": 1768641501799,
"tag": "0001_free_hiroim",
"breakpoints": true
},
{
"idx": 2,
"version": "6",
"when": 1768988121232,
"tag": "0002_nervous_layla_miller",
"breakpoints": true
},
{
"idx": 3,
"version": "6",
"when": 1768989260562,
"tag": "0003_tired_warpath",
"breakpoints": true
}
]
}

View File

@@ -11,10 +11,11 @@
"preview": "cd src/frontend && bun run preview",
"db:generate": "drizzle-kit generate",
"db:push": "drizzle-kit push",
"db:migrate": "drizzle-kit migrate"
"db:migrate": "drizzle-kit migrate",
"db:indexes": "bun src/backend/migrations/add-performance-indexes.js",
"deploy": "bun install && bun run db:push && bun run db:indexes && bun run build"
},
"devDependencies": {
"@libsql/client": "^0.17.0",
"@types/bun": "latest",
"drizzle-kit": "^0.31.8"
},

View File

@@ -1,15 +1,30 @@
import Database from 'bun:sqlite';
import { drizzle } from 'drizzle-orm/bun-sqlite';
import * as schema from './db/schema/index.js';
import { join } from 'path';
import { join, dirname } from 'path';
import { existsSync, mkdirSync, appendFile } from 'fs';
import { fileURLToPath } from 'url';
// ===================================================================
// Configuration
// ===================================================================
// ES module equivalent of __dirname
const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);
const isDevelopment = process.env.NODE_ENV !== 'production';
export const JWT_SECRET = process.env.JWT_SECRET || 'your-secret-key-change-in-production';
// SECURITY: Require JWT_SECRET in production - no fallback for security
// This prevents JWT token forgery if environment variable is not set
if (!process.env.JWT_SECRET && !isDevelopment) {
throw new Error(
'FATAL: JWT_SECRET environment variable must be set in production. ' +
'Generate one with: openssl rand -base64 32'
);
}
export const JWT_SECRET = process.env.JWT_SECRET || 'dev-secret-key-change-in-production';
export const LOG_LEVEL = process.env.LOG_LEVEL || (isDevelopment ? 'debug' : 'info');
// ===================================================================
@@ -19,16 +34,46 @@ export const LOG_LEVEL = process.env.LOG_LEVEL || (isDevelopment ? 'debug' : 'in
const logLevels = { debug: 0, info: 1, warn: 2, error: 3 };
const currentLogLevel = logLevels[LOG_LEVEL] ?? 1;
// Log file paths
const logsDir = join(__dirname, '../../logs');
const backendLogFile = join(logsDir, 'backend.log');
// Ensure log directory exists
if (!existsSync(logsDir)) {
mkdirSync(logsDir, { recursive: true });
}
function formatLogMessage(level, message, data) {
const timestamp = new Date().toISOString();
let logMessage = `[${timestamp}] ${level.toUpperCase()}: ${message}`;
if (data && Object.keys(data).length > 0) {
logMessage += ' ' + JSON.stringify(data, null, 2);
}
return logMessage + '\n';
}
function log(level, message, data) {
if (logLevels[level] < currentLogLevel) return;
const timestamp = new Date().toISOString();
const logMessage = `[${timestamp}] ${level.toUpperCase()}: ${message}`;
const logMessage = formatLogMessage(level, message, data);
if (data && Object.keys(data).length > 0) {
console.log(logMessage, JSON.stringify(data, null, 2));
} else {
console.log(logMessage);
// Append to file asynchronously (fire and forget for performance)
appendFile(backendLogFile, logMessage, (err) => {
if (err) console.error('Failed to write to log file:', err);
});
// Also log to console in development
if (isDevelopment) {
const timestamp = new Date().toISOString();
const consoleMessage = `[${timestamp}] ${level.toUpperCase()}: ${message}`;
if (data && Object.keys(data).length > 0) {
console.log(consoleMessage, data);
} else {
console.log(consoleMessage);
}
}
}
@@ -39,6 +84,27 @@ export const logger = {
error: (message, data) => log('error', message, data),
};
// Frontend logger - writes to separate log file
const frontendLogFile = join(logsDir, 'frontend.log');
export function logToFrontend(level, message, data = null, context = {}) {
if (logLevels[level] < currentLogLevel) return;
const timestamp = new Date().toISOString();
let logMessage = `[${timestamp}] [${context.userAgent || 'unknown'}] [${context.userId || 'anonymous'}] ${level.toUpperCase()}: ${message}`;
if (data && Object.keys(data).length > 0) {
logMessage += ' ' + JSON.stringify(data, null, 2);
}
logMessage += '\n';
// Append to frontend log file
appendFile(frontendLogFile, logMessage, (err) => {
if (err) console.error('Failed to write to frontend log file:', err);
});
}
export default logger;
// ===================================================================
@@ -46,7 +112,6 @@ export default logger;
// ===================================================================
// Get the directory containing this config file, then go to parent for db location
const __dirname = new URL('.', import.meta.url).pathname;
const dbPath = join(__dirname, 'award.db');
const sqlite = new Database(dbPath);
@@ -57,6 +122,8 @@ export const db = drizzle({
schema,
});
export { sqlite };
export async function closeDatabase() {
sqlite.close();
}

View File

@@ -9,6 +9,7 @@ import { sqliteTable, text, integer } from 'drizzle-orm/sqlite-core';
* @property {string|null} lotwUsername
* @property {string|null} lotwPassword
* @property {string|null} dclApiKey
* @property {boolean} isAdmin
* @property {Date} createdAt
* @property {Date} updatedAt
*/
@@ -21,6 +22,7 @@ export const users = sqliteTable('users', {
lotwUsername: text('lotw_username'),
lotwPassword: text('lotw_password'), // Encrypted
dclApiKey: text('dcl_api_key'), // DCL API key for future use
isAdmin: integer('is_admin', { mode: 'boolean' }).notNull().default(false),
createdAt: integer('created_at', { mode: 'timestamp' }).notNull().$defaultFn(() => new Date()),
updatedAt: integer('updated_at', { mode: 'timestamp' }).notNull().$defaultFn(() => new Date()),
});
@@ -181,5 +183,45 @@ export const syncJobs = sqliteTable('sync_jobs', {
createdAt: integer('created_at', { mode: 'timestamp' }).notNull().$defaultFn(() => new Date()),
});
/**
* @typedef {Object} QSOChange
* @property {number} id
* @property {number} jobId
* @property {number|null} qsoId
* @property {string} changeType - 'added' or 'updated'
* @property {string|null} beforeData - JSON snapshot before change (for updates)
* @property {string|null} afterData - JSON snapshot after change
* @property {Date} createdAt
*/
export const qsoChanges = sqliteTable('qso_changes', {
id: integer('id').primaryKey({ autoIncrement: true }),
jobId: integer('job_id').notNull().references(() => syncJobs.id),
qsoId: integer('qso_id').references(() => qsos.id), // null for added QSOs until created
changeType: text('change_type').notNull(), // 'added' or 'updated'
beforeData: text('before_data'), // JSON snapshot before change
afterData: text('after_data'), // JSON snapshot after change
createdAt: integer('created_at', { mode: 'timestamp' }).notNull().$defaultFn(() => new Date()),
});
/**
* @typedef {Object} AdminAction
* @property {number} id
* @property {number} adminId
* @property {string} actionType
* @property {number|null} targetUserId
* @property {string|null} details
* @property {Date} createdAt
*/
export const adminActions = sqliteTable('admin_actions', {
id: integer('id').primaryKey({ autoIncrement: true }),
adminId: integer('admin_id').notNull().references(() => users.id),
actionType: text('action_type').notNull(), // 'impersonate_start', 'impersonate_stop', 'role_change', 'user_delete', etc.
targetUserId: integer('target_user_id').references(() => users.id),
details: text('details'), // JSON with additional context
createdAt: integer('created_at', { mode: 'timestamp' }).notNull().$defaultFn(() => new Date()),
});
// Export all schemas
export const schema = { users, qsos, awards, awardProgress, syncJobs };
export const schema = { users, qsos, awards, awardProgress, syncJobs, qsoChanges, adminActions };

View File

@@ -1,7 +1,11 @@
import { Elysia, t } from 'elysia';
import { cors } from '@elysiajs/cors';
import { jwt } from '@elysiajs/jwt';
import { JWT_SECRET, logger, LOG_LEVEL } from './config.js';
import { resolve, normalize } from 'path';
import { existsSync } from 'fs';
import { JWT_SECRET, logger, LOG_LEVEL, logToFrontend } from './config.js';
import { getPerformanceSummary, resetPerformanceMetrics } from './services/performance.service.js';
import { getCacheStats } from './services/cache.service.js';
import {
registerUser,
authenticateUser,
@@ -9,16 +13,29 @@ import {
updateLoTWCredentials,
updateDCLCredentials,
} from './services/auth.service.js';
import {
getSystemStats,
getUserStats,
impersonateUser,
verifyImpersonation,
stopImpersonation,
getImpersonationStatus,
changeUserRole,
deleteUser,
} from './services/admin.service.js';
import { getAllUsers } from './services/auth.service.js';
import {
getUserQSOs,
getQSOStats,
deleteQSOs,
getQSOById,
} from './services/lotw.service.js';
import {
enqueueJob,
getJobStatus,
getUserActiveJob,
getUserJobs,
cancelJob,
} from './services/job-queue.service.js';
import {
getAllAwards,
@@ -31,12 +48,116 @@ import {
* Serves API routes and static frontend files
*/
// Get allowed origins from environment or allow all in development
// SECURITY: Stricter email validation
// Elysia's built-in email format check is lenient; this provides better validation
// Allows: alphanumeric, dots, hyphens, underscores in local part
// Requires: valid domain with at least one dot
const EMAIL_REGEX = /^[a-zA-Z0-9._-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}$/;
function isValidEmail(email) {
// Basic format check with regex
if (!EMAIL_REGEX.test(email)) {
return false;
}
// Additional checks
const [local, domain] = email.split('@');
// Local part shouldn't start or end with dot
if (local.startsWith('.') || local.endsWith('.')) {
return false;
}
// Local part shouldn't have consecutive dots
if (local.includes('..')) {
return false;
}
// Domain must have at least one dot and valid TLD
if (!domain || !domain.includes('.') || domain.split('.').pop().length < 2) {
return false;
}
// Length checks (RFC 5321 limits)
if (email.length > 254 || local.length > 64) {
return false;
}
return true;
}
// SECURITY: Validate job ID parameter
// Prevents injection and DoS via extremely large numbers
function isValidJobId(jobId) {
const id = parseInt(jobId, 10);
// Check: must be a number, positive, and within reasonable range (1 to 2^31-1)
return !isNaN(id) && id > 0 && id <= 2147483647 && Number.isSafeInteger(id);
}
// SECURITY: Validate QSO ID parameter
function isValidQsoId(qsoId) {
const id = parseInt(qsoId, 10);
return !isNaN(id) && id > 0 && id <= 2147483647 && Number.isSafeInteger(id);
}
// SECURITY: In-memory rate limiter for auth endpoints
// Prevents brute force attacks on login/register
const rateLimitStore = new Map(); // IP -> { count, resetTime }
function checkRateLimit(ip, limit = 5, windowMs = 60000) {
const now = Date.now();
const record = rateLimitStore.get(ip);
// Clean up expired records
if (record && now > record.resetTime) {
rateLimitStore.delete(ip);
return { allowed: true, remaining: limit - 1 };
}
// Check if limit exceeded
if (record) {
if (record.count >= limit) {
const retryAfter = Math.ceil((record.resetTime - now) / 1000);
return { allowed: false, retryAfter };
}
record.count++;
return { allowed: true, remaining: limit - record.count };
}
// Create new record
rateLimitStore.set(ip, {
count: 1,
resetTime: now + windowMs,
});
return { allowed: true, remaining: limit - 1 };
}
// SECURITY: Validate that a path doesn't escape the allowed directory
// Prevents path traversal attacks like ../../../etc/passwd
const BUILD_DIR = resolve('src/frontend/build');
function isPathSafe(requestedPath) {
try {
// Normalize the path to resolve any ../ sequences
const normalized = normalize(requestedPath);
const fullPath = resolve(BUILD_DIR, normalized);
// Check if the resolved path is within the build directory
return fullPath.startsWith(BUILD_DIR);
} catch {
return false;
}
}
// Get allowed origins from environment
// SECURITY: Never allow all origins, even in development
// Always require explicit ALLOWED_ORIGINS or VITE_APP_URL
const ALLOWED_ORIGINS = process.env.ALLOWED_ORIGINS
? process.env.ALLOWED_ORIGINS.split(',')
: process.env.NODE_ENV === 'production'
? [process.env.VITE_APP_URL || 'https://awards.dj7nt.de']
: true; // Allow all in development
: [process.env.VITE_APP_URL || 'http://localhost:5173']; // Default dev origin
const app = new Elysia()
// Enable CORS for frontend communication
@@ -66,25 +187,133 @@ const app = new Elysia()
return { user: null };
}
// Check if this is an impersonation token
const isImpersonation = !!payload.impersonatedBy;
return {
user: {
id: payload.userId,
email: payload.email,
callsign: payload.callsign,
isAdmin: payload.isAdmin,
impersonatedBy: payload.impersonatedBy, // Admin ID if impersonating
},
isImpersonation,
};
} catch (error) {
return { user: null };
}
})
// Security headers middleware
.onAfterHandle(({ set, request }) => {
const url = new URL(request.url);
const isProduction = process.env.NODE_ENV === 'production';
// Prevent clickjacking
set.headers['X-Frame-Options'] = 'DENY';
// Prevent MIME type sniffing
set.headers['X-Content-Type-Options'] = 'nosniff';
// Enable XSS filter
set.headers['X-XSS-Protection'] = '1; mode=block';
// Referrer policy
set.headers['Referrer-Policy'] = 'strict-origin-when-cross-origin';
// Content Security Policy (basic protection)
// In development, allow any origin; in production, restrict
const appUrl = process.env.VITE_APP_URL || 'https://awards.dj7nt.de';
const connectSrc = isProduction ? `'self' ${appUrl}` : "'self' *";
set.headers['Content-Security-Policy'] = [
"default-src 'self'",
`script-src 'self' 'unsafe-inline'`, // unsafe-inline needed for SvelteKit inline scripts
`style-src 'self' 'unsafe-inline'`, // unsafe-inline needed for SvelteKit inline styles
`img-src 'self' data: https:`,
`connect-src ${connectSrc}`,
"font-src 'self'",
"object-src 'none'",
"base-uri 'self'",
"form-action 'self'",
].join('; ');
// Strict Transport Security (HTTPS only) - only in production
if (isProduction) {
set.headers['Strict-Transport-Security'] = 'max-age=31536000; includeSubDomains';
}
})
// Request logging middleware
.onRequest(({ request, params }) => {
const url = new URL(request.url);
const method = request.method;
const path = url.pathname;
const query = url.search;
// Skip logging for health checks in development to reduce noise
if (path === '/api/health' && process.env.NODE_ENV === 'development') {
return;
}
logger.info('Incoming request', {
method,
path,
query: query || undefined,
ip: request.headers.get('x-forwarded-for') || 'unknown',
userAgent: request.headers.get('user-agent')?.substring(0, 100),
});
})
.onAfterHandle(({ request, set }) => {
const url = new URL(request.url);
const path = url.pathname;
// Skip logging for health checks in development
if (path === '/api/health' && process.env.NODE_ENV === 'development') {
return;
}
// Log error responses
if (set.status >= 400) {
logger.warn('Request failed', {
path,
status: set.status,
});
}
})
/**
* POST /api/auth/register
* Register a new user
*/
.post(
'/api/auth/register',
async ({ body, jwt, set }) => {
async ({ body, jwt, set, request }) => {
// SECURITY: Rate limiting to prevent abuse
const ip = request.headers.get('x-forwarded-for') || 'unknown';
const rateLimit = checkRateLimit(ip, 5, 60000); // 5 requests per minute
if (!rateLimit.allowed) {
set.status = 429;
set.headers['Retry-After'] = rateLimit.retryAfter.toString();
return {
success: false,
error: 'Too many registration attempts. Please try again later.',
};
}
set.headers['X-RateLimit-Remaining'] = rateLimit.remaining.toString();
// SECURITY: Additional server-side email validation
if (!isValidEmail(body.email)) {
set.status = 400;
return {
success: false,
error: 'Invalid email address format',
};
}
// Create user
const user = await registerUser(body);
@@ -96,11 +325,13 @@ const app = new Elysia()
};
}
// Generate JWT token
// Generate JWT token with expiration (24 hours)
const exp = Math.floor(Date.now() / 1000) + (24 * 60 * 60); // 24 hours from now
const token = await jwt.sign({
userId: user.id,
email: user.email,
callsign: user.callsign,
exp,
});
set.status = 201;
@@ -135,7 +366,22 @@ const app = new Elysia()
*/
.post(
'/api/auth/login',
async ({ body, jwt, set }) => {
async ({ body, jwt, set, request }) => {
// SECURITY: Rate limiting to prevent brute force attacks
const ip = request.headers.get('x-forwarded-for') || 'unknown';
const rateLimit = checkRateLimit(ip, 10, 60000); // 10 requests per minute (more lenient than register)
if (!rateLimit.allowed) {
set.status = 429;
set.headers['Retry-After'] = rateLimit.retryAfter.toString();
return {
success: false,
error: 'Too many login attempts. Please try again later.',
};
}
set.headers['X-RateLimit-Remaining'] = rateLimit.remaining.toString();
// Authenticate user
const user = await authenticateUser(body.email, body.password);
@@ -147,11 +393,14 @@ const app = new Elysia()
};
}
// Generate JWT token
// Generate JWT token with expiration (24 hours)
const exp = Math.floor(Date.now() / 1000) + (24 * 60 * 60); // 24 hours from now
const token = await jwt.sign({
userId: user.id,
email: user.email,
callsign: user.callsign,
isAdmin: user.isAdmin,
exp,
});
return {
@@ -349,11 +598,12 @@ const app = new Elysia()
}
try {
const jobId = parseInt(params.jobId);
if (isNaN(jobId)) {
// SECURITY: Validate job ID to prevent injection and DoS
if (!isValidJobId(params.jobId)) {
set.status = 400;
return { success: false, error: 'Invalid job ID' };
}
const jobId = parseInt(params.jobId, 10);
const job = await getJobStatus(jobId);
if (!job) {
@@ -446,10 +696,48 @@ const app = new Elysia()
}
})
/**
* DELETE /api/jobs/:jobId
* Cancel and rollback a sync job (requires authentication)
* Only allows cancelling failed, completed, or stale running jobs (>1 hour)
*/
.delete('/api/jobs/:jobId', async ({ user, params, set }) => {
if (!user) {
set.status = 401;
return { success: false, error: 'Unauthorized' };
}
try {
// SECURITY: Validate job ID to prevent injection and DoS
if (!isValidJobId(params.jobId)) {
set.status = 400;
return { success: false, error: 'Invalid job ID' };
}
const jobId = parseInt(params.jobId, 10);
const result = await cancelJob(jobId, user.id);
if (!result.success) {
set.status = 400;
return result;
}
return result;
} catch (error) {
logger.error('Error cancelling job', { error: error.message, userId: user?.id, jobId: params.jobId });
set.status = 500;
return {
success: false,
error: 'Failed to cancel job',
};
}
})
/**
* GET /api/qsos
* Get user's QSOs (requires authentication)
* Supports pagination: ?page=1&limit=100
* Supports filters: band, mode, confirmed, confirmationType (all, lotw, dcl, both, none), search
*/
.get('/api/qsos', async ({ user, query, set }) => {
if (!user) {
@@ -462,6 +750,10 @@ const app = new Elysia()
if (query.band) filters.band = query.band;
if (query.mode) filters.mode = query.mode;
if (query.confirmed) filters.confirmed = query.confirmed === 'true';
if (query.confirmationType && query.confirmationType !== 'all') {
filters.confirmationType = query.confirmationType;
}
if (query.search) filters.search = query.search;
// Pagination options
const options = {
@@ -484,6 +776,45 @@ const app = new Elysia()
}
})
/**
* GET /api/qsos/:id
* Get a single QSO by ID (requires authentication)
*/
.get('/api/qsos/:id', async ({ user, params, set }) => {
if (!user) {
set.status = 401;
return { success: false, error: 'Unauthorized' };
}
try {
// SECURITY: Validate QSO ID to prevent injection and DoS
if (!isValidQsoId(params.id)) {
set.status = 400;
return { success: false, error: 'Invalid QSO ID' };
}
const qsoId = parseInt(params.id, 10);
const qso = await getQSOById(user.id, qsoId);
if (!qso) {
set.status = 404;
return { success: false, error: 'QSO not found' };
}
return {
success: true,
qso,
};
} catch (error) {
logger.error('Failed to fetch QSO by ID', { error: error.message, userId: user?.id, qsoId: params.id });
set.status = 500;
return {
success: false,
error: 'Failed to fetch QSO',
};
}
})
/**
* GET /api/qsos/stats
* Get QSO statistics (requires authentication)
@@ -591,6 +922,44 @@ const app = new Elysia()
};
})
/**
* GET /api/awards/batch/progress
* Get progress for ALL awards in a single request (fixes N+1 query problem)
*/
.get('/api/awards/batch/progress', async ({ user, set }) => {
if (!user) {
set.status = 401;
return { success: false, error: 'Unauthorized' };
}
try {
const awards = await getAllAwards();
// Calculate all awards in parallel
const progressMap = await Promise.all(
awards.map(async (award) => {
const progress = await getAwardProgressDetails(user.id, award.id);
return {
awardId: award.id,
...progress,
};
})
);
return {
success: true,
awards: progressMap,
};
} catch (error) {
logger.error('Failed to fetch batch award progress', { error: error.message, userId: user.id });
set.status = 500;
return {
success: false,
error: 'Failed to fetch award progress',
};
}
})
/**
* GET /api/awards/:awardId/entities
* Get detailed entity breakdown for an award (requires authentication)
@@ -622,8 +991,405 @@ const app = new Elysia()
.get('/api/health', () => ({
status: 'ok',
timestamp: new Date().toISOString(),
uptime: process.uptime(),
performance: getPerformanceSummary(),
cache: getCacheStats()
}))
/**
* POST /api/logs
* Receive frontend logs and write them to frontend.log file
*/
.post(
'/api/logs',
async ({ body, user, headers }) => {
// Extract context from headers (Elysia provides headers as a plain object)
const userAgent = headers['user-agent'] || 'unknown';
const context = {
userId: user?.id,
userAgent,
timestamp: new Date().toISOString(),
};
// Log each entry
const entries = Array.isArray(body) ? body : [body];
for (const entry of entries) {
logToFrontend(entry.level || 'info', entry.message || 'No message', entry.data || null, context);
}
return { success: true };
},
{
body: t.Union([
t.Object({
level: t.Optional(t.Union([t.Literal('debug'), t.Literal('info'), t.Literal('warn'), t.Literal('error')])),
message: t.String(),
data: t.Optional(t.Any()),
}),
t.Array(
t.Object({
level: t.Optional(t.Union([t.Literal('debug'), t.Literal('info'), t.Literal('warn'), t.Literal('error')])),
message: t.String(),
data: t.Optional(t.Any()),
})
),
]),
}
)
/**
* ================================================================
* ADMIN ROUTES
* ================================================================
* All admin routes require authentication and admin role
*/
/**
* GET /api/admin/stats
* Get system-wide statistics (admin only)
*/
.get('/api/admin/stats', async ({ user, set }) => {
if (!user || !user.isAdmin) {
set.status = !user ? 401 : 403;
return { success: false, error: !user ? 'Unauthorized' : 'Admin access required' };
}
try {
const stats = await getSystemStats();
return {
success: true,
stats,
};
} catch (error) {
logger.error('Error fetching system stats', { error: error.message, userId: user.id });
set.status = 500;
return {
success: false,
error: 'Failed to fetch system statistics',
};
}
})
/**
* GET /api/admin/users
* Get all users with statistics (admin only)
*/
.get('/api/admin/users', async ({ user, set }) => {
if (!user || !user.isAdmin) {
set.status = !user ? 401 : 403;
return { success: false, error: !user ? 'Unauthorized' : 'Admin access required' };
}
try {
const users = await getUserStats();
return {
success: true,
users,
};
} catch (error) {
logger.error('Error fetching users', { error: error.message, userId: user.id });
set.status = 500;
return {
success: false,
error: 'Failed to fetch users',
};
}
})
/**
* GET /api/admin/users/:userId
* Get detailed information about a specific user (admin only)
*/
.get('/api/admin/users/:userId', async ({ user, params, set }) => {
if (!user || !user.isAdmin) {
set.status = !user ? 401 : 403;
return { success: false, error: !user ? 'Unauthorized' : 'Admin access required' };
}
const userId = parseInt(params.userId, 10);
if (isNaN(userId) || userId <= 0) {
set.status = 400;
return { success: false, error: 'Invalid user ID' };
}
try {
const targetUser = await getAllUsers();
const userDetails = targetUser.find(u => u.id === userId);
if (!userDetails) {
set.status = 404;
return { success: false, error: 'User not found' };
}
return {
success: true,
user: userDetails,
};
} catch (error) {
logger.error('Error fetching user details', { error: error.message, userId: user.id });
set.status = 500;
return {
success: false,
error: 'Failed to fetch user details',
};
}
})
/**
* POST /api/admin/users/:userId/role
* Update user admin status (admin only)
*/
.post('/api/admin/users/:userId/role', async ({ user, params, body, set }) => {
if (!user || !user.isAdmin) {
set.status = !user ? 401 : 403;
return { success: false, error: !user ? 'Unauthorized' : 'Admin access required' };
}
const targetUserId = parseInt(params.userId, 10);
if (isNaN(targetUserId) || targetUserId <= 0) {
set.status = 400;
return { success: false, error: 'Invalid user ID' };
}
const { isAdmin } = body;
if (typeof isAdmin !== 'boolean') {
set.status = 400;
return { success: false, error: 'isAdmin (boolean) is required' };
}
try {
await changeUserRole(user.id, targetUserId, isAdmin);
return {
success: true,
message: 'User admin status updated successfully',
};
} catch (error) {
logger.error('Error updating user admin status', { error: error.message, userId: user.id });
set.status = 400;
return {
success: false,
error: error.message,
};
}
})
/**
* DELETE /api/admin/users/:userId
* Delete a user (admin only)
*/
.delete('/api/admin/users/:userId', async ({ user, params, set }) => {
if (!user || !user.isAdmin) {
set.status = !user ? 401 : 403;
return { success: false, error: !user ? 'Unauthorized' : 'Admin access required' };
}
const targetUserId = parseInt(params.userId, 10);
if (isNaN(targetUserId) || targetUserId <= 0) {
set.status = 400;
return { success: false, error: 'Invalid user ID' };
}
try {
await deleteUser(user.id, targetUserId);
return {
success: true,
message: 'User deleted successfully',
};
} catch (error) {
logger.error('Error deleting user', { error: error.message, userId: user.id });
set.status = 400;
return {
success: false,
error: error.message,
};
}
})
/**
* POST /api/admin/impersonate/:userId
* Start impersonating a user (admin only)
*/
.post('/api/admin/impersonate/:userId', async ({ user, params, jwt, set }) => {
if (!user || !user.isAdmin) {
set.status = !user ? 401 : 403;
return { success: false, error: !user ? 'Unauthorized' : 'Admin access required' };
}
const targetUserId = parseInt(params.userId, 10);
if (isNaN(targetUserId) || targetUserId <= 0) {
set.status = 400;
return { success: false, error: 'Invalid user ID' };
}
try {
const targetUser = await impersonateUser(user.id, targetUserId);
// Generate impersonation token with shorter expiration (1 hour)
const exp = Math.floor(Date.now() / 1000) + (60 * 60); // 1 hour from now
const token = await jwt.sign({
userId: targetUser.id,
email: targetUser.email,
callsign: targetUser.callsign,
isAdmin: targetUser.isAdmin,
impersonatedBy: user.id, // Admin ID who started impersonation
exp,
});
return {
success: true,
token,
impersonating: {
userId: targetUser.id,
email: targetUser.email,
callsign: targetUser.callsign,
},
message: `Impersonating ${targetUser.email}`,
};
} catch (error) {
logger.error('Error starting impersonation', { error: error.message, userId: user.id });
set.status = 400;
return {
success: false,
error: error.message,
};
}
})
/**
* POST /api/admin/impersonate/stop
* Stop impersonating and return to admin account (admin only)
*/
.post('/api/admin/impersonate/stop', async ({ user, jwt, body, set }) => {
if (!user || !user.impersonatedBy) {
set.status = 400;
return {
success: false,
error: 'Not currently impersonating a user',
};
}
try {
// Log impersonation stop
await stopImpersonation(user.impersonatedBy, user.id);
// Get admin user details to generate new token
const adminUsers = await getAllUsers();
const adminUser = adminUsers.find(u => u.id === user.impersonatedBy);
if (!adminUser) {
set.status = 500;
return {
success: false,
error: 'Admin account not found',
};
}
// Generate new admin token (24 hours)
const exp = Math.floor(Date.now() / 1000) + (24 * 60 * 60);
const token = await jwt.sign({
userId: adminUser.id,
email: adminUser.email,
callsign: adminUser.callsign,
isAdmin: adminUser.isAdmin,
exp,
});
return {
success: true,
token,
user: adminUser,
message: 'Impersonation stopped. Returned to admin account.',
};
} catch (error) {
logger.error('Error stopping impersonation', { error: error.message });
set.status = 500;
return {
success: false,
error: 'Failed to stop impersonation',
};
}
})
/**
* GET /api/admin/impersonation/status
* Get current impersonation status
*/
.get('/api/admin/impersonation/status', async ({ user }) => {
if (!user) {
return {
success: true,
impersonating: false,
};
}
const isImpersonating = !!user.impersonatedBy;
return {
success: true,
impersonating: isImpersonating,
impersonatedBy: user.impersonatedBy,
};
})
/**
* GET /api/admin/actions
* Get admin actions log (admin only)
*/
.get('/api/admin/actions', async ({ user, set, query }) => {
if (!user || !user.isAdmin) {
set.status = !user ? 401 : 403;
return { success: false, error: !user ? 'Unauthorized' : 'Admin access required' };
}
const limit = parseInt(query.limit || '50', 10);
const offset = parseInt(query.offset || '0', 10);
try {
const actions = await getAdminActions(null, { limit, offset });
return {
success: true,
actions,
};
} catch (error) {
logger.error('Error fetching admin actions', { error: error.message, userId: user.id });
set.status = 500;
return {
success: false,
error: 'Failed to fetch admin actions',
};
}
})
/**
* GET /api/admin/actions/my
* Get current admin's action log (admin only)
*/
.get('/api/admin/actions/my', async ({ user, set, query }) => {
if (!user || !user.isAdmin) {
set.status = !user ? 401 : 403;
return { success: false, error: !user ? 'Unauthorized' : 'Admin access required' };
}
const limit = parseInt(query.limit || '50', 10);
const offset = parseInt(query.offset || '0', 10);
try {
const actions = await getAdminActions(user.id, { limit, offset });
return {
success: true,
actions,
};
} catch (error) {
logger.error('Error fetching admin actions', { error: error.message, userId: user.id });
set.status = 500;
return {
success: false,
error: 'Failed to fetch admin actions',
};
}
})
// Serve static files and SPA fallback for all non-API routes
.get('/*', ({ request }) => {
const url = new URL(request.url);
@@ -646,28 +1412,35 @@ const app = new Elysia()
// Extract the actual file path after %sveltekit.assets%/
const assetPath = pathname.replace('/%sveltekit.assets%/', '');
try {
// Try to serve from assets directory first
const assetsPath = `src/frontend/build/_app/immutable/assets/${assetPath}`;
const file = Bun.file(assetsPath);
const exists = file.exists();
// SECURITY: Validate path before serving
const assetsDirPath = `_app/immutable/assets/${assetPath}`;
const rootPath = assetPath;
if (exists) {
return new Response(file);
if (isPathSafe(assetsDirPath)) {
try {
const assetsPath = `src/frontend/build/${assetsDirPath}`;
const file = Bun.file(assetsPath);
const exists = file.exists();
if (exists) {
return new Response(file);
}
} catch (err) {
// Fall through to 404
}
} catch (err) {
// Fall through to 404
}
// If not in assets, try root directory
try {
const rootFile = Bun.file(`src/frontend/build/${assetPath}`);
const rootExists = rootFile.exists();
if (rootExists) {
return new Response(rootFile);
// If not in assets, try root directory (with path validation)
if (isPathSafe(rootPath)) {
try {
const rootFile = Bun.file(`src/frontend/build/${rootPath}`);
const rootExists = rootFile.exists();
if (rootExists) {
return new Response(rootFile);
}
} catch (err) {
// Fall through to 404
}
} catch (err) {
// Fall through to 404
}
return new Response('Not found', { status: 404 });
@@ -677,6 +1450,12 @@ const app = new Elysia()
// Remove leading slash for file path
const filePath = pathname === '/' ? '/index.html' : pathname;
// SECURITY: Validate path is within build directory before serving
if (!isPathSafe(filePath.substring(1))) { // Remove leading slash for relative path
logger.warn('Path traversal attempt blocked', { pathname });
return new Response('Not found', { status: 404 });
}
try {
const fullPath = `src/frontend/build${filePath}`;

View File

@@ -0,0 +1,103 @@
/**
* Migration: Add admin functionality to users table and create admin_actions table
*
* This script adds role-based access control (RBAC) for admin functionality:
* - Adds 'role' and 'isAdmin' columns to users table
* - Creates admin_actions table for audit logging
* - Adds indexes for performance
*/
import Database from 'bun:sqlite';
import { join, dirname } from 'path';
import { fileURLToPath } from 'url';
// ES module equivalent of __dirname
const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);
const dbPath = join(__dirname, '../award.db');
const sqlite = new Database(dbPath);
async function migrate() {
console.log('Starting migration: Add admin functionality...');
try {
// Check if role column already exists in users table
const columnExists = sqlite.query(`
SELECT COUNT(*) as count
FROM pragma_table_info('users')
WHERE name = 'role'
`).get();
if (columnExists.count > 0) {
console.log('Admin columns already exist in users table. Skipping...');
} else {
// Add role column to users table
sqlite.exec(`
ALTER TABLE users
ADD COLUMN role TEXT NOT NULL DEFAULT 'user'
`);
// Add isAdmin column to users table
sqlite.exec(`
ALTER TABLE users
ADD COLUMN is_admin INTEGER NOT NULL DEFAULT 0
`);
console.log('Added role and isAdmin columns to users table');
}
// Check if admin_actions table already exists
const tableExists = sqlite.query(`
SELECT name FROM sqlite_master
WHERE type='table' AND name='admin_actions'
`).get();
if (tableExists) {
console.log('Table admin_actions already exists. Skipping...');
} else {
// Create admin_actions table
sqlite.exec(`
CREATE TABLE admin_actions (
id INTEGER PRIMARY KEY AUTOINCREMENT,
admin_id INTEGER NOT NULL,
action_type TEXT NOT NULL,
target_user_id INTEGER,
details TEXT,
created_at INTEGER NOT NULL DEFAULT (strftime('%s', 'now') * 1000),
FOREIGN KEY (admin_id) REFERENCES users(id) ON DELETE CASCADE,
FOREIGN KEY (target_user_id) REFERENCES users(id) ON DELETE SET NULL
)
`);
// Create indexes for admin_actions
sqlite.exec(`
CREATE INDEX idx_admin_actions_admin_id ON admin_actions(admin_id)
`);
sqlite.exec(`
CREATE INDEX idx_admin_actions_action_type ON admin_actions(action_type)
`);
sqlite.exec(`
CREATE INDEX idx_admin_actions_created_at ON admin_actions(created_at)
`);
console.log('Created admin_actions table with indexes');
}
console.log('Migration complete! Admin functionality added to database.');
} catch (error) {
console.error('Migration failed:', error);
sqlite.close();
process.exit(1);
}
sqlite.close();
}
// Run migration
migrate().then(() => {
console.log('Migration script completed successfully');
process.exit(0);
});

View File

@@ -0,0 +1,81 @@
/**
* Migration: Add performance indexes for QSO queries
*
* This script creates database indexes to significantly improve query performance
* for filtering, sorting, sync operations, and QSO statistics. Expected impact:
* - 80% faster filter queries
* - 60% faster sync operations
* - 50% faster award calculations
* - 95% faster QSO statistics queries (critical optimization)
*/
import Database from 'bun:sqlite';
import { join } from 'path';
async function migrate() {
console.log('Starting migration: Add performance indexes...');
// Get the directory containing this migration file
const __dirname = new URL('.', import.meta.url).pathname;
const dbPath = join(__dirname, '../award.db');
const sqlite = new Database(dbPath);
try {
// Index 1: Filter queries by band
console.log('Creating index: idx_qsos_user_band');
sqlite.exec(`CREATE INDEX IF NOT EXISTS idx_qsos_user_band ON qsos(user_id, band)`);
// Index 2: Filter queries by mode
console.log('Creating index: idx_qsos_user_mode');
sqlite.exec(`CREATE INDEX IF NOT EXISTS idx_qsos_user_mode ON qsos(user_id, mode)`);
// Index 3: Filter queries by confirmation status
console.log('Creating index: idx_qsos_user_confirmation');
sqlite.exec(`CREATE INDEX IF NOT EXISTS idx_qsos_user_confirmation ON qsos(user_id, lotw_qsl_rstatus, dcl_qsl_rstatus)`);
// Index 4: Sync duplicate detection (CRITICAL - most impactful)
console.log('Creating index: idx_qsos_duplicate_check');
sqlite.exec(`CREATE INDEX IF NOT EXISTS idx_qsos_duplicate_check ON qsos(user_id, callsign, qso_date, time_on, band, mode)`);
// Index 5: Award calculations - LoTW confirmed QSOs
console.log('Creating index: idx_qsos_lotw_confirmed');
sqlite.exec(`CREATE INDEX IF NOT EXISTS idx_qsos_lotw_confirmed ON qsos(user_id, lotw_qsl_rstatus) WHERE lotw_qsl_rstatus = 'Y'`);
// Index 6: Award calculations - DCL confirmed QSOs
console.log('Creating index: idx_qsos_dcl_confirmed');
sqlite.exec(`CREATE INDEX IF NOT EXISTS idx_qsos_dcl_confirmed ON qsos(user_id, dcl_qsl_rstatus) WHERE dcl_qsl_rstatus = 'Y'`);
// Index 7: Date-based sorting
console.log('Creating index: idx_qsos_qso_date');
sqlite.exec(`CREATE INDEX IF NOT EXISTS idx_qsos_qso_date ON qsos(user_id, qso_date DESC)`);
// Index 8: QSO Statistics - Primary user filter (CRITICAL for getQSOStats)
console.log('Creating index: idx_qsos_user_primary');
sqlite.exec(`CREATE INDEX IF NOT EXISTS idx_qsos_user_primary ON qsos(user_id)`);
// Index 9: QSO Statistics - Unique counts (entity, band, mode)
console.log('Creating index: idx_qsos_user_unique_counts');
sqlite.exec(`CREATE INDEX IF NOT EXISTS idx_qsos_user_unique_counts ON qsos(user_id, entity, band, mode)`);
// Index 10: QSO Statistics - Optimized confirmation counting
console.log('Creating index: idx_qsos_stats_confirmation');
sqlite.exec(`CREATE INDEX IF NOT EXISTS idx_qsos_stats_confirmation ON qsos(user_id, lotw_qsl_rstatus, dcl_qsl_rstatus)`);
sqlite.close();
console.log('\nMigration complete! Created 10 performance indexes.');
console.log('\nTo verify indexes were created, run:');
console.log(' sqlite3 award.db ".indexes qsos"');
} catch (error) {
console.error('Migration failed:', error);
process.exit(1);
}
}
// Run migration
migrate().then(() => {
console.log('\nMigration script completed successfully');
process.exit(0);
});

View File

@@ -0,0 +1,74 @@
/**
* Migration: Add qso_changes table for sync job rollback
*
* This script adds the qso_changes table which tracks all QSO modifications
* made by sync jobs, enabling rollback functionality for failed or stale jobs.
*/
import Database from 'bun:sqlite';
import { join, dirname } from 'path';
import { fileURLToPath } from 'url';
// ES module equivalent of __dirname
const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);
const dbPath = join(__dirname, '../award.db');
const sqlite = new Database(dbPath);
async function migrate() {
console.log('Starting migration: Add qso_changes table...');
try {
// Check if table already exists
const tableExists = sqlite.query(`
SELECT name FROM sqlite_master
WHERE type='table' AND name='qso_changes'
`).get();
if (tableExists) {
console.log('Table qso_changes already exists. Migration complete.');
sqlite.close();
return;
}
// Create qso_changes table
sqlite.exec(`
CREATE TABLE qso_changes (
id INTEGER PRIMARY KEY AUTOINCREMENT,
job_id INTEGER NOT NULL,
qso_id INTEGER,
change_type TEXT NOT NULL,
before_data TEXT,
after_data TEXT,
created_at INTEGER NOT NULL DEFAULT (strftime('%s', 'now') * 1000),
FOREIGN KEY (job_id) REFERENCES sync_jobs(id) ON DELETE CASCADE,
FOREIGN KEY (qso_id) REFERENCES qsos(id) ON DELETE CASCADE
)
`);
// Create index for faster lookups during rollback
sqlite.exec(`
CREATE INDEX idx_qso_changes_job_id ON qso_changes(job_id)
`);
// Create index for change_type lookups
sqlite.exec(`
CREATE INDEX idx_qso_changes_change_type ON qso_changes(change_type)
`);
console.log('Migration complete! Created qso_changes table with indexes.');
} catch (error) {
console.error('Migration failed:', error);
sqlite.close();
process.exit(1);
}
sqlite.close();
}
// Run migration
migrate().then(() => {
console.log('Migration script completed successfully');
process.exit(0);
});

View File

@@ -0,0 +1,68 @@
/**
* Migration: Revert incorrect Germany entity assignment
*
* This script removes entity data from DCL-only QSOs that were incorrectly
* set to Germany. These QSOs should have empty entity fields since DCL
* doesn't provide DXCC data.
*/
import { db } from '../config.js';
import { qsos } from '../db/schema/index.js';
import { eq, and, sql } from 'drizzle-orm';
async function migrate() {
console.log('Starting migration: Revert incorrect Germany entity assignment...');
try {
// Find all DCL-confirmed QSOs that have entity set to Germany but NO LoTW confirmation
// These were incorrectly set by the previous migration
const dclQSOsIncorrectEntity = await db
.select()
.from(qsos)
.where(
and(
eq(qsos.dclQslRstatus, 'Y'),
sql`${qsos.entity} = 'FEDERAL REPUBLIC OF GERMANY'`,
sql`(${qsos.lotwQslRstatus} IS NULL OR ${qsos.lotwQslRstatus} != 'Y')`
)
);
console.log(`Found ${dclQSOsIncorrectEntity.length} DCL-only QSOs with incorrect Germany entity`);
if (dclQSOsIncorrectEntity.length === 0) {
console.log('No QSOs need reverting. Migration complete.');
return;
}
// Clear entity data for these QSOs
let updated = 0;
for (const qso of dclQSOsIncorrectEntity) {
await db
.update(qsos)
.set({
entity: '',
entityId: null,
continent: '',
cqZone: null,
ituZone: null,
})
.where(eq(qsos.id, qso.id));
updated++;
if (updated % 100 === 0) {
console.log(`Reverted ${updated}/${dclQSOsIncorrectEntity.length} QSOs...`);
}
}
console.log(`Migration complete! Reverted ${updated} QSOs to empty entity data.`);
} catch (error) {
console.error('Migration failed:', error);
process.exit(1);
}
}
// Run migration
migrate().then(() => {
console.log('Migration script completed successfully');
process.exit(0);
});

View File

@@ -0,0 +1,58 @@
/**
* Rollback: Remove performance indexes
*
* This script removes the performance indexes created by add-performance-indexes.js
* Use this if you need to drop the indexes for any reason.
*/
import Database from 'bun:sqlite';
import { join } from 'path';
async function rollback() {
console.log('Starting rollback: Remove performance indexes...');
// Get the directory containing this migration file
const __dirname = new URL('.', import.meta.url).pathname;
const dbPath = join(__dirname, '../award.db');
const sqlite = new Database(dbPath);
try {
console.log('Dropping index: idx_qsos_user_band');
sqlite.exec(`DROP INDEX IF EXISTS idx_qsos_user_band`);
console.log('Dropping index: idx_qsos_user_mode');
sqlite.exec(`DROP INDEX IF EXISTS idx_qsos_user_mode`);
console.log('Dropping index: idx_qsos_user_confirmation');
sqlite.exec(`DROP INDEX IF EXISTS idx_qsos_user_confirmation`);
console.log('Dropping index: idx_qsos_duplicate_check');
sqlite.exec(`DROP INDEX IF EXISTS idx_qsos_duplicate_check`);
console.log('Dropping index: idx_qsos_lotw_confirmed');
sqlite.exec(`DROP INDEX IF EXISTS idx_qsos_lotw_confirmed`);
console.log('Dropping index: idx_qsos_dcl_confirmed');
sqlite.exec(`DROP INDEX IF EXISTS idx_qsos_dcl_confirmed`);
console.log('Dropping index: idx_qsos_qso_date');
sqlite.exec(`DROP INDEX IF EXISTS idx_qsos_qso_date`);
sqlite.close();
console.log('\nRollback complete! Removed 7 performance indexes.');
console.log('\nTo verify indexes were dropped, run:');
console.log(' sqlite3 award.db ".indexes qsos"');
} catch (error) {
console.error('Rollback failed:', error);
process.exit(1);
}
}
// Run rollback
rollback().then(() => {
console.log('\nRollback script completed successfully');
process.exit(0);
});

View File

@@ -0,0 +1,251 @@
#!/usr/bin/env bun
/**
* Admin CLI Tool
*
* Usage:
* bun src/backend/scripts/admin-cli.js create <email> <password> <callsign>
* bun src/backend/scripts/admin-cli.js promote <email>
* bun src/backend/scripts/admin-cli.js demote <email>
* bun src/backend/scripts/admin-cli.js list
* bun src/backend/scripts/admin-cli.js check <email>
* bun src/backend/scripts/admin-cli.js help
*/
import Database from 'bun:sqlite';
import { join, dirname } from 'path';
import { fileURLToPath } from 'url';
// ES module equivalent of __dirname
const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);
const dbPath = join(__dirname, '../award.db');
const sqlite = new Database(dbPath);
// Enable foreign keys
sqlite.exec('PRAGMA foreign_keys = ON');
function help() {
console.log(`
Admin CLI Tool - Manage admin users
Commands:
create <email> <password> <callsign> Create a new admin user
promote <email> Promote existing user to admin
demote <email> Demote admin to regular user
list List all admin users
check <email> Check if user is admin
help Show this help message
Examples:
bun src/backend/scripts/admin-cli.js create admin@example.com secretPassword ADMIN
bun src/backend/scripts/admin-cli.js promote user@example.com
bun src/backend/scripts/admin-cli.js list
bun src/backend/scripts/admin-cli.js check user@example.com
`);
}
function createAdminUser(email, password, callsign) {
console.log(`Creating admin user: ${email}`);
// Check if user already exists
const existingUser = sqlite.query(`
SELECT id, email FROM users WHERE email = ?
`).get(email);
if (existingUser) {
console.error(`Error: User with email ${email} already exists`);
process.exit(1);
}
// Hash password
const passwordHash = Bun.password.hashSync(password, {
algorithm: 'bcrypt',
cost: 10,
});
// Ensure passwordHash is a string
const hashString = String(passwordHash);
// Insert admin user
const result = sqlite.query(`
INSERT INTO users (email, password_hash, callsign, is_admin, created_at, updated_at)
VALUES (?, ?, ?, 1, strftime('%s', 'now') * 1000, strftime('%s', 'now') * 1000)
`).run(email, hashString, callsign);
console.log(`✓ Admin user created successfully!`);
console.log(` ID: ${result.lastInsertRowid}`);
console.log(` Email: ${email}`);
console.log(` Callsign: ${callsign}`);
console.log(`\nYou can now log in with these credentials.`);
}
function promoteUser(email) {
console.log(`Promoting user to admin: ${email}`);
// Check if user exists
const user = sqlite.query(`
SELECT id, email, is_admin FROM users WHERE email = ?
`).get(email);
if (!user) {
console.error(`Error: User with email ${email} not found`);
process.exit(1);
}
if (user.is_admin === 1) {
console.log(`User ${email} is already an admin`);
return;
}
// Update user to admin
sqlite.query(`
UPDATE users
SET is_admin = 1, updated_at = strftime('%s', 'now') * 1000
WHERE email = ?
`).run(email);
console.log(`✓ User ${email} has been promoted to admin`);
}
function demoteUser(email) {
console.log(`Demoting admin to regular user: ${email}`);
// Check if user exists
const user = sqlite.query(`
SELECT id, email, is_admin FROM users WHERE email = ?
`).get(email);
if (!user) {
console.error(`Error: User with email ${email} not found`);
process.exit(1);
}
if (user.is_admin !== 1) {
console.log(`User ${email} is not an admin`);
return;
}
// Check if this is the last admin
const adminCount = sqlite.query(`
SELECT COUNT(*) as count FROM users WHERE is_admin = 1
`).get();
if (adminCount.count === 1) {
console.error(`Error: Cannot demote the last admin user. At least one admin must exist.`);
process.exit(1);
}
// Update user to regular user
sqlite.query(`
UPDATE users
SET is_admin = 0, updated_at = strftime('%s', 'now') * 1000
WHERE email = ?
`).run(email);
console.log(`✓ User ${email} has been demoted to regular user`);
}
function listAdmins() {
console.log('Listing all admin users...\n');
const admins = sqlite.query(`
SELECT id, email, callsign, created_at
FROM users
WHERE is_admin = 1
ORDER BY created_at ASC
`).all();
if (admins.length === 0) {
console.log('No admin users found');
return;
}
console.log(`Found ${admins.length} admin user(s):\n`);
console.log('ID | Email | Callsign | Created At');
console.log('----+----------------------------+----------+---------------------');
admins.forEach((admin) => {
const createdAt = new Date(admin.created_at).toLocaleString();
console.log(`${String(admin.id).padEnd(3)} | ${admin.email.padEnd(26)} | ${admin.callsign.padEnd(8)} | ${createdAt}`);
});
}
function checkUser(email) {
console.log(`Checking user status: ${email}\n`);
const user = sqlite.query(`
SELECT id, email, callsign, is_admin FROM users WHERE email = ?
`).get(email);
if (!user) {
console.log(`User not found: ${email}`);
process.exit(1);
}
const isAdmin = user.is_admin === 1;
console.log(`User found:`);
console.log(` Email: ${user.email}`);
console.log(` Callsign: ${user.callsign}`);
console.log(` Is Admin: ${isAdmin ? 'Yes ✓' : 'No'}`);
}
// Main CLI logic
const command = process.argv[2];
const args = process.argv.slice(3);
switch (command) {
case 'create':
if (args.length !== 3) {
console.error('Error: create command requires 3 arguments: <email> <password> <callsign>');
help();
process.exit(1);
}
createAdminUser(args[0], args[1], args[2]);
break;
case 'promote':
if (args.length !== 1) {
console.error('Error: promote command requires 1 argument: <email>');
help();
process.exit(1);
}
promoteUser(args[0]);
break;
case 'demote':
if (args.length !== 1) {
console.error('Error: demote command requires 1 argument: <email>');
help();
process.exit(1);
}
demoteUser(args[0]);
break;
case 'list':
listAdmins();
break;
case 'check':
if (args.length !== 1) {
console.error('Error: check command requires 1 argument: <email>');
help();
process.exit(1);
}
checkUser(args[0]);
break;
case 'help':
case '--help':
case '-h':
help();
break;
default:
console.error(`Error: Unknown command '${command}'`);
help();
process.exit(1);
}
sqlite.close();

View File

@@ -0,0 +1,33 @@
#!/usr/bin/env bun
/**
* Database initialization script
* Creates the database schema using Drizzle ORM
*/
import Database from 'bun:sqlite';
import { drizzle } from 'drizzle-orm/bun-sqlite';
import { migrate } from 'drizzle-orm/bun-sqlite/migrator';
import * as schema from '../db/schema/index.js';
import { join } from 'path';
const dbPath = join(process.cwd(), 'src/backend/award.db');
console.log('Creating database at:', dbPath);
// Create SQLite database
const sqlite = new Database(dbPath);
sqlite.exec('PRAGMA foreign_keys = ON');
const db = drizzle({
client: sqlite,
schema,
});
console.log('Running migrations...');
// Run migrations
await migrate(db, { migrationsFolder: join(process.cwd(), 'drizzle') });
console.log('✅ Database initialized successfully');
sqlite.close();

View File

@@ -0,0 +1,387 @@
import { eq, sql, desc } from 'drizzle-orm';
import { db, sqlite, logger } from '../config.js';
import { users, qsos, syncJobs, adminActions, awardProgress, qsoChanges } from '../db/schema/index.js';
import { getUserByIdFull, isAdmin } from './auth.service.js';
/**
* Log an admin action for audit trail
* @param {number} adminId - Admin user ID
* @param {string} actionType - Type of action (e.g., 'impersonate_start', 'role_change')
* @param {number|null} targetUserId - Target user ID (if applicable)
* @param {Object} details - Additional details (will be JSON stringified)
* @returns {Promise<Object>} Created admin action record
*/
export async function logAdminAction(adminId, actionType, targetUserId = null, details = {}) {
const [action] = await db
.insert(adminActions)
.values({
adminId,
actionType,
targetUserId,
details: JSON.stringify(details),
})
.returning();
return action;
}
/**
* Get admin actions log
* @param {number} adminId - Admin user ID (optional, if null returns all actions)
* @param {Object} options - Query options
* @param {number} options.limit - Number of records to return
* @param {number} options.offset - Number of records to skip
* @returns {Promise<Array>} Array of admin actions
*/
export async function getAdminActions(adminId = null, { limit = 50, offset = 0 } = {}) {
let query = db
.select({
id: adminActions.id,
adminId: adminActions.adminId,
adminEmail: users.email,
adminCallsign: users.callsign,
actionType: adminActions.actionType,
targetUserId: adminActions.targetUserId,
targetEmail: sql`target_users.email`.as('targetEmail'),
targetCallsign: sql`target_users.callsign`.as('targetCallsign'),
details: adminActions.details,
createdAt: adminActions.createdAt,
})
.from(adminActions)
.leftJoin(users, eq(adminActions.adminId, users.id))
.leftJoin(sql`${users} as target_users`, eq(adminActions.targetUserId, sql.raw('target_users.id')))
.orderBy(desc(adminActions.createdAt))
.limit(limit)
.offset(offset);
if (adminId) {
query = query.where(eq(adminActions.adminId, adminId));
}
return await query;
}
/**
* Get system-wide statistics
* @returns {Promise<Object>} System statistics
*/
export async function getSystemStats() {
const [
userStats,
qsoStats,
syncJobStats,
adminStats,
] = await Promise.all([
// User statistics
db.select({
totalUsers: sql`CAST(COUNT(*) AS INTEGER)`,
adminUsers: sql`CAST(SUM(CASE WHEN is_admin = 1 THEN 1 ELSE 0 END) AS INTEGER)`,
regularUsers: sql`CAST(SUM(CASE WHEN is_admin = 0 THEN 1 ELSE 0 END) AS INTEGER)`,
}).from(users),
// QSO statistics
db.select({
totalQSOs: sql`CAST(COUNT(*) AS INTEGER)`,
uniqueCallsigns: sql`CAST(COUNT(DISTINCT callsign) AS INTEGER)`,
uniqueEntities: sql`CAST(COUNT(DISTINCT entity_id) AS INTEGER)`,
lotwConfirmed: sql`CAST(SUM(CASE WHEN lotw_qsl_rstatus = 'Y' THEN 1 ELSE 0 END) AS INTEGER)`,
dclConfirmed: sql`CAST(SUM(CASE WHEN dcl_qsl_rstatus = 'Y' THEN 1 ELSE 0 END) AS INTEGER)`,
}).from(qsos),
// Sync job statistics
db.select({
totalJobs: sql`CAST(COUNT(*) AS INTEGER)`,
lotwJobs: sql`CAST(SUM(CASE WHEN type = 'lotw_sync' THEN 1 ELSE 0 END) AS INTEGER)`,
dclJobs: sql`CAST(SUM(CASE WHEN type = 'dcl_sync' THEN 1 ELSE 0 END) AS INTEGER)`,
completedJobs: sql`CAST(SUM(CASE WHEN status = 'completed' THEN 1 ELSE 0 END) AS INTEGER)`,
failedJobs: sql`CAST(SUM(CASE WHEN status = 'failed' THEN 1 ELSE 0 END) AS INTEGER)`,
}).from(syncJobs),
// Admin action statistics
db.select({
totalAdminActions: sql`CAST(COUNT(*) AS INTEGER)`,
impersonations: sql`CAST(SUM(CASE WHEN action_type LIKE 'impersonate%' THEN 1 ELSE 0 END) AS INTEGER)`,
}).from(adminActions),
]);
return {
users: userStats[0],
qsos: qsoStats[0],
syncJobs: syncJobStats[0],
adminActions: adminStats[0],
};
}
/**
* Get per-user statistics (for admin overview)
* @returns {Promise<Array>} Array of user statistics
*/
export async function getUserStats() {
const stats = await db
.select({
id: users.id,
email: users.email,
callsign: users.callsign,
isAdmin: users.isAdmin,
qsoCount: sql`CAST(COUNT(${qsos.id}) AS INTEGER)`,
lotwConfirmed: sql`CAST(SUM(CASE WHEN ${qsos.lotwQslRstatus} = 'Y' THEN 1 ELSE 0 END) AS INTEGER)`,
dclConfirmed: sql`CAST(SUM(CASE WHEN ${qsos.dclQslRstatus} = 'Y' THEN 1 ELSE 0 END) AS INTEGER)`,
totalConfirmed: sql`CAST(SUM(CASE WHEN ${qsos.lotwQslRstatus} = 'Y' OR ${qsos.dclQslRstatus} = 'Y' THEN 1 ELSE 0 END) AS INTEGER)`,
lastSync: sql`MAX(${qsos.createdAt})`,
createdAt: users.createdAt,
})
.from(users)
.leftJoin(qsos, eq(users.id, qsos.userId))
.groupBy(users.id)
.orderBy(sql`COUNT(${qsos.id}) DESC`);
return stats;
}
/**
* Impersonate a user
* @param {number} adminId - Admin user ID
* @param {number} targetUserId - Target user ID to impersonate
* @returns {Promise<Object>} Target user object
* @throws {Error} If not admin or trying to impersonate another admin
*/
export async function impersonateUser(adminId, targetUserId) {
// Verify the requester is an admin
const requesterIsAdmin = await isAdmin(adminId);
if (!requesterIsAdmin) {
throw new Error('Only admins can impersonate users');
}
// Get target user
const targetUser = await getUserByIdFull(targetUserId);
if (!targetUser) {
throw new Error('Target user not found');
}
// Check if target is also an admin (prevent admin impersonation)
if (targetUser.isAdmin) {
throw new Error('Cannot impersonate another admin user');
}
// Log impersonation action
await logAdminAction(adminId, 'impersonate_start', targetUserId, {
targetEmail: targetUser.email,
targetCallsign: targetUser.callsign,
});
return targetUser;
}
/**
* Verify impersonation token is valid
* @param {Object} impersonationToken - JWT token payload containing impersonation data
* @returns {Promise<Object>} Verification result with target user data
*/
export async function verifyImpersonation(impersonationToken) {
const { adminId, targetUserId, exp } = impersonationToken;
// Check if token is expired
if (Date.now() > exp * 1000) {
throw new Error('Impersonation token has expired');
}
// Verify admin still exists and is admin
const adminUser = await getUserByIdFull(adminId);
if (!adminUser || !adminUser.isAdmin) {
throw new Error('Invalid impersonation: Admin no longer exists or is not admin');
}
// Get target user
const targetUser = await getUserByIdFull(targetUserId);
if (!targetUser) {
throw new Error('Target user not found');
}
// Return target user with admin metadata for frontend display
return {
...targetUser,
impersonating: {
adminId,
adminEmail: adminUser.email,
adminCallsign: adminUser.callsign,
},
};
}
/**
* Stop impersonating a user
* @param {number} adminId - Admin user ID
* @param {number} targetUserId - Target user ID being impersonated
* @returns {Promise<void>}
*/
export async function stopImpersonation(adminId, targetUserId) {
await logAdminAction(adminId, 'impersonate_stop', targetUserId, {
message: 'Impersonation session ended',
});
}
/**
* Get impersonation status for an admin
* @param {number} adminId - Admin user ID
* @param {Object} options - Query options
* @param {number} options.limit - Number of recent impersonations to return
* @returns {Promise<Array>} Array of recent impersonation actions
*/
export async function getImpersonationStatus(adminId, { limit = 10 } = {}) {
const impersonations = await db
.select({
id: adminActions.id,
actionType: adminActions.actionType,
targetUserId: adminActions.targetUserId,
targetEmail: sql`target_users.email`,
targetCallsign: sql`target_users.callsign`,
details: adminActions.details,
createdAt: adminActions.createdAt,
})
.from(adminActions)
.leftJoin(sql`${users} as target_users`, eq(adminActions.targetUserId, sql.raw('target_users.id')))
.where(eq(adminActions.adminId, adminId))
.where(sql`${adminActions.actionType} LIKE 'impersonate%'`)
.orderBy(desc(adminActions.createdAt))
.limit(limit);
return impersonations;
}
/**
* Update user admin status (admin operation)
* @param {number} adminId - Admin user ID making the change
* @param {number} targetUserId - User ID to update
* @param {boolean} newIsAdmin - New admin flag
* @returns {Promise<void>}
* @throws {Error} If not admin or would remove last admin
*/
export async function changeUserRole(adminId, targetUserId, newIsAdmin) {
// Verify the requester is an admin
const requesterIsAdmin = await isAdmin(adminId);
if (!requesterIsAdmin) {
throw new Error('Only admins can change user admin status');
}
// Get target user
const targetUser = await getUserByIdFull(targetUserId);
if (!targetUser) {
throw new Error('Target user not found');
}
// If demoting from admin, check if this would remove the last admin
if (targetUser.isAdmin && !newIsAdmin) {
const adminCount = await db
.select({ count: sql`CAST(COUNT(*) AS INTEGER)` })
.from(users)
.where(eq(users.isAdmin, 1));
if (adminCount[0].count === 1) {
throw new Error('Cannot demote the last admin user');
}
}
// Update admin status
await db
.update(users)
.set({
isAdmin: newIsAdmin ? 1 : 0,
updatedAt: new Date(),
})
.where(eq(users.id, targetUserId));
// Log action
await logAdminAction(adminId, 'role_change', targetUserId, {
oldIsAdmin: targetUser.isAdmin,
newIsAdmin: newIsAdmin,
});
}
/**
* Delete user (admin operation)
* @param {number} adminId - Admin user ID making the change
* @param {number} targetUserId - User ID to delete
* @returns {Promise<void>}
* @throws {Error} If not admin, trying to delete self, or trying to delete another admin
*/
export async function deleteUser(adminId, targetUserId) {
// Verify the requester is an admin
const requesterIsAdmin = await isAdmin(adminId);
if (!requesterIsAdmin) {
throw new Error('Only admins can delete users');
}
// Get target user
const targetUser = await getUserByIdFull(targetUserId);
if (!targetUser) {
throw new Error('Target user not found');
}
// Prevent deleting self
if (adminId === targetUserId) {
throw new Error('Cannot delete your own account');
}
// Prevent deleting other admins
if (targetUser.isAdmin) {
throw new Error('Cannot delete admin users');
}
// Get stats for logging
const [qsoStats] = await db
.select({ count: sql`CAST(COUNT(*) AS INTEGER)` })
.from(qsos)
.where(eq(qsos.userId, targetUserId));
// Delete all related records using Drizzle
// Delete in correct order to satisfy foreign key constraints
logger.info('Attempting to delete user', { userId: targetUserId, adminId });
try {
// 1. Delete qso_changes (references qso_id -> qsos and job_id -> sync_jobs)
// First get user's QSO IDs, then delete qso_changes referencing those QSOs
const userQSOs = await db.select({ id: qsos.id }).from(qsos).where(eq(qsos.userId, targetUserId));
const userQSOIds = userQSOs.map(q => q.id);
if (userQSOIds.length > 0) {
// Use raw SQL to delete qso_changes
sqlite.exec(
`DELETE FROM qso_changes WHERE qso_id IN (${userQSOIds.join(',')})`
);
}
// 2. Delete award_progress
await db.delete(awardProgress).where(eq(awardProgress.userId, targetUserId));
// 3. Delete sync_jobs
await db.delete(syncJobs).where(eq(syncJobs.userId, targetUserId));
// 4. Delete qsos
await db.delete(qsos).where(eq(qsos.userId, targetUserId));
// 5. Delete admin actions where user is target
await db.delete(adminActions).where(eq(adminActions.targetUserId, targetUserId));
// 6. Delete user
await db.delete(users).where(eq(users.id, targetUserId));
// Log action
await logAdminAction(adminId, 'user_delete', targetUserId, {
email: targetUser.email,
callsign: targetUser.callsign,
qsoCountDeleted: qsoStats.count,
});
logger.info('User deleted successfully', { userId: targetUserId, adminId });
} catch (error) {
logger.error('Failed to delete user', { error: error.message, userId: targetUserId });
throw error;
}
// Log action
await logAdminAction(adminId, 'user_delete', targetUserId, {
email: targetUser.email,
callsign: targetUser.callsign,
qsoCountDeleted: qsoStats.count,
});
}

View File

@@ -142,3 +142,97 @@ export async function updateDCLCredentials(userId, dclApiKey) {
})
.where(eq(users.id, userId));
}
/**
* Check if user is admin
* @param {number} userId - User ID
* @returns {Promise<boolean>} True if user is admin
*/
export async function isAdmin(userId) {
const [user] = await db
.select({ isAdmin: users.isAdmin })
.from(users)
.where(eq(users.id, userId))
.limit(1);
return user?.isAdmin === true || user?.isAdmin === 1;
}
/**
* Get all admin users
* @returns {Promise<Array>} Array of admin users (without passwords)
*/
export async function getAdminUsers() {
const adminUsers = await db
.select({
id: users.id,
email: users.email,
callsign: users.callsign,
isAdmin: users.isAdmin,
createdAt: users.createdAt,
})
.from(users)
.where(eq(users.isAdmin, 1));
return adminUsers;
}
/**
* Update user admin status
* @param {number} userId - User ID
* @param {boolean} isAdmin - Admin flag
* @returns {Promise<void>}
*/
export async function updateUserRole(userId, isAdmin) {
await db
.update(users)
.set({
isAdmin: isAdmin ? 1 : 0,
updatedAt: new Date(),
})
.where(eq(users.id, userId));
}
/**
* Get all users (for admin use)
* @returns {Promise<Array>} Array of all users (without passwords)
*/
export async function getAllUsers() {
const allUsers = await db
.select({
id: users.id,
email: users.email,
callsign: users.callsign,
isAdmin: users.isAdmin,
createdAt: users.createdAt,
updatedAt: users.updatedAt,
})
.from(users)
.orderBy(users.createdAt);
return allUsers;
}
/**
* Get user by ID (for admin use)
* @param {number} userId - User ID
* @returns {Promise<Object|null>} Full user object (without password) or null
*/
export async function getUserByIdFull(userId) {
const [user] = await db
.select({
id: users.id,
email: users.email,
callsign: users.callsign,
isAdmin: users.isAdmin,
lotwUsername: users.lotwUsername,
dclApiKey: users.dclApiKey,
createdAt: users.createdAt,
updatedAt: users.updatedAt,
})
.from(users)
.where(eq(users.id, userId))
.limit(1);
return user || null;
}

View File

@@ -3,6 +3,7 @@ import { qsos } from '../db/schema/index.js';
import { eq, and, or, desc, sql } from 'drizzle-orm';
import { readFileSync } from 'fs';
import { join } from 'path';
import { getCachedAwardProgress, setCachedAwardProgress } from './cache.service.js';
/**
* Awards Service
@@ -31,6 +32,7 @@ function loadAwardDefinitions() {
'dld-40m.json',
'dld-cw.json',
'dld-80m-cw.json',
'73-on-73.json',
];
for (const file of files) {
@@ -210,11 +212,13 @@ async function calculateDOKAwardProgress(userId, award, options = {}) {
// Initialize combination if not exists
if (!dokCombinations.has(combinationKey)) {
dokCombinations.set(combinationKey, {
qsoId: qso.id,
entity: dok,
entityId: null,
entityName: dok,
band,
mode,
callsign: qso.callsign,
worked: false,
confirmed: false,
qsoDate: qso.qsoDate,
@@ -335,6 +339,7 @@ async function calculatePointsAwardProgress(userId, award, options = {}) {
if (!combinationMap.has(combinationKey)) {
combinationMap.set(combinationKey, {
qsoId: qso.id,
callsign,
band,
mode,
@@ -373,6 +378,7 @@ async function calculatePointsAwardProgress(userId, award, options = {}) {
if (!stationMap.has(callsign)) {
stationMap.set(callsign, {
qsoId: qso.id,
callsign,
points,
worked: true,
@@ -410,6 +416,7 @@ async function calculatePointsAwardProgress(userId, award, options = {}) {
if (qso.lotwQslRstatus === 'Y') {
totalPoints += points;
stationDetails.push({
qsoId: qso.id,
callsign,
points,
worked: true,
@@ -446,6 +453,7 @@ async function calculatePointsAwardProgress(userId, award, options = {}) {
const entities = stationDetails.map((detail) => {
if (countMode === 'perBandMode') {
return {
qsoId: detail.qsoId,
entity: `${detail.callsign}/${detail.band}/${detail.mode}`,
entityId: null,
entityName: `${detail.callsign} (${detail.band}/${detail.mode})`,
@@ -460,6 +468,7 @@ async function calculatePointsAwardProgress(userId, award, options = {}) {
};
} else if (countMode === 'perStation') {
return {
qsoId: detail.qsoId,
entity: detail.callsign,
entityId: null,
entityName: detail.callsign,
@@ -474,6 +483,7 @@ async function calculatePointsAwardProgress(userId, award, options = {}) {
};
} else {
return {
qsoId: detail.qsoId,
entity: `${detail.callsign}-${detail.qsoDate}`,
entityId: null,
entityName: `${detail.callsign} on ${detail.qsoDate}`,
@@ -577,6 +587,15 @@ function matchesFilter(qso, filter) {
* Get award progress with QSO details
*/
export async function getAwardProgressDetails(userId, awardId) {
// Check cache first
const cached = getCachedAwardProgress(userId, awardId);
if (cached) {
logger.debug(`Cache hit for award ${awardId}, user ${userId}`);
return cached;
}
logger.debug(`Cache miss for award ${awardId}, user ${userId} - calculating...`);
// Get award definition
const definitions = loadAwardDefinitions();
const award = definitions.find((def) => def.id === awardId);
@@ -588,7 +607,7 @@ export async function getAwardProgressDetails(userId, awardId) {
// Calculate progress
const progress = await calculateAwardProgress(userId, award);
return {
const result = {
award: {
id: award.id,
name: award.name,
@@ -598,6 +617,11 @@ export async function getAwardProgressDetails(userId, awardId) {
},
...progress,
};
// Store in cache
setCachedAwardProgress(userId, awardId, result);
return result;
}
/**
@@ -673,6 +697,7 @@ export async function getAwardEntityBreakdown(userId, awardId) {
}
entityMap.set(entity, {
qsoId: qso.id,
entity,
entityId: qso.entityId,
entityName: displayName,

View File

@@ -0,0 +1,275 @@
/**
* Cache Service for Award Progress
*
* Provides in-memory caching for award progress calculations to avoid
* expensive database aggregations on every request.
*
* Cache TTL: 5 minutes (balances freshness with performance)
*
* Usage:
* - Check cache before calculating award progress
* - Invalidate cache when QSOs are synced/updated
* - Automatic expiry after TTL
*/
const awardCache = new Map();
const statsCache = new Map();
const CACHE_TTL = 5 * 60 * 1000; // 5 minutes
/**
* Get cached award progress if available and not expired
* @param {number} userId - User ID
* @param {string} awardId - Award ID
* @returns {object|null} Cached progress data or null if not found/expired
*/
export function getCachedAwardProgress(userId, awardId) {
const key = `${userId}:${awardId}`;
const cached = awardCache.get(key);
if (!cached) {
recordAwardCacheMiss();
return null;
}
// Check if cache has expired
const age = Date.now() - cached.timestamp;
if (age > CACHE_TTL) {
awardCache.delete(key);
recordAwardCacheMiss();
return null;
}
recordAwardCacheHit();
return cached.data;
}
/**
* Set award progress in cache
* @param {number} userId - User ID
* @param {string} awardId - Award ID
* @param {object} data - Award progress data to cache
*/
export function setCachedAwardProgress(userId, awardId, data) {
const key = `${userId}:${awardId}`;
awardCache.set(key, {
data,
timestamp: Date.now()
});
}
/**
* Invalidate all cached awards for a specific user
* Call this after syncing or updating QSOs
* @param {number} userId - User ID
*/
export function invalidateUserCache(userId) {
const prefix = `${userId}:`;
let deleted = 0;
for (const [key] of awardCache) {
if (key.startsWith(prefix)) {
awardCache.delete(key);
deleted++;
}
}
return deleted;
}
/**
* Clear all cached awards (use sparingly)
*主要用于测试或紧急情况
*/
export function clearAllCache() {
const size = awardCache.size;
awardCache.clear();
return size;
}
/**
* Get cache statistics (for monitoring/debugging)
* @returns {object} Cache stats
*/
export function getCacheStats() {
const now = Date.now();
let expired = 0;
let valid = 0;
for (const [, value] of awardCache) {
const age = now - value.timestamp;
if (age > CACHE_TTL) {
expired++;
} else {
valid++;
}
}
return {
total: awardCache.size,
valid,
expired,
ttl: CACHE_TTL
};
}
/**
* Clean up expired cache entries (maintenance function)
* Can be called periodically to free memory
* @returns {number} Number of entries cleaned up
*/
export function cleanupExpiredCache() {
const now = Date.now();
let cleaned = 0;
for (const [key, value] of awardCache) {
const age = now - value.timestamp;
if (age > CACHE_TTL) {
awardCache.delete(key);
cleaned++;
}
}
for (const [key, value] of statsCache) {
const age = now - value.timestamp;
if (age > CACHE_TTL) {
statsCache.delete(key);
cleaned++;
}
}
return cleaned;
}
/**
* Get cached QSO statistics if available and not expired
* @param {number} userId - User ID
* @returns {object|null} Cached stats data or null if not found/expired
*/
export function getCachedStats(userId) {
const key = `stats_${userId}`;
const cached = statsCache.get(key);
if (!cached) {
recordStatsCacheMiss();
return null;
}
// Check if cache has expired
const age = Date.now() - cached.timestamp;
if (age > CACHE_TTL) {
statsCache.delete(key);
recordStatsCacheMiss();
return null;
}
recordStatsCacheHit();
return cached.data;
}
/**
* Set QSO statistics in cache
* @param {number} userId - User ID
* @param {object} data - Statistics data to cache
*/
export function setCachedStats(userId, data) {
const key = `stats_${userId}`;
statsCache.set(key, {
data,
timestamp: Date.now()
});
}
/**
* Invalidate cached QSO statistics for a specific user
* Call this after syncing or updating QSOs
* @param {number} userId - User ID
* @returns {boolean} True if cache was invalidated
*/
export function invalidateStatsCache(userId) {
const key = `stats_${userId}`;
const deleted = statsCache.delete(key);
return deleted;
}
/**
* Get cache statistics including both award and stats caches
* @returns {object} Cache stats
*/
export function getCacheStats() {
const now = Date.now();
let expired = 0;
let valid = 0;
for (const [, value] of awardCache) {
const age = now - value.timestamp;
if (age > CACHE_TTL) {
expired++;
} else {
valid++;
}
}
for (const [, value] of statsCache) {
const age = now - value.timestamp;
if (age > CACHE_TTL) {
expired++;
} else {
valid++;
}
}
const totalRequests = awardCacheStats.hits + awardCacheStats.misses + statsCacheStats.hits + statsCacheStats.misses;
const hitRate = totalRequests > 0 ? ((awardCacheStats.hits + statsCacheStats.hits) / totalRequests * 100).toFixed(2) + '%' : '0%';
return {
total: awardCache.size + statsCache.size,
valid,
expired,
ttl: CACHE_TTL,
hitRate,
awardCache: {
size: awardCache.size,
hits: awardCacheStats.hits,
misses: awardCacheStats.misses
},
statsCache: {
size: statsCache.size,
hits: statsCacheStats.hits,
misses: statsCacheStats.misses
}
};
}
/**
* Cache statistics tracking
*/
const awardCacheStats = { hits: 0, misses: 0 };
const statsCacheStats = { hits: 0, misses: 0 };
/**
* Record a cache hit for awards
*/
export function recordAwardCacheHit() {
awardCacheStats.hits++;
}
/**
* Record a cache miss for awards
*/
export function recordAwardCacheMiss() {
awardCacheStats.misses++;
}
/**
* Record a cache hit for stats
*/
export function recordStatsCacheHit() {
statsCacheStats.hits++;
}
/**
* Record a cache miss for stats
*/
export function recordStatsCacheMiss() {
statsCacheStats.misses++;
}

View File

@@ -1,8 +1,9 @@
import { db, logger } from '../config.js';
import { qsos } from '../db/schema/index.js';
import { qsos, qsoChanges } from '../db/schema/index.js';
import { max, sql, eq, and, desc } from 'drizzle-orm';
import { updateJobProgress } from './job-queue.service.js';
import { parseDCLResponse, normalizeBand, normalizeMode } from '../utils/adif-parser.js';
import { invalidateUserCache, invalidateStatsCache } from './cache.service.js';
/**
* DCL (DARC Community Logbook) Service
@@ -148,6 +149,7 @@ function convertQSODatabaseFormat(adifQSO, userId) {
mode: normalizeMode(adifQSO.mode),
freq: adifQSO.freq ? parseInt(adifQSO.freq) : null,
freqRx: adifQSO.freq_rx ? parseInt(adifQSO.freq_rx) : null,
// DCL may or may not include DXCC fields - use them if available
entity: adifQSO.country || adifQSO.dxcc_country || '',
entityId: adifQSO.dxcc ? parseInt(adifQSO.dxcc) : null,
grid: adifQSO.gridsquare || '',
@@ -168,7 +170,22 @@ function convertQSODatabaseFormat(adifQSO, userId) {
}
/**
* Sync QSOs from DCL to database
* Yield to event loop to allow other requests to be processed
* This prevents blocking the server during long-running sync operations
*/
function yieldToEventLoop() {
return new Promise(resolve => setImmediate(resolve));
}
/**
* Get QSO key for duplicate detection
*/
function getQSOKey(qso) {
return `${qso.callsign}|${qso.qsoDate}|${qso.timeOn}|${qso.band}|${qso.mode}`;
}
/**
* Sync QSOs from DCL to database (optimized with batch operations)
* Updates existing QSOs with DCL confirmation data
*
* @param {number} userId - User ID
@@ -217,106 +234,215 @@ export async function syncQSOs(userId, dclApiKey, sinceDate = null, jobId = null
const addedQSOs = [];
const updatedQSOs = [];
for (let i = 0; i < adifQSOs.length; i++) {
const adifQSO = adifQSOs[i];
// Convert all QSOs to database format
const dbQSOs = adifQSOs.map(qso => convertQSODatabaseFormat(qso, userId));
try {
const dbQSO = convertQSODatabaseFormat(adifQSO, userId);
// Batch size for processing
const BATCH_SIZE = 100;
const totalBatches = Math.ceil(dbQSOs.length / BATCH_SIZE);
// Check if QSO already exists (match by callsign, date, time, band, mode)
const existing = await db
.select()
.from(qsos)
.where(
and(
eq(qsos.userId, userId),
eq(qsos.callsign, dbQSO.callsign),
eq(qsos.qsoDate, dbQSO.qsoDate),
eq(qsos.timeOn, dbQSO.timeOn),
eq(qsos.band, dbQSO.band),
eq(qsos.mode, dbQSO.mode)
)
for (let batchNum = 0; batchNum < totalBatches; batchNum++) {
const startIdx = batchNum * BATCH_SIZE;
const endIdx = Math.min(startIdx + BATCH_SIZE, dbQSOs.length);
const batch = dbQSOs.slice(startIdx, endIdx);
// Get unique callsigns and dates from batch
const batchCallsigns = [...new Set(batch.map(q => q.callsign))];
const batchDates = [...new Set(batch.map(q => q.qsoDate))];
// Fetch all existing QSOs that could match this batch in one query
const existingQSOs = await db
.select()
.from(qsos)
.where(
and(
eq(qsos.userId, userId),
// Match callsigns OR dates from this batch
sql`(${qsos.callsign} IN ${batchCallsigns} OR ${qsos.qsoDate} IN ${batchDates})`
)
.limit(1);
);
if (existing.length > 0) {
const existingQSO = existing[0];
// Build lookup map for existing QSOs
const existingMap = new Map();
for (const existing of existingQSOs) {
const key = getQSOKey(existing);
existingMap.set(key, existing);
}
// Check if DCL confirmation or DOK data has changed
const dataChanged =
existingQSO.dclQslRstatus !== dbQSO.dclQslRstatus ||
existingQSO.dclQslRdate !== dbQSO.dclQslRdate ||
existingQSO.darcDok !== (dbQSO.darcDok || existingQSO.darcDok) ||
existingQSO.myDarcDok !== (dbQSO.myDarcDok || existingQSO.myDarcDok) ||
existingQSO.grid !== (dbQSO.grid || existingQSO.grid);
// Process batch
const toInsert = [];
const toUpdate = [];
const changeRecords = [];
if (dataChanged) {
// Update existing QSO with changed DCL confirmation and DOK data
// Only update DOK/grid fields if DCL actually sent values (non-empty)
const updateData = {
dclQslRdate: dbQSO.dclQslRdate,
dclQslRstatus: dbQSO.dclQslRstatus,
};
for (const dbQSO of batch) {
try {
const key = getQSOKey(dbQSO);
const existingQSO = existingMap.get(key);
// Only add DOK fields if DCL sent them
if (dbQSO.darcDok) updateData.darcDok = dbQSO.darcDok;
if (dbQSO.myDarcDok) updateData.myDarcDok = dbQSO.myDarcDok;
if (existingQSO) {
// Check if DCL confirmation or DOK data has changed
const dataChanged =
existingQSO.dclQslRstatus !== dbQSO.dclQslRstatus ||
existingQSO.dclQslRdate !== dbQSO.dclQslRdate ||
existingQSO.darcDok !== (dbQSO.darcDok || existingQSO.darcDok) ||
existingQSO.myDarcDok !== (dbQSO.myDarcDok || existingQSO.myDarcDok) ||
existingQSO.grid !== (dbQSO.grid || existingQSO.grid);
// Only update grid if DCL sent one
if (dbQSO.grid) {
updateData.grid = dbQSO.grid;
updateData.gridSource = dbQSO.gridSource;
if (dataChanged) {
// Build update data
const updateData = {
dclQslRdate: dbQSO.dclQslRdate,
dclQslRstatus: dbQSO.dclQslRstatus,
};
// Only add DOK fields if DCL sent them
if (dbQSO.darcDok) updateData.darcDok = dbQSO.darcDok;
if (dbQSO.myDarcDok) updateData.myDarcDok = dbQSO.myDarcDok;
// Only update grid if DCL sent one
if (dbQSO.grid) {
updateData.grid = dbQSO.grid;
updateData.gridSource = dbQSO.gridSource;
}
// DXCC priority: LoTW > DCL
// Only update entity fields from DCL if:
// 1. QSO is NOT LoTW confirmed, AND
// 2. DCL actually sent entity data, AND
// 3. Current entity is missing
const hasLoTWConfirmation = existingQSO.lotwQslRstatus === 'Y';
const hasDCLData = dbQSO.entity || dbQSO.entityId;
const missingEntity = !existingQSO.entity || existingQSO.entity === '';
if (!hasLoTWConfirmation && hasDCLData && missingEntity) {
if (dbQSO.entity) updateData.entity = dbQSO.entity;
if (dbQSO.entityId) updateData.entityId = dbQSO.entityId;
if (dbQSO.continent) updateData.continent = dbQSO.continent;
if (dbQSO.cqZone) updateData.cqZone = dbQSO.cqZone;
if (dbQSO.ituZone) updateData.ituZone = dbQSO.ituZone;
}
toUpdate.push({
id: existingQSO.id,
data: updateData,
});
// Track change for rollback
if (jobId) {
changeRecords.push({
jobId,
qsoId: existingQSO.id,
changeType: 'updated',
beforeData: JSON.stringify({
dclQslRstatus: existingQSO.dclQslRstatus,
dclQslRdate: existingQSO.dclQslRdate,
darcDok: existingQSO.darcDok,
myDarcDok: existingQSO.myDarcDok,
grid: existingQSO.grid,
gridSource: existingQSO.gridSource,
entity: existingQSO.entity,
entityId: existingQSO.entityId,
}),
afterData: JSON.stringify({
dclQslRstatus: dbQSO.dclQslRstatus,
dclQslRdate: dbQSO.dclQslRdate,
darcDok: updateData.darcDok,
myDarcDok: updateData.myDarcDok,
grid: updateData.grid,
gridSource: updateData.gridSource,
entity: updateData.entity,
entityId: updateData.entityId,
}),
});
}
updatedQSOs.push({
id: existingQSO.id,
callsign: dbQSO.callsign,
date: dbQSO.qsoDate,
band: dbQSO.band,
mode: dbQSO.mode,
});
updatedCount++;
} else {
skippedCount++;
}
await db
.update(qsos)
.set(updateData)
.where(eq(qsos.id, existingQSO.id));
updatedCount++;
// Track updated QSO (CALL and DATE)
updatedQSOs.push({
} else {
// New QSO to insert
toInsert.push(dbQSO);
addedQSOs.push({
callsign: dbQSO.callsign,
date: dbQSO.qsoDate,
band: dbQSO.band,
mode: dbQSO.mode,
});
} else {
// Skip - same data
skippedCount++;
addedCount++;
}
} else {
// Insert new QSO
await db.insert(qsos).values(dbQSO);
addedCount++;
// Track added QSO (CALL and DATE)
addedQSOs.push({
callsign: dbQSO.callsign,
date: dbQSO.qsoDate,
band: dbQSO.band,
mode: dbQSO.mode,
} catch (error) {
logger.error('Failed to process DCL QSO in batch', {
error: error.message,
qso: dbQSO,
userId,
});
errors.push({ qso: dbQSO, error: error.message });
}
// Update job progress every 10 QSOs
if (jobId && (i + 1) % 10 === 0) {
await updateJobProgress(jobId, {
processed: i + 1,
message: `Processed ${i + 1}/${adifQSOs.length} QSOs from DCL...`,
});
}
} catch (error) {
logger.error('Failed to process DCL QSO', {
error: error.message,
qso: adifQSO,
userId,
});
errors.push({ qso: adifQSO, error: error.message });
}
// Batch insert new QSOs
if (toInsert.length > 0) {
const inserted = await db.insert(qsos).values(toInsert).returning();
// Track inserted QSOs with their IDs for change tracking
if (jobId) {
for (let i = 0; i < inserted.length; i++) {
changeRecords.push({
jobId,
qsoId: inserted[i].id,
changeType: 'added',
beforeData: null,
afterData: JSON.stringify({
callsign: toInsert[i].callsign,
qsoDate: toInsert[i].qsoDate,
timeOn: toInsert[i].timeOn,
band: toInsert[i].band,
mode: toInsert[i].mode,
}),
});
// Update addedQSOs with actual IDs
addedQSOs[addedCount - inserted.length + i].id = inserted[i].id;
}
}
}
// Batch update existing QSOs
if (toUpdate.length > 0) {
for (const update of toUpdate) {
await db
.update(qsos)
.set(update.data)
.where(eq(qsos.id, update.id));
}
}
// Batch insert change records
if (changeRecords.length > 0) {
await db.insert(qsoChanges).values(changeRecords);
}
// Update job progress after each batch
if (jobId) {
await updateJobProgress(jobId, {
processed: endIdx,
message: `Processed ${endIdx}/${dbQSOs.length} QSOs from DCL...`,
});
}
// Yield to event loop after each batch to allow other requests
await yieldToEventLoop();
}
const result = {
success: true,
total: adifQSOs.length,
total: dbQSOs.length,
added: addedCount,
updated: updatedCount,
skipped: skippedCount,
@@ -332,6 +458,11 @@ export async function syncQSOs(userId, dclApiKey, sinceDate = null, jobId = null
jobId,
});
// Invalidate award cache for this user since QSOs may have changed
const deletedCache = invalidateUserCache(userId);
invalidateStatsCache(userId);
logger.debug(`Invalidated ${deletedCache} cached award entries and stats cache for user ${userId}`);
return result;
} catch (error) {

View File

@@ -1,6 +1,6 @@
import { db, logger } from '../config.js';
import { syncJobs } from '../db/schema/index.js';
import { eq, and, or, lt } from 'drizzle-orm';
import { syncJobs, qsoChanges, qsos } from '../db/schema/index.js';
import { eq, and, or, lt, desc } from 'drizzle-orm';
/**
* Simplified Background Job Queue Service
@@ -252,7 +252,7 @@ export async function getUserActiveJob(userId, jobType = null) {
.select()
.from(syncJobs)
.where(and(...conditions))
.orderBy(syncJobs.createdAt)
.orderBy(desc(syncJobs.createdAt))
.limit(1);
return job || null;
@@ -269,7 +269,7 @@ export async function getUserJobs(userId, limit = 10) {
.select()
.from(syncJobs)
.where(eq(syncJobs.userId, userId))
.orderBy(syncJobs.createdAt)
.orderBy(desc(syncJobs.createdAt))
.limit(limit);
return jobs.map((job) => {
@@ -342,3 +342,110 @@ export async function updateJobProgress(jobId, progressData) {
result: JSON.stringify(updatedData),
});
}
/**
* Cancel and rollback a sync job
* Deletes added QSOs and restores updated QSOs to their previous state
* @param {number} jobId - Job ID to cancel
* @param {number} userId - User ID (for security check)
* @returns {Promise<Object>} Result of cancellation
*/
export async function cancelJob(jobId, userId) {
logger.info('Cancelling job', { jobId, userId });
// Get job to verify ownership
const job = await getJob(jobId);
if (!job) {
return { success: false, error: 'Job not found' };
}
// Verify user owns this job
if (job.userId !== userId) {
return { success: false, error: 'Forbidden' };
}
// Only allow cancelling failed jobs or stale running jobs
const isStale = job.status === JobStatus.RUNNING && job.startedAt &&
(Date.now() - new Date(job.startedAt).getTime()) > 60 * 60 * 1000; // 1 hour
if (job.status === JobStatus.PENDING) {
return { success: false, error: 'Cannot cancel pending jobs' };
}
if (job.status === JobStatus.COMPLETED) {
return { success: false, error: 'Cannot cancel completed jobs' };
}
if (job.status === JobStatus.RUNNING && !isStale) {
return { success: false, error: 'Cannot cancel active jobs (only stale jobs older than 1 hour)' };
}
// Get all QSO changes for this job
const changes = await db
.select()
.from(qsoChanges)
.where(eq(qsoChanges.jobId, jobId));
let deletedAdded = 0;
let restoredUpdated = 0;
for (const change of changes) {
if (change.changeType === 'added' && change.qsoId) {
// Delete the QSO that was added
await db.delete(qsos).where(eq(qsos.id, change.qsoId));
deletedAdded++;
} else if (change.changeType === 'updated' && change.qsoId && change.beforeData) {
// Restore the QSO to its previous state
try {
const beforeData = JSON.parse(change.beforeData);
// Build update object based on job type
const updateData = {};
if (job.type === 'lotw_sync') {
if (beforeData.lotwQslRstatus !== undefined) updateData.lotwQslRstatus = beforeData.lotwQslRstatus;
if (beforeData.lotwQslRdate !== undefined) updateData.lotwQslRdate = beforeData.lotwQslRdate;
} else if (job.type === 'dcl_sync') {
if (beforeData.dclQslRstatus !== undefined) updateData.dclQslRstatus = beforeData.dclQslRstatus;
if (beforeData.dclQslRdate !== undefined) updateData.dclQslRdate = beforeData.dclQslRdate;
if (beforeData.darcDok !== undefined) updateData.darcDok = beforeData.darcDok;
if (beforeData.myDarcDok !== undefined) updateData.myDarcDok = beforeData.myDarcDok;
if (beforeData.grid !== undefined) updateData.grid = beforeData.grid;
if (beforeData.gridSource !== undefined) updateData.gridSource = beforeData.gridSource;
if (beforeData.entity !== undefined) updateData.entity = beforeData.entity;
if (beforeData.entityId !== undefined) updateData.entityId = beforeData.entityId;
}
if (Object.keys(updateData).length > 0) {
await db.update(qsos).set(updateData).where(eq(qsos.id, change.qsoId));
restoredUpdated++;
}
} catch (error) {
logger.error('Failed to restore QSO', { qsoId: change.qsoId, error: error.message });
}
}
}
// Delete all change records for this job
await db.delete(qsoChanges).where(eq(qsoChanges.jobId, jobId));
// Update job status to cancelled
await updateJob(jobId, {
status: 'cancelled',
completedAt: new Date(),
result: JSON.stringify({
cancelled: true,
deletedAdded,
restoredUpdated,
}),
});
logger.info('Job cancelled successfully', { jobId, deletedAdded, restoredUpdated });
return {
success: true,
message: `Job cancelled: ${deletedAdded} QSOs deleted, ${restoredUpdated} QSOs restored`,
deletedAdded,
restoredUpdated,
};
}

View File

@@ -1,8 +1,10 @@
import { db, logger } from '../config.js';
import { qsos } from '../db/schema/index.js';
import { max, sql, eq, and, desc } from 'drizzle-orm';
import { qsos, qsoChanges } from '../db/schema/index.js';
import { max, sql, eq, and, or, desc, like } from 'drizzle-orm';
import { updateJobProgress } from './job-queue.service.js';
import { parseADIF, normalizeBand, normalizeMode } from '../utils/adif-parser.js';
import { invalidateUserCache, getCachedStats, setCachedStats, invalidateStatsCache } from './cache.service.js';
import { trackQueryPerformance, getPerformanceSummary, resetPerformanceMetrics } from './performance.service.js';
/**
* LoTW (Logbook of the World) Service
@@ -14,6 +16,35 @@ const MAX_RETRIES = 30;
const RETRY_DELAY = 10000;
const REQUEST_TIMEOUT = 60000;
/**
* SECURITY: Sanitize search input to prevent injection and DoS
* Limits length and removes potentially harmful characters
*/
function sanitizeSearchInput(searchTerm) {
if (!searchTerm || typeof searchTerm !== 'string') {
return '';
}
// Trim whitespace
let sanitized = searchTerm.trim();
// Limit length (DoS prevention)
const MAX_SEARCH_LENGTH = 100;
if (sanitized.length > MAX_SEARCH_LENGTH) {
sanitized = sanitized.substring(0, MAX_SEARCH_LENGTH);
}
// Remove potentially dangerous SQL pattern wildcards from user input
// We'll add our own wildcards for the LIKE query
// Note: Drizzle ORM escapes parameters, but this adds defense-in-depth
sanitized = sanitized.replace(/[%_\\]/g, '');
// Remove null bytes and other control characters
sanitized = sanitized.replace(/[\x00-\x1F\x7F]/g, '');
return sanitized;
}
/**
* Check if LoTW response indicates the report is still being prepared
*/
@@ -180,7 +211,22 @@ function convertQSODatabaseFormat(adifQSO, userId) {
}
/**
* Sync QSOs from LoTW to database
* Yield to event loop to allow other requests to be processed
* This prevents blocking the server during long-running sync operations
*/
function yieldToEventLoop() {
return new Promise(resolve => setImmediate(resolve));
}
/**
* Get QSO key for duplicate detection
*/
function getQSOKey(qso) {
return `${qso.callsign}|${qso.qsoDate}|${qso.timeOn}|${qso.band}|${qso.mode}`;
}
/**
* Sync QSOs from LoTW to database (optimized with batch operations)
* @param {number} userId - User ID
* @param {string} lotwUsername - LoTW username
* @param {string} lotwPassword - LoTW password
@@ -227,86 +273,177 @@ export async function syncQSOs(userId, lotwUsername, lotwPassword, sinceDate = n
const addedQSOs = [];
const updatedQSOs = [];
for (let i = 0; i < adifQSOs.length; i++) {
const qsoData = adifQSOs[i];
// Convert all QSOs to database format
const dbQSOs = adifQSOs.map(qsoData => convertQSODatabaseFormat(qsoData, userId));
try {
const dbQSO = convertQSODatabaseFormat(qsoData, userId);
// Batch size for processing
const BATCH_SIZE = 100;
const totalBatches = Math.ceil(dbQSOs.length / BATCH_SIZE);
const existing = await db
.select()
.from(qsos)
.where(
and(
eq(qsos.userId, userId),
eq(qsos.callsign, dbQSO.callsign),
eq(qsos.qsoDate, dbQSO.qsoDate),
eq(qsos.timeOn, dbQSO.timeOn),
eq(qsos.band, dbQSO.band),
eq(qsos.mode, dbQSO.mode)
)
for (let batchNum = 0; batchNum < totalBatches; batchNum++) {
const startIdx = batchNum * BATCH_SIZE;
const endIdx = Math.min(startIdx + BATCH_SIZE, dbQSOs.length);
const batch = dbQSOs.slice(startIdx, endIdx);
// Build condition for batch duplicate check
// Get unique callsigns, dates, bands, modes from batch
const batchCallsigns = [...new Set(batch.map(q => q.callsign))];
const batchDates = [...new Set(batch.map(q => q.qsoDate))];
// Fetch all existing QSOs that could match this batch in one query
const existingQSOs = await db
.select()
.from(qsos)
.where(
and(
eq(qsos.userId, userId),
// Match callsigns OR dates from this batch
sql`(${qsos.callsign} IN ${batchCallsigns} OR ${qsos.qsoDate} IN ${batchDates})`
)
.limit(1);
);
if (existing.length > 0) {
const existingQSO = existing[0];
// Build lookup map for existing QSOs
const existingMap = new Map();
for (const existing of existingQSOs) {
const key = getQSOKey(existing);
existingMap.set(key, existing);
}
// Check if LoTW confirmation data has changed
const confirmationChanged =
existingQSO.lotwQslRstatus !== dbQSO.lotwQslRstatus ||
existingQSO.lotwQslRdate !== dbQSO.lotwQslRdate;
// Process batch
const toInsert = [];
const toUpdate = [];
const changeRecords = [];
if (confirmationChanged) {
await db
.update(qsos)
.set({
for (const dbQSO of batch) {
try {
const key = getQSOKey(dbQSO);
const existingQSO = existingMap.get(key);
if (existingQSO) {
// Check if LoTW confirmation data has changed
const confirmationChanged =
existingQSO.lotwQslRstatus !== dbQSO.lotwQslRstatus ||
existingQSO.lotwQslRdate !== dbQSO.lotwQslRdate;
if (confirmationChanged) {
toUpdate.push({
id: existingQSO.id,
lotwQslRdate: dbQSO.lotwQslRdate,
lotwQslRstatus: dbQSO.lotwQslRstatus,
lotwSyncedAt: dbQSO.lotwSyncedAt,
})
.where(eq(qsos.id, existingQSO.id));
updatedCount++;
// Track updated QSO (CALL and DATE)
updatedQSOs.push({
});
// Track change for rollback
if (jobId) {
changeRecords.push({
jobId,
qsoId: existingQSO.id,
changeType: 'updated',
beforeData: JSON.stringify({
lotwQslRstatus: existingQSO.lotwQslRstatus,
lotwQslRdate: existingQSO.lotwQslRdate,
}),
afterData: JSON.stringify({
lotwQslRstatus: dbQSO.lotwQslRstatus,
lotwQslRdate: dbQSO.lotwQslRdate,
}),
});
}
updatedQSOs.push({
id: existingQSO.id,
callsign: dbQSO.callsign,
date: dbQSO.qsoDate,
band: dbQSO.band,
mode: dbQSO.mode,
});
updatedCount++;
} else {
skippedCount++;
}
} else {
// New QSO to insert
toInsert.push(dbQSO);
addedQSOs.push({
callsign: dbQSO.callsign,
date: dbQSO.qsoDate,
band: dbQSO.band,
mode: dbQSO.mode,
});
} else {
// Skip - same data
skippedCount++;
addedCount++;
}
} else {
await db.insert(qsos).values(dbQSO);
addedCount++;
// Track added QSO (CALL and DATE)
addedQSOs.push({
callsign: dbQSO.callsign,
date: dbQSO.qsoDate,
band: dbQSO.band,
mode: dbQSO.mode,
});
} catch (error) {
logger.error('Error processing QSO in batch', { error: error.message, jobId, qso: dbQSO });
errors.push({ qso: dbQSO, error: error.message });
}
// Update job progress every 10 QSOs
if (jobId && (i + 1) % 10 === 0) {
await updateJobProgress(jobId, {
processed: i + 1,
message: `Processed ${i + 1}/${adifQSOs.length} QSOs...`,
});
}
} catch (error) {
logger.error('Error processing QSO', { error: error.message, jobId, qso: qsoData });
errors.push({ qso: qsoData, error: error.message });
}
// Batch insert new QSOs
if (toInsert.length > 0) {
const inserted = await db.insert(qsos).values(toInsert).returning();
// Track inserted QSOs with their IDs for change tracking
if (jobId) {
for (let i = 0; i < inserted.length; i++) {
changeRecords.push({
jobId,
qsoId: inserted[i].id,
changeType: 'added',
beforeData: null,
afterData: JSON.stringify({
callsign: toInsert[i].callsign,
qsoDate: toInsert[i].qsoDate,
timeOn: toInsert[i].timeOn,
band: toInsert[i].band,
mode: toInsert[i].mode,
}),
});
// Update addedQSOs with actual IDs
addedQSOs[addedCount - inserted.length + i].id = inserted[i].id;
}
}
}
// Batch update existing QSOs
if (toUpdate.length > 0) {
for (const update of toUpdate) {
await db
.update(qsos)
.set({
lotwQslRdate: update.lotwQslRdate,
lotwQslRstatus: update.lotwQslRstatus,
lotwSyncedAt: update.lotwSyncedAt,
})
.where(eq(qsos.id, update.id));
}
}
// Batch insert change records
if (changeRecords.length > 0) {
await db.insert(qsoChanges).values(changeRecords);
}
// Update job progress after each batch
if (jobId) {
await updateJobProgress(jobId, {
processed: endIdx,
message: `Processed ${endIdx}/${dbQSOs.length} QSOs...`,
});
}
// Yield to event loop after each batch to allow other requests
await yieldToEventLoop();
}
logger.info('LoTW sync completed', { total: adifQSOs.length, added: addedCount, updated: updatedCount, skipped: skippedCount, jobId });
logger.info('LoTW sync completed', { total: dbQSOs.length, added: addedCount, updated: updatedCount, skipped: skippedCount, jobId });
// Invalidate award and stats cache for this user since QSOs may have changed
const deletedCache = invalidateUserCache(userId);
invalidateStatsCache(userId);
logger.debug(`Invalidated ${deletedCache} cached award entries and stats cache for user ${userId}`);
return {
success: true,
total: adifQSOs.length,
total: dbQSOs.length,
added: addedCount,
updated: updatedCount,
skipped: skippedCount,
@@ -322,14 +459,69 @@ export async function syncQSOs(userId, lotwUsername, lotwPassword, sinceDate = n
export async function getUserQSOs(userId, filters = {}, options = {}) {
const { page = 1, limit = 100 } = options;
logger.debug('getUserQSOs called', { userId, filters, options });
const conditions = [eq(qsos.userId, userId)];
if (filters.band) conditions.push(eq(qsos.band, filters.band));
if (filters.mode) conditions.push(eq(qsos.mode, filters.mode));
if (filters.confirmed) conditions.push(eq(qsos.lotwQslRstatus, 'Y'));
const allResults = await db.select().from(qsos).where(and(...conditions));
const totalCount = allResults.length;
// Confirmation type filter: lotw, dcl, both, none
if (filters.confirmationType) {
logger.debug('Applying confirmation type filter', { confirmationType: filters.confirmationType });
if (filters.confirmationType === 'lotw') {
// LoTW only: Confirmed by LoTW but NOT by DCL
conditions.push(eq(qsos.lotwQslRstatus, 'Y'));
conditions.push(
sql`(${qsos.dclQslRstatus} IS NULL OR ${qsos.dclQslRstatus} != 'Y')`
);
} else if (filters.confirmationType === 'dcl') {
// DCL only: Confirmed by DCL but NOT by LoTW
conditions.push(eq(qsos.dclQslRstatus, 'Y'));
conditions.push(
sql`(${qsos.lotwQslRstatus} IS NULL OR ${qsos.lotwQslRstatus} != 'Y')`
);
} else if (filters.confirmationType === 'both') {
// Both confirmed: Confirmed by LoTW AND DCL
conditions.push(eq(qsos.lotwQslRstatus, 'Y'));
conditions.push(eq(qsos.dclQslRstatus, 'Y'));
} else if (filters.confirmationType === 'any') {
// Confirmed by at least 1 service: LoTW OR DCL
conditions.push(
sql`(${qsos.lotwQslRstatus} = 'Y' OR ${qsos.dclQslRstatus} = 'Y')`
);
} else if (filters.confirmationType === 'none') {
// Not confirmed: Not confirmed by LoTW AND not confirmed by DCL
conditions.push(
sql`(${qsos.lotwQslRstatus} IS NULL OR ${qsos.lotwQslRstatus} != 'Y')`
);
conditions.push(
sql`(${qsos.dclQslRstatus} IS NULL OR ${qsos.dclQslRstatus} != 'Y')`
);
}
}
// Search filter: callsign, entity, or grid
if (filters.search) {
// SECURITY: Sanitize search input to prevent injection
const sanitized = sanitizeSearchInput(filters.search);
if (sanitized) {
const searchTerm = `%${sanitized}%`;
conditions.push(or(
like(qsos.callsign, searchTerm),
like(qsos.entity, searchTerm),
like(qsos.grid, searchTerm)
));
}
}
// Use SQL COUNT for efficient pagination (avoids loading all QSOs into memory)
const [{ count }] = await db
.select({ count: sql`CAST(count(*) AS INTEGER)` })
.from(qsos)
.where(and(...conditions));
const totalCount = count;
const offset = (page - 1) * limit;
@@ -358,26 +550,40 @@ export async function getUserQSOs(userId, filters = {}, options = {}) {
* Get QSO statistics for a user
*/
export async function getQSOStats(userId) {
const allQSOs = await db.select().from(qsos).where(eq(qsos.userId, userId));
const confirmed = allQSOs.filter((q) => q.lotwQslRstatus === 'Y');
// Check cache first
const cached = getCachedStats(userId);
if (cached) {
return cached;
}
const uniqueEntities = new Set();
const uniqueBands = new Set();
const uniqueModes = new Set();
// Calculate stats from database with performance tracking
const stats = await trackQueryPerformance('getQSOStats', async () => {
const [basicStats, uniqueStats] = await Promise.all([
db.select({
total: sql`CAST(COUNT(*) AS INTEGER)`,
confirmed: sql`CAST(SUM(CASE WHEN lotw_qsl_rstatus = 'Y' OR dcl_qsl_rstatus = 'Y' THEN 1 ELSE 0 END) AS INTEGER)`
}).from(qsos).where(eq(qsos.userId, userId)),
allQSOs.forEach((q) => {
if (q.entity) uniqueEntities.add(q.entity);
if (q.band) uniqueBands.add(q.band);
if (q.mode) uniqueModes.add(q.mode);
db.select({
uniqueEntities: sql`CAST(COUNT(DISTINCT entity) AS INTEGER)`,
uniqueBands: sql`CAST(COUNT(DISTINCT band) AS INTEGER)`,
uniqueModes: sql`CAST(COUNT(DISTINCT mode) AS INTEGER)`
}).from(qsos).where(eq(qsos.userId, userId))
]);
return {
total: basicStats[0].total,
confirmed: basicStats[0].confirmed || 0,
uniqueEntities: uniqueStats[0].uniqueEntities || 0,
uniqueBands: uniqueStats[0].uniqueBands || 0,
uniqueModes: uniqueStats[0].uniqueModes || 0,
};
});
return {
total: allQSOs.length,
confirmed: confirmed.length,
uniqueEntities: uniqueEntities.size,
uniqueBands: uniqueBands.size,
uniqueModes: uniqueModes.size,
};
// Cache results
setCachedStats(userId, stats);
return stats;
}
/**
@@ -409,3 +615,18 @@ export async function deleteQSOs(userId) {
return result;
}
/**
* Get a single QSO by ID for a specific user
* @param {number} userId - User ID
* @param {number} qsoId - QSO ID
* @returns {Object|null} QSO object or null if not found
*/
export async function getQSOById(userId, qsoId) {
const result = await db
.select()
.from(qsos)
.where(and(eq(qsos.userId, userId), eq(qsos.id, qsoId)));
return result.length > 0 ? result[0] : null;
}

View File

@@ -0,0 +1,274 @@
/**
* Performance Monitoring Service
*
* Tracks query performance metrics to identify slow queries and detect regressions.
*
* Features:
* - Track individual query performance
* - Calculate averages and percentiles
* - Detect slow queries automatically
* - Provide performance statistics for monitoring
*
* Usage:
* const result = await trackQueryPerformance('getQSOStats', async () => {
* return await someExpensiveOperation();
* });
*/
// Performance metrics storage
const queryMetrics = new Map();
// Thresholds for slow queries
const SLOW_QUERY_THRESHOLD = 100; // 100ms = slow
const CRITICAL_QUERY_THRESHOLD = 500; // 500ms = critical
/**
* Track query performance and log results
* @param {string} queryName - Name of the query/operation
* @param {Function} fn - Async function to execute and track
* @returns {Promise<any>} Result of the function
*/
export async function trackQueryPerformance(queryName, fn) {
const start = performance.now();
let result;
let error = null;
try {
result = await fn();
} catch (err) {
error = err;
throw err; // Re-throw error
} finally {
const duration = performance.now() - start;
recordQueryMetric(queryName, duration, error);
// Log slow queries
if (duration > CRITICAL_QUERY_THRESHOLD) {
console.error(`🚨 CRITICAL SLOW QUERY: ${queryName} took ${duration.toFixed(2)}ms`);
} else if (duration > SLOW_QUERY_THRESHOLD) {
console.warn(`⚠️ SLOW QUERY: ${queryName} took ${duration.toFixed(2)}ms`);
} else {
console.log(`✅ Query Performance: ${queryName} - ${duration.toFixed(2)}ms`);
}
}
return result;
}
/**
* Record a query metric for later analysis
* @param {string} queryName - Name of the query
* @param {number} duration - Query duration in milliseconds
* @param {Error|null} error - Error if query failed
*/
function recordQueryMetric(queryName, duration, error = null) {
if (!queryMetrics.has(queryName)) {
queryMetrics.set(queryName, {
count: 0,
totalTime: 0,
minTime: Infinity,
maxTime: 0,
errors: 0,
durations: [] // Keep recent durations for percentile calculation
});
}
const metrics = queryMetrics.get(queryName);
metrics.count++;
metrics.totalTime += duration;
metrics.minTime = Math.min(metrics.minTime, duration);
metrics.maxTime = Math.max(metrics.maxTime, duration);
if (error) metrics.errors++;
// Keep last 100 durations for percentile calculation
metrics.durations.push(duration);
if (metrics.durations.length > 100) {
metrics.durations.shift();
}
}
/**
* Get performance statistics for a specific query or all queries
* @param {string|null} queryName - Query name or null for all queries
* @returns {object} Performance statistics
*/
export function getPerformanceStats(queryName = null) {
if (queryName) {
const metrics = queryMetrics.get(queryName);
if (!metrics) {
return null;
}
return calculateQueryStats(queryName, metrics);
}
// Get stats for all queries
const stats = {};
for (const [name, metrics] of queryMetrics.entries()) {
stats[name] = calculateQueryStats(name, metrics);
}
return stats;
}
/**
* Calculate statistics for a query
* @param {string} queryName - Name of the query
* @param {object} metrics - Raw metrics
* @returns {object} Calculated statistics
*/
function calculateQueryStats(queryName, metrics) {
const avgTime = metrics.totalTime / metrics.count;
// Calculate percentiles (P50, P95, P99)
const sorted = [...metrics.durations].sort((a, b) => a - b);
const p50 = sorted[Math.floor(sorted.length * 0.5)] || 0;
const p95 = sorted[Math.floor(sorted.length * 0.95)] || 0;
const p99 = sorted[Math.floor(sorted.length * 0.99)] || 0;
// Determine performance rating
let rating = 'EXCELLENT';
if (avgTime > CRITICAL_QUERY_THRESHOLD) {
rating = 'CRITICAL';
} else if (avgTime > SLOW_QUERY_THRESHOLD) {
rating = 'SLOW';
} else if (avgTime > 50) {
rating = 'GOOD';
}
return {
name: queryName,
count: metrics.count,
avgTime: avgTime.toFixed(2) + 'ms',
minTime: metrics.minTime.toFixed(2) + 'ms',
maxTime: metrics.maxTime.toFixed(2) + 'ms',
p50: p50.toFixed(2) + 'ms',
p95: p95.toFixed(2) + 'ms',
p99: p99.toFixed(2) + 'ms',
errors: metrics.errors,
errorRate: ((metrics.errors / metrics.count) * 100).toFixed(2) + '%',
rating
};
}
/**
* Get overall performance summary
* @returns {object} Summary of all query performance
*/
export function getPerformanceSummary() {
if (queryMetrics.size === 0) {
return {
totalQueries: 0,
totalTime: 0,
avgTime: '0ms',
slowQueries: 0,
criticalQueries: 0,
topSlowest: []
};
}
let totalQueries = 0;
let totalTime = 0;
let slowQueries = 0;
let criticalQueries = 0;
const allStats = [];
for (const [name, metrics] of queryMetrics.entries()) {
const stats = calculateQueryStats(name, metrics);
totalQueries += metrics.count;
totalTime += metrics.totalTime;
const avgTime = metrics.totalTime / metrics.count;
if (avgTime > CRITICAL_QUERY_THRESHOLD) {
criticalQueries++;
} else if (avgTime > SLOW_QUERY_THRESHOLD) {
slowQueries++;
}
allStats.push(stats);
}
// Sort by average time (slowest first)
const topSlowest = allStats
.sort((a, b) => parseFloat(b.avgTime) - parseFloat(a.avgTime))
.slice(0, 10);
return {
totalQueries,
totalTime: totalTime.toFixed(2) + 'ms',
avgTime: (totalTime / totalQueries).toFixed(2) + 'ms',
slowQueries,
criticalQueries,
topSlowest
};
}
/**
* Reset performance metrics (for testing)
*/
export function resetPerformanceMetrics() {
queryMetrics.clear();
console.log('Performance metrics cleared');
}
/**
* Get slow queries (above threshold)
* @param {number} threshold - Duration threshold in ms (default: 100ms)
* @returns {Array} Array of slow query statistics
*/
export function getSlowQueries(threshold = SLOW_QUERY_THRESHOLD) {
const slowQueries = [];
for (const [name, metrics] of queryMetrics.entries()) {
const avgTime = metrics.totalTime / metrics.count;
if (avgTime > threshold) {
slowQueries.push(calculateQueryStats(name, metrics));
}
}
// Sort by average time (slowest first)
return slowQueries.sort((a, b) => parseFloat(b.avgTime) - parseFloat(a.avgTime));
}
/**
* Performance monitoring utility for database queries
* @param {string} queryName - Name of the query
* @param {Function} queryFn - Query function to track
* @returns {Promise<any>} Query result
*/
export async function trackQuery(queryName, queryFn) {
return trackQueryPerformance(queryName, queryFn);
}
/**
* Check if performance is degrading (compares recent vs overall average)
* @param {string} queryName - Query name to check
* @param {number} windowSize - Number of recent queries to compare (default: 10)
* @returns {object} Degradation status
*/
export function checkPerformanceDegradation(queryName, windowSize = 10) {
const metrics = queryMetrics.get(queryName);
if (!metrics || metrics.durations.length < windowSize * 2) {
return {
degraded: false,
message: 'Insufficient data'
};
}
// Recent queries (last N)
const recentDurations = metrics.durations.slice(-windowSize);
const avgRecent = recentDurations.reduce((a, b) => a + b, 0) / recentDurations.length;
// Overall average
const avgOverall = metrics.totalTime / metrics.count;
// Check if recent is 2x worse than overall
const degraded = avgRecent > avgOverall * 2;
const change = ((avgRecent - avgOverall) / avgOverall * 100).toFixed(2) + '%';
return {
degraded,
avgRecent: avgRecent.toFixed(2) + 'ms',
avgOverall: avgOverall.toFixed(2) + 'ms',
change,
message: degraded ? `Performance degraded by ${change}` : 'Performance stable'
};
}

View File

@@ -84,4 +84,37 @@ export const jobsAPI = {
getStatus: (jobId) => apiRequest(`/jobs/${jobId}`),
getActive: () => apiRequest('/jobs/active'),
getRecent: (limit = 10) => apiRequest(`/jobs?limit=${limit}`),
cancel: (jobId) => apiRequest(`/jobs/${jobId}`, { method: 'DELETE' }),
};
// Admin API
export const adminAPI = {
getStats: () => apiRequest('/admin/stats'),
getUsers: () => apiRequest('/admin/users'),
getUserDetails: (userId) => apiRequest(`/admin/users/${userId}`),
updateUserRole: (userId, isAdmin) => apiRequest(`/admin/users/${userId}/role`, {
method: 'POST',
body: JSON.stringify({ isAdmin }),
}),
deleteUser: (userId) => apiRequest(`/admin/users/${userId}`, {
method: 'DELETE',
}),
impersonate: (userId) => apiRequest(`/admin/impersonate/${userId}`, {
method: 'POST',
}),
stopImpersonation: () => apiRequest('/admin/impersonate/stop', {
method: 'POST',
}),
getImpersonationStatus: () => apiRequest('/admin/impersonation/status'),
getActions: (limit = 50, offset = 0) => apiRequest(`/admin/actions?limit=${limit}&offset=${offset}`),
getMyActions: (limit = 50, offset = 0) => apiRequest(`/admin/actions/my?limit=${limit}&offset=${offset}`),
};

View File

@@ -0,0 +1,192 @@
/**
* Frontend Logger
*
* Sends logs to backend endpoint which writes to logs/frontend.log
* Respects LOG_LEVEL environment variable from backend
*
* Usage:
* import { logger } from '$lib/logger';
* logger.info('User logged in', { userId: 123 });
* logger.error('Failed to fetch data', { error: err.message });
*/
// Log levels matching backend
const LOG_LEVELS = { debug: 0, info: 1, warn: 2, error: 3 };
// Get log level from backend or default to info
let currentLogLevel = LOG_LEVELS.info;
// Buffer for batching logs (sends when buffer reaches this size or after timeout)
const logBuffer = [];
const BUFFER_SIZE = 10;
const BUFFER_TIMEOUT = 5000; // 5 seconds
let bufferTimeout = null;
// Fetch current log level from backend on initialization
async function fetchLogLevel() {
try {
// Try to get log level from health endpoint or localStorage
const response = await fetch('/api/health');
if (response.ok) {
// For now, we'll assume the backend doesn't expose log level in health
// Could add it later. Default to info in production, debug in development
const isDev = import.meta.env.DEV;
currentLogLevel = isDev ? LOG_LEVELS.debug : LOG_LEVELS.info;
}
} catch (err) {
// Default to info if can't fetch
currentLogLevel = LOG_LEVELS.info;
}
}
// Initialize log level
fetchLogLevel();
/**
* Send logs to backend
*/
async function sendLogs(entries) {
try {
await fetch('/api/logs', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
},
credentials: 'include', // Include cookies for authentication
body: JSON.stringify(entries),
});
} catch (err) {
// Silent fail - don't break the app if logging fails
console.error('Failed to send logs to backend:', err);
}
}
/**
* Flush log buffer
*/
function flushBuffer() {
if (logBuffer.length === 0) return;
const entries = [...logBuffer];
logBuffer.length = 0; // Clear buffer
if (bufferTimeout) {
clearTimeout(bufferTimeout);
bufferTimeout = null;
}
sendLogs(entries);
}
/**
* Add log entry to buffer
*/
function addToBuffer(level, message, data) {
// Check if we should log this level
if (LOG_LEVELS[level] < currentLogLevel) return;
logBuffer.push({
level,
message,
data: data || undefined,
timestamp: new Date().toISOString(),
});
// Flush if buffer is full
if (logBuffer.length >= BUFFER_SIZE) {
flushBuffer();
} else {
// Set timeout to flush after BUFFER_TIMEOUT
if (bufferTimeout) {
clearTimeout(bufferTimeout);
}
bufferTimeout = setTimeout(flushBuffer, BUFFER_TIMEOUT);
}
}
/**
* Logger API
*/
export const logger = {
/**
* Log debug message
*/
debug: (message, data) => {
if (import.meta.env.DEV) {
console.debug('[DEBUG]', message, data || '');
}
addToBuffer('debug', message, data);
},
/**
* Log info message
*/
info: (message, data) => {
if (import.meta.env.DEV) {
console.info('[INFO]', message, data || '');
}
addToBuffer('info', message, data);
},
/**
* Log warning message
*/
warn: (message, data) => {
if (import.meta.env.DEV) {
console.warn('[WARN]', message, data || '');
}
addToBuffer('warn', message, data);
},
/**
* Log error message
*/
error: (message, data) => {
if (import.meta.env.DEV) {
console.error('[ERROR]', message, data || '');
}
addToBuffer('error', message, data);
},
/**
* Immediately flush the log buffer
*/
flush: flushBuffer,
/**
* Set the log level (for testing purposes)
*/
setLogLevel: (level) => {
if (LOG_LEVELS[level] !== undefined) {
currentLogLevel = LOG_LEVELS[level];
}
},
};
/**
* SvelteKit action for automatic error logging
* Can be used in +page.svelte or +layout.svelte
*/
export function setupErrorLogging() {
// Log unhandled errors
if (typeof window !== 'undefined') {
window.addEventListener('error', (event) => {
logger.error('Unhandled error', {
message: event.message,
filename: event.filename,
lineno: event.lineno,
colno: event.colno,
error: event.error?.stack,
});
});
window.addEventListener('unhandledrejection', (event) => {
logger.error('Unhandled promise rejection', {
reason: event.reason,
promise: event.promise?.toString(),
});
});
}
}
export default logger;

View File

@@ -27,6 +27,9 @@
<a href="/awards" class="nav-link">Awards</a>
<a href="/qsos" class="nav-link">QSOs</a>
<a href="/settings" class="nav-link">Settings</a>
{#if $auth.user?.isAdmin}
<a href="/admin" class="nav-link admin-link">Admin</a>
{/if}
<button on:click={handleLogout} class="nav-link logout-btn">Logout</button>
</div>
</div>
@@ -119,6 +122,16 @@
background-color: rgba(255, 107, 107, 0.1);
}
.admin-link {
background-color: #ffc107;
color: #000;
font-weight: 600;
}
.admin-link:hover {
background-color: #e0a800;
}
main {
flex: 1;
padding: 2rem 1rem;

View File

@@ -1,14 +1,161 @@
<script>
import { onMount } from 'svelte';
import { onMount, onDestroy, tick } from 'svelte';
import { auth } from '$lib/stores.js';
import { jobsAPI } from '$lib/api.js';
import { browser } from '$app/environment';
onMount(() => {
let jobs = [];
let loading = true;
let cancellingJobs = new Map(); // Track cancelling state per job
let pollingInterval = null;
async function loadJobs() {
try {
const response = await jobsAPI.getRecent(5);
jobs = response.jobs || [];
// Check if we need to update polling state
await tick();
updatePollingState();
} catch (error) {
console.error('Failed to load jobs:', error);
}
}
function hasActiveJobs() {
return jobs.some(job => job.status === 'pending' || job.status === 'running');
}
function updatePollingState() {
if (hasActiveJobs()) {
startPolling();
} else {
stopPolling();
}
}
function startPolling() {
if (pollingInterval) return; // Already polling
pollingInterval = setInterval(async () => {
await loadJobs();
}, 2000); // Poll every 2 seconds
}
function stopPolling() {
if (pollingInterval) {
clearInterval(pollingInterval);
pollingInterval = null;
}
}
onMount(async () => {
// Load user profile on mount if we have a token
if (browser) {
auth.loadProfile();
}
// Load recent jobs if authenticated
if ($auth.user) {
await loadJobs();
loading = false;
}
});
onDestroy(() => {
stopPolling();
});
async function cancelJob(jobId) {
if (!confirm('Are you sure you want to cancel this job? This will rollback all changes made by this sync.')) {
return;
}
cancellingJobs.set(jobId, true);
try {
const result = await jobsAPI.cancel(jobId);
alert(result.message || 'Job cancelled successfully');
// Reload jobs to show updated status
await loadJobs();
} catch (error) {
alert('Failed to cancel job: ' + error.message);
} finally {
cancellingJobs.delete(jobId);
}
}
function canCancelJob(job) {
// Only allow cancelling failed jobs or stale running jobs
if (job.status === 'failed') {
return true;
}
// Allow cancelling stale running jobs (>1 hour)
if (job.status === 'running' && job.startedAt) {
const started = new Date(job.startedAt);
const now = new Date();
const hoursSinceStart = (now - started) / (1000 * 60 * 60);
return hoursSinceStart > 1;
}
return false;
}
function isJobStale(job) {
return job.status === 'running' && job.startedAt &&
(new Date() - new Date(job.startedAt)) > (1000 * 60 * 60);
}
function getJobIcon(type) {
return type === 'lotw_sync' ? '📡' : '🛰️';
}
function getJobLabel(type) {
return type === 'lotw_sync' ? 'LoTW Sync' : 'DCL Sync';
}
function getStatusBadge(status) {
const styles = {
pending: 'bg-yellow-100 text-yellow-800',
running: 'bg-blue-100 text-blue-800',
completed: 'bg-green-100 text-green-800',
failed: 'bg-red-100 text-red-800',
cancelled: 'bg-purple-100 text-purple-800',
};
return styles[status] || 'bg-gray-100 text-gray-800';
}
function formatTime(timestamp) {
if (!timestamp) return '-';
const date = new Date(timestamp);
return date.toLocaleTimeString([], { hour: '2-digit', minute: '2-digit' });
}
function formatDate(timestamp) {
if (!timestamp) return '-';
const date = new Date(timestamp);
const now = new Date();
const diffMs = now - date;
const diffMins = Math.floor(diffMs / 60000);
const diffHours = Math.floor(diffMs / 3600000);
const diffDays = Math.floor(diffMs / 86400000);
if (diffMins < 1) return 'Just now';
if (diffMins < 60) return `${diffMins}m ago`;
if (diffHours < 24) return `${diffHours}h ago`;
if (diffDays < 7) return `${diffDays}d ago`;
return date.toLocaleDateString();
}
function getDuration(job) {
if (!job.startedAt || !job.completedAt) return null;
const diff = new Date(job.completedAt) - new Date(job.startedAt);
const seconds = Math.floor(diff / 1000);
if (seconds < 60) return `${seconds}s`;
const minutes = Math.floor(seconds / 60);
return `${minutes}m ${seconds % 60}s`;
}
</script>
<div class="container">
@@ -40,6 +187,99 @@
</div>
</div>
<!-- Recent Sync Jobs -->
<div class="jobs-section">
<h2 class="section-title">🔄 Recent Sync Jobs</h2>
{#if loading}
<div class="loading-state">Loading jobs...</div>
{:else if jobs.length === 0}
<div class="empty-state">
<p>No sync jobs yet. Sync your QSOs from LoTW or DCL to get started!</p>
<div class="empty-actions">
<a href="/settings" class="btn btn-secondary">Configure Credentials</a>
<a href="/qsos" class="btn btn-primary">Sync QSOs</a>
</div>
</div>
{:else}
<div class="jobs-list">
{#each jobs as job (job.id)}
<div class="job-card" class:failed={job.status === 'failed'}>
<div class="job-header">
<div class="job-title">
<span class="job-icon">{getJobIcon(job.type)}</span>
<span class="job-name">{getJobLabel(job.type)}</span>
<span class="job-id">#{job.id}</span>
</div>
<span class="status-badge {getStatusBadge(job.status)}">
{job.status}
</span>
</div>
<div class="job-meta">
<span class="job-date" title={new Date(job.createdAt).toLocaleString()}>
{formatDate(job.createdAt)}
</span>
{#if job.startedAt}
<span class="job-time">{formatTime(job.startedAt)}</span>
{/if}
{#if getDuration(job)}
<span class="job-duration">({getDuration(job)})</span>
{/if}
</div>
{#if job.status === 'failed' && job.error}
<div class="job-error">
{job.error}
</div>
{:else if job.result}
<div class="job-stats">
{#if job.result.total !== undefined}
<span class="stat-item">
<strong>{job.result.total}</strong> total
</span>
{/if}
{#if job.result.added !== undefined && job.result.added > 0}
<span class="stat-item stat-added">
+{job.result.added} added
</span>
{/if}
{#if job.result.updated !== undefined && job.result.updated > 0}
<span class="stat-item stat-updated">
~{job.result.updated} updated
</span>
{/if}
{#if job.result.skipped !== undefined && job.result.skipped > 0}
<span class="stat-item stat-skipped">
{job.result.skipped} skipped
</span>
{/if}
</div>
{:else if job.status === 'running' || job.status === 'pending'}
<div class="job-progress">
<span class="progress-text">
{job.status === 'pending' ? 'Waiting to start...' : isJobStale(job) ? 'Stale - no progress for over 1 hour' : 'Processing...'}
</span>
</div>
{/if}
<!-- Cancel button for eligible jobs -->
{#if canCancelJob(job)}
<div class="job-actions">
<button
class="btn-cancel"
disabled={cancellingJobs.get(job.id)}
on:click|stopPropagation={() => cancelJob(job.id)}
>
{cancellingJobs.get(job.id) ? 'Cancelling...' : 'Cancel & Rollback'}
</button>
</div>
{/if}
</div>
{/each}
</div>
{/if}
</div>
<div class="info-box">
<h3>Getting Started</h3>
<ol>
@@ -191,4 +431,232 @@
color: #666;
line-height: 1.8;
}
/* Jobs Section */
.jobs-section {
margin-bottom: 2rem;
}
.section-title {
font-size: 1.5rem;
color: #333;
margin-bottom: 1rem;
}
.loading-state,
.empty-state {
background: white;
border: 1px solid #e0e0e0;
border-radius: 8px;
padding: 2rem;
text-align: center;
color: #666;
}
.empty-actions {
display: flex;
gap: 1rem;
justify-content: center;
margin-top: 1.5rem;
flex-wrap: wrap;
}
.jobs-list {
display: flex;
flex-direction: column;
gap: 1rem;
}
.job-card {
background: white;
border: 1px solid #e0e0e0;
border-radius: 8px;
padding: 1rem 1.25rem;
box-shadow: 0 1px 3px rgba(0, 0, 0, 0.1);
transition: box-shadow 0.2s;
}
.job-card:hover {
box-shadow: 0 2px 8px rgba(0, 0, 0, 0.12);
}
.job-card.failed {
border-left: 4px solid #dc3545;
}
.job-header {
display: flex;
justify-content: space-between;
align-items: center;
margin-bottom: 0.5rem;
}
.job-title {
display: flex;
align-items: center;
gap: 0.5rem;
}
.job-icon {
font-size: 1.5rem;
}
.job-name {
font-weight: 600;
color: #333;
font-size: 1.1rem;
}
.job-id {
font-size: 0.85rem;
color: #999;
font-family: monospace;
}
.status-badge {
padding: 0.25rem 0.75rem;
border-radius: 12px;
font-size: 0.85rem;
font-weight: 500;
text-transform: capitalize;
}
.bg-yellow-100 {
background-color: #fef3c7;
}
.bg-blue-100 {
background-color: #dbeafe;
}
.bg-green-100 {
background-color: #d1fae5;
}
.bg-red-100 {
background-color: #fee2e2;
}
.text-yellow-800 {
color: #92400e;
}
.text-blue-800 {
color: #1e40af;
}
.text-green-800 {
color: #065f46;
}
.text-red-800 {
color: #991b1b;
}
.bg-purple-100 {
background-color: #f3e8ff;
}
.text-purple-800 {
color: #6b21a8;
}
.job-meta {
display: flex;
gap: 0.75rem;
font-size: 0.9rem;
color: #666;
margin-bottom: 0.5rem;
flex-wrap: wrap;
}
.job-date {
font-weight: 500;
}
.job-time,
.job-duration {
color: #999;
}
.job-error {
background: #fee2e2;
color: #991b1b;
padding: 0.75rem;
border-radius: 4px;
font-size: 0.95rem;
margin-top: 0.5rem;
}
.job-stats {
display: flex;
gap: 1rem;
flex-wrap: wrap;
margin-top: 0.5rem;
}
.stat-item {
font-size: 0.9rem;
color: #666;
padding: 0.25rem 0.5rem;
background: #f8f9fa;
border-radius: 4px;
}
.stat-item strong {
color: #333;
}
.stat-added {
color: #065f46;
background: #d1fae5;
}
.stat-updated {
color: #1e40af;
background: #dbeafe;
}
.stat-skipped {
color: #92400e;
background: #fef3c7;
}
.job-progress {
margin-top: 0.5rem;
}
.progress-text {
color: #1e40af;
font-size: 0.9rem;
font-style: italic;
}
.job-actions {
margin-top: 0.75rem;
display: flex;
justify-content: flex-end;
}
.btn-cancel {
padding: 0.4rem 0.8rem;
font-size: 0.85rem;
border: 1px solid #dc3545;
background: white;
color: #dc3545;
border-radius: 4px;
cursor: pointer;
transition: all 0.2s;
font-weight: 500;
}
.btn-cancel:hover:not(:disabled) {
background: #dc3545;
color: white;
}
.btn-cancel:disabled {
opacity: 0.6;
cursor: not-allowed;
}
</style>

File diff suppressed because it is too large Load Diff

View File

@@ -2,7 +2,10 @@
import { onMount } from 'svelte';
import { auth } from '$lib/stores.js';
let allAwards = [];
let awards = [];
let categories = [];
let selectedCategory = 'all';
let loading = true;
let error = null;
@@ -16,69 +19,94 @@
error = null;
// Get awards from API
const response = await fetch('/api/awards', {
const awardsResponse = await fetch('/api/awards', {
headers: {
'Authorization': `Bearer ${$auth.token}`,
},
});
if (!response.ok) {
if (!awardsResponse.ok) {
throw new Error('Failed to load awards');
}
const data = await response.json();
const awardsData = await awardsResponse.json();
if (!data.success) {
throw new Error(data.error || 'Failed to load awards');
if (!awardsData.success) {
throw new Error(awardsData.error || 'Failed to load awards');
}
// Load progress for each award
awards = await Promise.all(
data.awards.map(async (award) => {
try {
const progressResponse = await fetch(`/api/awards/${award.id}/progress`, {
headers: {
'Authorization': `Bearer ${$auth.token}`,
},
});
// Get progress for all awards in a single batch request (fixes N+1 problem)
const progressResponse = await fetch('/api/awards/batch/progress', {
headers: {
'Authorization': `Bearer ${$auth.token}`,
},
});
if (progressResponse.ok) {
const progressData = await progressResponse.json();
if (progressData.success) {
return {
...award,
progress: progressData,
};
}
}
} catch (e) {
console.error(`Failed to load progress for ${award.id}:`, e);
}
let progressMap = {};
if (progressResponse.ok) {
const progressData = await progressResponse.json();
if (progressData.success && progressData.awards) {
// Create a map of awardId -> progress for quick lookup
progressMap = Object.fromEntries(
progressData.awards.map(p => [p.awardId, p])
);
}
}
// Return award without progress if it failed
return {
...award,
progress: {
worked: 0,
confirmed: 0,
target: award.rules?.target || 0,
percentage: 0,
},
};
})
);
// Combine awards with their progress
allAwards = awardsData.awards.map(award => ({
...award,
progress: progressMap[award.id] || {
worked: 0,
confirmed: 0,
target: award.rules?.target || 0,
percentage: 0,
},
}));
// Extract unique categories
categories = ['all', ...new Set(allAwards.map(a => a.category).filter(Boolean))];
// Apply filter
applyFilter();
} catch (e) {
error = e.message;
} finally {
loading = false;
}
}
function applyFilter() {
if (selectedCategory === 'all') {
awards = allAwards;
} else {
awards = allAwards.filter(award => award.category === selectedCategory);
}
}
function onCategoryChange(event) {
selectedCategory = event.target.value;
applyFilter();
}
</script>
<div class="container">
<h1>Awards</h1>
<p class="subtitle">Track your ham radio award progress</p>
{#if !loading && awards.length > 0}
<div class="filters">
<div class="filter-group">
<label for="category-filter">Category:</label>
<select id="category-filter" value={selectedCategory} on:change={onCategoryChange}>
{#each categories as category}
<option value={category}>{category === 'all' ? 'All Awards' : category.toUpperCase()}</option>
{/each}
</select>
</div>
</div>
{/if}
{#if loading}
<div class="loading">Loading awards...</div>
{:else if error}
@@ -162,6 +190,45 @@
margin-bottom: 2rem;
}
.filters {
display: flex;
gap: 1rem;
margin-bottom: 2rem;
align-items: center;
}
.filter-group {
display: flex;
align-items: center;
gap: 0.5rem;
}
.filter-group label {
font-weight: 600;
color: #333;
font-size: 0.9rem;
}
.filter-group select {
padding: 0.5rem 1rem;
border: 1px solid #ccc;
border-radius: 4px;
background-color: white;
font-size: 0.9rem;
cursor: pointer;
min-width: 150px;
}
.filter-group select:hover {
border-color: #4a90e2;
}
.filter-group select:focus {
outline: none;
border-color: #4a90e2;
box-shadow: 0 0 0 2px rgba(74, 144, 226, 0.2);
}
.loading,
.error,
.empty {

View File

@@ -7,10 +7,14 @@
let entities = [];
let loading = true;
let error = null;
let sort = 'name'; // name
let groupedData = [];
let bands = [];
// QSO detail modal state
let selectedQSO = null;
let showQSODetailModal = false;
let loadingQSO = false;
onMount(async () => {
await loadAwardData();
});
@@ -79,8 +83,10 @@
// Add QSO info to this band
entityData.bands.get(entity.band).push({
qsoId: entity.qsoId,
callsign: entity.callsign,
mode: entity.mode,
band: entity.band,
confirmed: entity.confirmed,
qsoDate: entity.qsoDate,
});
@@ -126,8 +132,10 @@
}
entityData.bands.get(entity.band).push({
qsoId: entity.qsoId,
callsign: entity.callsign,
mode: entity.mode,
band: entity.band,
confirmed: entity.confirmed,
qsoDate: entity.qsoDate,
});
@@ -155,6 +163,93 @@
$: if (entities.length > 0) {
applyFilter();
}
// Calculate band sums
$: bandSums = (() => {
const sums = new Map();
const hasPoints = entities.length > 0 && entities[0].points !== undefined;
bands.forEach(band => {
if (hasPoints) {
// Sum points for confirmed QSOs in this band
const sum = entities
.filter(e => e.band === band && e.confirmed)
.reduce((total, e) => total + (e.points || 0), 0);
sums.set(band, sum);
} else {
// Count confirmed QSOs in this band
const count = entities.filter(e => e.band === band && e.confirmed).length;
sums.set(band, count);
}
});
return sums;
})();
// QSO Detail Modal Functions
async function openQSODetailModal(qso) {
loadingQSO = true;
showQSODetailModal = true;
selectedQSO = null;
try {
// Fetch full QSO details by ID
const response = await fetch(`/api/qsos/${qso.qsoId}`, {
headers: {
'Authorization': `Bearer ${$auth.token}`,
},
});
if (!response.ok) {
throw new Error('Failed to fetch QSO details');
}
const data = await response.json();
if (!data.success) {
throw new Error(data.error || 'Failed to fetch QSO details');
}
if (data.qso) {
selectedQSO = data.qso;
} else {
throw new Error('QSO not found');
}
} catch (err) {
console.error('Failed to load QSO details:', err);
alert('Failed to load QSO details: ' + err.message);
showQSODetailModal = false;
} finally {
loadingQSO = false;
}
}
function closeQSODetailModal() {
selectedQSO = null;
showQSODetailModal = false;
}
function formatDate(dateStr) {
if (!dateStr) return '-';
// ADIF format: YYYYMMDD
const year = dateStr.substring(0, 4);
const month = dateStr.substring(4, 6);
const day = dateStr.substring(6, 8);
return `${day}/${month}/${year}`;
}
function formatTime(timeStr) {
if (!timeStr) return '-';
// ADIF format: HHMMSS or HHMM
return timeStr.substring(0, 2) + ':' + timeStr.substring(2, 4);
}
function getConfirmationStatus(status) {
if (status === 'Y') return { label: 'Confirmed', class: 'confirmed' };
if (status === 'N') return { label: 'Not Confirmed', class: 'not-confirmed' };
if (status === '?') return { label: 'Unknown', class: 'unknown' };
return { label: 'No Data', class: 'no-data' };
}
</script>
<div class="container">
@@ -175,15 +270,6 @@
<a href="/awards" class="back-link">← Back to Awards</a>
</div>
<div class="controls">
<div class="sort-group">
<label>Sort by:</label>
<select bind:value={sort}>
<option value="name">Name</option>
</select>
</div>
</div>
<div class="summary">
{#if entities.length > 0 && entities[0].points !== undefined}
{@const earnedPoints = entities.reduce((sum, e) => sum + (e.confirmed ? e.points : 0), 0)}
@@ -257,7 +343,13 @@
{#if qsos.length > 0}
<div class="qso-list">
{#each qsos as qso}
<div class="qso-entry {qso.confirmed ? 'qso-confirmed' : 'qso-worked'}">
<div
class="qso-entry {qso.confirmed ? 'qso-confirmed' : 'qso-worked'}"
on:click={() => openQSODetailModal(qso)}
on:keydown={(e) => e.key === 'Enter' && openQSODetailModal(qso)}
role="button"
tabindex="0"
>
<span class="callsign">{qso.callsign}</span>
<span class="mode">{qso.mode}</span>
</div>
@@ -271,12 +363,207 @@
</tr>
{/each}
</tbody>
<tfoot>
<tr class="sum-row">
<td class="sum-label">
<strong>Sum</strong>
</td>
{#each bands as band}
{@const sum = bandSums.get(band) ?? 0}
<td class="sum-cell">
<strong>{sum}</strong>
</td>
{/each}
</tr>
</tfoot>
</table>
{/if}
</div>
{/if}
</div>
<!-- QSO Detail Modal -->
{#if showQSODetailModal && selectedQSO}
<div class="modal-backdrop" on:click={closeQSODetailModal} on:keydown={(e) => e.key === 'Escape' && closeQSODetailModal()} role="dialog" aria-modal="true">
<div class="modal-content" on:click|stopPropagation>
<div class="modal-header">
<h2>QSO Details</h2>
<button class="modal-close" on:click={closeQSODetailModal} aria-label="Close modal">×</button>
</div>
<div class="modal-body">
{#if loadingQSO}
<div class="loading-modal">Loading QSO details...</div>
{:else}
<div class="qso-detail-section">
<h3>Basic Information</h3>
<div class="detail-grid">
<div class="detail-item">
<span class="detail-label">Callsign:</span>
<span class="detail-value">{selectedQSO.callsign}</span>
</div>
<div class="detail-item">
<span class="detail-label">Date:</span>
<span class="detail-value">{formatDate(selectedQSO.qsoDate)}</span>
</div>
<div class="detail-item">
<span class="detail-label">Time:</span>
<span class="detail-value">{formatTime(selectedQSO.timeOn)}</span>
</div>
<div class="detail-item">
<span class="detail-label">Band:</span>
<span class="detail-value">{selectedQSO.band || '-'}</span>
</div>
<div class="detail-item">
<span class="detail-label">Mode:</span>
<span class="detail-value">{selectedQSO.mode || '-'}</span>
</div>
</div>
</div>
<div class="qso-detail-section">
<h3>Location</h3>
<div class="detail-grid">
<div class="detail-item">
<span class="detail-label">Entity:</span>
<span class="detail-value">{selectedQSO.entity || '-'}</span>
</div>
{#if selectedQSO.entityId}
<div class="detail-item">
<span class="detail-label">DXCC ID:</span>
<span class="detail-value">{selectedQSO.entityId}</span>
</div>
{/if}
<div class="detail-item">
<span class="detail-label">Grid Square:</span>
<span class="detail-value">{selectedQSO.grid || '-'}</span>
</div>
{#if selectedQSO.gridSource}
<div class="detail-item">
<span class="detail-label">Grid Source:</span>
<span class="detail-value">{selectedQSO.gridSource}</span>
</div>
{/if}
<div class="detail-item">
<span class="detail-label">Continent:</span>
<span class="detail-value">{selectedQSO.continent || '-'}</span>
</div>
{#if selectedQSO.cqZone}
<div class="detail-item">
<span class="detail-label">CQ Zone:</span>
<span class="detail-value">{selectedQSO.cqZone}</span>
</div>
{/if}
{#if selectedQSO.ituZone}
<div class="detail-item">
<span class="detail-label">ITU Zone:</span>
<span class="detail-value">{selectedQSO.ituZone}</span>
</div>
{/if}
{#if selectedQSO.state}
<div class="detail-item">
<span class="detail-label">State:</span>
<span class="detail-value">{selectedQSO.state}</span>
</div>
{/if}
{#if selectedQSO.county}
<div class="detail-item">
<span class="detail-label">County:</span>
<span class="detail-value">{selectedQSO.county}</span>
</div>
{/if}
</div>
</div>
{#if selectedQSO.satName}
<div class="qso-detail-section">
<h3>Satellite</h3>
<div class="detail-grid">
<div class="detail-item">
<span class="detail-label">Satellite:</span>
<span class="detail-value">{selectedQSO.satName}</span>
</div>
{#if selectedQSO.satMode}
<div class="detail-item">
<span class="detail-label">Mode:</span>
<span class="detail-value">{selectedQSO.satMode}</span>
</div>
{/if}
</div>
</div>
{/if}
<div class="qso-detail-section">
<h3>DOK Information</h3>
<div class="detail-grid">
<div class="detail-item">
<span class="detail-label">Partner's DOK:</span>
<span class="detail-value">{selectedQSO.darcDok || '-'}</span>
</div>
<div class="detail-item">
<span class="detail-label">My DOK:</span>
<span class="detail-value">{selectedQSO.myDarcDok || '-'}</span>
</div>
</div>
</div>
<div class="qso-detail-section">
<h3>Confirmation Status</h3>
<div class="confirmation-details">
<div class="confirmation-service">
<h4>LoTW</h4>
<div class="confirmation-status-item">
<span class="detail-label">Status:</span>
<span class="status-badge {getConfirmationStatus(selectedQSO.lotwQslRstatus).class}">
{getConfirmationStatus(selectedQSO.lotwQslRstatus).label}
</span>
</div>
{#if selectedQSO.lotwQslRdate}
<div class="confirmation-status-item">
<span class="detail-label">Confirmed:</span>
<span class="detail-value">{formatDate(selectedQSO.lotwQslRdate)}</span>
</div>
{/if}
{#if selectedQSO.lotwSyncedAt}
<div class="confirmation-status-item">
<span class="detail-label">Last Synced:</span>
<span class="detail-value">{new Date(selectedQSO.lotwSyncedAt).toLocaleString()}</span>
</div>
{/if}
</div>
<div class="confirmation-service">
<h4>DCL</h4>
<div class="confirmation-status-item">
<span class="detail-label">Status:</span>
<span class="status-badge {getConfirmationStatus(selectedQSO.dclQslRstatus).class}">
{getConfirmationStatus(selectedQSO.dclQslRstatus).label}
</span>
</div>
{#if selectedQSO.dclQslRdate}
<div class="confirmation-status-item">
<span class="detail-label">Confirmed:</span>
<span class="detail-value">{formatDate(selectedQSO.dclQslRdate)}</span>
</div>
{/if}
</div>
</div>
</div>
<div class="qso-detail-section meta-info">
<span class="meta-label">QSO ID:</span>
<span class="meta-value">{selectedQSO.id}</span>
{#if selectedQSO.createdAt}
<span class="meta-label">Created:</span>
<span class="meta-value">{new Date(selectedQSO.createdAt).toLocaleString()}</span>
{/if}
</div>
{/if}
</div>
</div>
</div>
{/if}
<style>
.container {
max-width: 1200px;
@@ -307,6 +594,28 @@
padding: 0px 3px 0px 3px;
}
.award-table tfoot {
background-color: #f5f5f5;
border-top: 2px solid #333;
}
.sum-row td {
border: 1px solid #000;
padding: 8px 5px;
text-align: center;
background-color: #f5f5f5;
font-weight: 600;
}
.sum-label {
text-align: left !important;
color: #333;
}
.sum-cell {
color: #4a90e2;
}
.award-header {
margin-bottom: 2rem;
}
@@ -345,34 +654,6 @@
text-decoration: underline;
}
.controls {
display: flex;
gap: 2rem;
margin-bottom: 2rem;
flex-wrap: wrap;
}
.filter-group,
.sort-group {
display: flex;
align-items: center;
gap: 0.5rem;
}
label {
font-weight: 600;
color: #333;
}
select {
padding: 0.5rem;
border: 1px solid #ddd;
border-radius: 4px;
background-color: white;
font-size: 1rem;
cursor: pointer;
}
.summary {
display: grid;
grid-template-columns: repeat(auto-fit, minmax(150px, 1fr));
@@ -536,4 +817,224 @@
.btn:hover {
background-color: #357abd;
}
/* QSO Detail Modal Styles */
.qso-entry {
cursor: pointer;
transition: background-color 0.2s;
padding: 0.25rem 0.5rem;
border-radius: 4px;
}
.qso-entry:hover {
background-color: #f0f7ff;
}
.qso-entry:focus {
outline: 2px solid #4a90e2;
outline-offset: -2px;
}
/* Modal Backdrop */
.modal-backdrop {
position: fixed;
top: 0;
left: 0;
right: 0;
bottom: 0;
background-color: rgba(0, 0, 0, 0.5);
display: flex;
align-items: center;
justify-content: center;
z-index: 1000;
padding: 1rem;
}
/* Modal Content */
.modal-content {
background: white;
border-radius: 8px;
max-width: 700px;
width: 100%;
max-height: 90vh;
overflow-y: auto;
box-shadow: 0 4px 20px rgba(0, 0, 0, 0.15);
}
/* Modal Header */
.modal-header {
display: flex;
justify-content: space-between;
align-items: center;
padding: 1.5rem;
border-bottom: 1px solid #e0e0e0;
}
.modal-header h2 {
margin: 0;
font-size: 1.5rem;
color: #333;
}
.modal-close {
background: none;
border: none;
font-size: 2rem;
line-height: 1;
cursor: pointer;
padding: 0;
width: 2rem;
height: 2rem;
display: flex;
align-items: center;
justify-content: center;
color: #666;
border-radius: 4px;
transition: all 0.2s;
}
.modal-close:hover {
background-color: #f0f0f0;
color: #333;
}
/* Modal Body */
.modal-body {
padding: 1.5rem;
}
/* Loading State */
.loading-modal {
text-align: center;
padding: 3rem;
color: #666;
}
/* Detail Sections */
.qso-detail-section {
margin-bottom: 2rem;
}
.qso-detail-section:last-child {
margin-bottom: 0;
}
.qso-detail-section h3 {
font-size: 1.1rem;
color: #4a90e2;
margin: 0 0 1rem 0;
padding-bottom: 0.5rem;
border-bottom: 2px solid #e0e0e0;
}
/* Detail Grid */
.detail-grid {
display: grid;
grid-template-columns: repeat(auto-fit, minmax(200px, 1fr));
gap: 1rem;
}
.detail-item {
display: flex;
flex-direction: column;
gap: 0.25rem;
}
.detail-label {
font-size: 0.75rem;
color: #666;
text-transform: uppercase;
font-weight: 600;
}
.detail-value {
font-size: 0.95rem;
color: #333;
font-weight: 500;
}
/* Confirmation Details */
.confirmation-details {
display: grid;
grid-template-columns: repeat(auto-fit, minmax(250px, 1fr));
gap: 2rem;
}
.confirmation-service h4 {
font-size: 1rem;
color: #333;
margin: 0 0 1rem 0;
padding-bottom: 0.5rem;
border-bottom: 1px solid #e0e0e0;
}
.confirmation-status-item {
display: flex;
justify-content: space-between;
align-items: center;
margin-bottom: 0.75rem;
}
/* Status Badges */
.status-badge {
padding: 0.25rem 0.75rem;
border-radius: 12px;
font-size: 0.75rem;
font-weight: 600;
text-transform: uppercase;
}
.status-badge.confirmed {
background-color: #4a90e2;
color: white;
}
.status-badge.not-confirmed,
.status-badge.no-data {
background-color: #e0e0e0;
color: #666;
}
.status-badge.unknown {
background-color: #fff3cd;
color: #856404;
}
/* Meta Info */
.meta-info {
display: flex;
gap: 2rem;
padding: 1rem;
background-color: #f8f9fa;
border-radius: 4px;
font-size: 0.8rem;
}
.meta-label {
color: #666;
font-weight: 600;
}
.meta-value {
color: #333;
font-family: monospace;
}
/* Scrollbar Styling for Modal */
.modal-content::-webkit-scrollbar {
width: 8px;
}
.modal-content::-webkit-scrollbar-track {
background: #f1f1f1;
}
.modal-content::-webkit-scrollbar-thumb {
background: #888;
border-radius: 4px;
}
.modal-content::-webkit-scrollbar-thumb:hover {
background: #555;
}
</style>

View File

@@ -2,6 +2,8 @@
import { onMount, onDestroy } from 'svelte';
import { qsosAPI, jobsAPI } from '$lib/api.js';
import { auth } from '$lib/stores.js';
import QSOStats from './components/QSOStats.svelte';
import SyncButton from './components/SyncButton.svelte';
let qsos = [];
let stats = null;
@@ -33,9 +35,15 @@
let deleteConfirmText = '';
let deleting = false;
// QSO detail modal state
let selectedQSO = null;
let showQSODetailModal = false;
let filters = {
band: '',
mode: ''
mode: '',
confirmationType: 'all',
search: ''
};
// Load QSOs on mount
@@ -65,6 +73,10 @@
const activeFilters = {};
if (filters.band) activeFilters.band = filters.band;
if (filters.mode) activeFilters.mode = filters.mode;
if (filters.confirmationType && filters.confirmationType !== 'all') {
activeFilters.confirmationType = filters.confirmationType;
}
if (filters.search) activeFilters.search = filters.search;
activeFilters.page = currentPage;
activeFilters.limit = pageSize;
@@ -261,11 +273,34 @@
currentPage = 1;
filters = {
band: '',
mode: ''
mode: '',
confirmationType: 'all',
search: ''
};
loadQSOs();
}
function hasActiveFilters() {
return !!(filters.band || filters.mode || filters.search || (filters.confirmationType && filters.confirmationType !== 'all'));
}
function openQSODetailModal(qso) {
selectedQSO = qso;
showQSODetailModal = true;
}
function closeQSODetailModal() {
selectedQSO = null;
showQSODetailModal = false;
}
function getConfirmationStatus(status) {
if (status === 'Y') return { label: 'Confirmed', class: 'confirmed' };
if (status === 'N') return { label: 'Not Confirmed', class: 'not-confirmed' };
if (status === '?') return { label: 'Unknown', class: 'unknown' };
return { label: 'No Data', class: 'no-data' };
}
function goToPage(page) {
currentPage = page;
loadQSOs();
@@ -351,28 +386,20 @@
Clear All QSOs
</button>
{/if}
<button
class="btn btn-primary lotw-btn"
on:click={handleLoTWSync}
disabled={lotwSyncStatus === 'running' || lotwSyncStatus === 'pending' || deleting}
>
{#if lotwSyncStatus === 'running' || lotwSyncStatus === 'pending'}
LoTW Syncing...
{:else}
Sync from LoTW
{/if}
</button>
<button
class="btn btn-primary dcl-btn"
on:click={handleDCLSync}
disabled={dclSyncStatus === 'running' || dclSyncStatus === 'pending' || deleting}
>
{#if dclSyncStatus === 'running' || dclSyncStatus === 'pending'}
DCL Syncing...
{:else}
Sync from DCL
{/if}
</button>
<SyncButton
service="lotw"
syncStatus={lotwSyncStatus}
{deleting}
onSync={handleLoTWSync}
/>
<SyncButton
service="dcl"
syncStatus={dclSyncStatus}
{deleting}
onSync={handleDCLSync}
/>
</div>
</div>
@@ -510,30 +537,30 @@
</div>
{/if}
{#if stats}
<div class="stats-grid">
<div class="stat-card">
<div class="stat-value">{stats.total}</div>
<div class="stat-label">Total QSOs</div>
</div>
<div class="stat-card">
<div class="stat-value">{stats.confirmed}</div>
<div class="stat-label">Confirmed</div>
</div>
<div class="stat-card">
<div class="stat-value">{stats.uniqueEntities}</div>
<div class="stat-label">DXCC Entities</div>
</div>
<div class="stat-card">
<div class="stat-value">{stats.uniqueBands}</div>
<div class="stat-label">Bands</div>
</div>
</div>
{/if}
<QSOStats stats={stats} />
<div class="filters">
<h3>Filters</h3>
<div class="filters-header">
<h3>Filters</h3>
{#if pagination && !loading}
<div class="filter-count">
{#if hasActiveFilters()}
Showing <strong>{pagination.totalCount}</strong> filtered QSOs
{:else}
Showing <strong>{pagination.totalCount}</strong> total QSOs
{/if}
</div>
{/if}
</div>
<div class="filter-row">
<input
type="text"
bind:value={filters.search}
placeholder="Search callsign, entity, grid..."
class="search-input"
on:keydown={(e) => e.key === 'Enter' && applyFilters()}
/>
<select bind:value={filters.band} on:change={applyFilters}>
<option value="">All Bands</option>
{#each bands as band}
@@ -548,6 +575,15 @@
{/each}
</select>
<select bind:value={filters.confirmationType} on:change={applyFilters} class="confirmation-filter">
<option value="all">All QSOs</option>
<option value="any">Confirmed by at least 1 service</option>
<option value="lotw">LoTW Only</option>
<option value="dcl">DCL Only</option>
<option value="both">Both Confirmed</option>
<option value="none">Not Confirmed</option>
</select>
<button class="btn btn-secondary" on:click={clearFilters}>Clear</button>
</div>
</div>
@@ -579,7 +615,7 @@
</thead>
<tbody>
{#each qsos as qso}
<tr>
<tr on:click={() => openQSODetailModal(qso)} class="qso-row" on:keydown={(e) => e.key === 'Enter' && openQSODetailModal(qso)} role="button" tabindex="0">
<td>{formatDate(qso.qsoDate)}</td>
<td>{formatTime(qso.timeOn)}</td>
<td class="callsign">{qso.callsign}</td>
@@ -661,6 +697,184 @@
{/if}
</div>
<!-- QSO Detail Modal -->
{#if showQSODetailModal && selectedQSO}
<div class="modal-backdrop" on:click={closeQSODetailModal} on:keydown={(e) => e.key === 'Escape' && closeQSODetailModal()} role="dialog" aria-modal="true">
<div class="modal-content" on:click|stopPropagation>
<div class="modal-header">
<h2>QSO Details</h2>
<button class="modal-close" on:click={closeQSODetailModal} aria-label="Close modal">×</button>
</div>
<div class="modal-body">
<div class="qso-detail-section">
<h3>Basic Information</h3>
<div class="detail-grid">
<div class="detail-item">
<span class="detail-label">Callsign:</span>
<span class="detail-value">{selectedQSO.callsign}</span>
</div>
<div class="detail-item">
<span class="detail-label">Date:</span>
<span class="detail-value">{formatDate(selectedQSO.qsoDate)}</span>
</div>
<div class="detail-item">
<span class="detail-label">Time:</span>
<span class="detail-value">{formatTime(selectedQSO.timeOn)}</span>
</div>
<div class="detail-item">
<span class="detail-label">Band:</span>
<span class="detail-value">{selectedQSO.band || '-'}</span>
</div>
<div class="detail-item">
<span class="detail-label">Mode:</span>
<span class="detail-value">{selectedQSO.mode || '-'}</span>
</div>
</div>
</div>
<div class="qso-detail-section">
<h3>Location</h3>
<div class="detail-grid">
<div class="detail-item">
<span class="detail-label">Entity:</span>
<span class="detail-value">{selectedQSO.entity || '-'}</span>
</div>
{#if selectedQSO.entityId}
<div class="detail-item">
<span class="detail-label">DXCC ID:</span>
<span class="detail-value">{selectedQSO.entityId}</span>
</div>
{/if}
<div class="detail-item">
<span class="detail-label">Grid Square:</span>
<span class="detail-value">{selectedQSO.grid || '-'}</span>
</div>
{#if selectedQSO.gridSource}
<div class="detail-item">
<span class="detail-label">Grid Source:</span>
<span class="detail-value">{selectedQSO.gridSource}</span>
</div>
{/if}
<div class="detail-item">
<span class="detail-label">Continent:</span>
<span class="detail-value">{selectedQSO.continent || '-'}</span>
</div>
{#if selectedQSO.cqZone}
<div class="detail-item">
<span class="detail-label">CQ Zone:</span>
<span class="detail-value">{selectedQSO.cqZone}</span>
</div>
{/if}
{#if selectedQSO.ituZone}
<div class="detail-item">
<span class="detail-label">ITU Zone:</span>
<span class="detail-value">{selectedQSO.ituZone}</span>
</div>
{/if}
{#if selectedQSO.state}
<div class="detail-item">
<span class="detail-label">State:</span>
<span class="detail-value">{selectedQSO.state}</span>
</div>
{/if}
{#if selectedQSO.county}
<div class="detail-item">
<span class="detail-label">County:</span>
<span class="detail-value">{selectedQSO.county}</span>
</div>
{/if}
</div>
</div>
{#if selectedQSO.satName}
<div class="qso-detail-section">
<h3>Satellite</h3>
<div class="detail-grid">
<div class="detail-item">
<span class="detail-label">Satellite:</span>
<span class="detail-value">{selectedQSO.satName}</span>
</div>
{#if selectedQSO.satMode}
<div class="detail-item">
<span class="detail-label">Mode:</span>
<span class="detail-value">{selectedQSO.satMode}</span>
</div>
{/if}
</div>
</div>
{/if}
<div class="qso-detail-section">
<h3>DOK Information</h3>
<div class="detail-grid">
<div class="detail-item">
<span class="detail-label">Partner's DOK:</span>
<span class="detail-value">{selectedQSO.darcDok || '-'}</span>
</div>
<div class="detail-item">
<span class="detail-label">My DOK:</span>
<span class="detail-value">{selectedQSO.myDarcDok || '-'}</span>
</div>
</div>
</div>
<div class="qso-detail-section">
<h3>Confirmation Status</h3>
<div class="confirmation-details">
<div class="confirmation-service">
<h4>LoTW</h4>
<div class="confirmation-status-item">
<span class="detail-label">Status:</span>
<span class="status-badge {getConfirmationStatus(selectedQSO.lotwQslRstatus).class}">
{getConfirmationStatus(selectedQSO.lotwQslRstatus).label}
</span>
</div>
{#if selectedQSO.lotwQslRdate}
<div class="confirmation-status-item">
<span class="detail-label">Confirmed:</span>
<span class="detail-value">{formatDate(selectedQSO.lotwQslRdate)}</span>
</div>
{/if}
{#if selectedQSO.lotwSyncedAt}
<div class="confirmation-status-item">
<span class="detail-label">Last Synced:</span>
<span class="detail-value">{new Date(selectedQSO.lotwSyncedAt).toLocaleString()}</span>
</div>
{/if}
</div>
<div class="confirmation-service">
<h4>DCL</h4>
<div class="confirmation-status-item">
<span class="detail-label">Status:</span>
<span class="status-badge {getConfirmationStatus(selectedQSO.dclQslRstatus).class}">
{getConfirmationStatus(selectedQSO.dclQslRstatus).label}
</span>
</div>
{#if selectedQSO.dclQslRdate}
<div class="confirmation-status-item">
<span class="detail-label">Confirmed:</span>
<span class="detail-value">{formatDate(selectedQSO.dclQslRdate)}</span>
</div>
{/if}
</div>
</div>
</div>
<div class="qso-detail-section meta-info">
<span class="meta-label">QSO ID:</span>
<span class="meta-value">{selectedQSO.id}</span>
{#if selectedQSO.createdAt}
<span class="meta-label">Created:</span>
<span class="meta-value">{new Date(selectedQSO.createdAt).toLocaleString()}</span>
{/if}
</div>
</div>
</div>
</div>
{/if}
<style>
.container {
max-width: 1200px;
@@ -709,48 +923,6 @@
flex-wrap: wrap;
}
.lotw-btn {
background-color: #4a90e2;
}
.lotw-btn:hover:not(:disabled) {
background-color: #357abd;
}
.dcl-btn {
background-color: #e67e22;
}
.dcl-btn:hover:not(:disabled) {
background-color: #d35400;
}
.stats-grid {
display: grid;
grid-template-columns: repeat(auto-fit, minmax(150px, 1fr));
gap: 1rem;
margin-bottom: 2rem;
}
.stat-card {
background: white;
border: 1px solid #e0e0e0;
border-radius: 8px;
padding: 1.5rem;
text-align: center;
}
.stat-value {
font-size: 2rem;
font-weight: bold;
color: #4a90e2;
}
.stat-label {
color: #666;
margin-top: 0.5rem;
}
.filters {
background: #f8f9fa;
padding: 1rem;
@@ -759,11 +931,30 @@
}
.filters h3 {
margin: 0 0 1rem 0;
margin: 0;
font-size: 1rem;
color: #333;
}
.filters-header {
display: flex;
justify-content: space-between;
align-items: center;
margin-bottom: 1rem;
flex-wrap: wrap;
gap: 0.5rem;
}
.filter-count {
font-size: 0.875rem;
color: #666;
}
.filter-count strong {
color: #4a90e2;
font-weight: 600;
}
.filter-row {
display: flex;
gap: 1rem;
@@ -778,6 +969,21 @@
font-size: 0.9rem;
}
.search-input {
flex: 1;
min-width: 200px;
padding: 0.5rem;
border: 1px solid #ddd;
border-radius: 4px;
font-size: 0.9rem;
}
.search-input:focus {
outline: none;
border-color: #4a90e2;
box-shadow: 0 0 0 2px rgba(74, 144, 226, 0.2);
}
.checkbox-label {
display: flex;
align-items: center;
@@ -1101,4 +1307,215 @@
font-weight: 600;
color: #4a90e2;
}
/* QSO Detail Modal Styles */
.qso-row {
cursor: pointer;
transition: background-color 0.2s;
}
.qso-row:hover {
background-color: #f0f7ff !important;
}
.qso-row:focus {
outline: 2px solid #4a90e2;
outline-offset: -2px;
}
/* Modal Backdrop */
.modal-backdrop {
position: fixed;
top: 0;
left: 0;
right: 0;
bottom: 0;
background-color: rgba(0, 0, 0, 0.5);
display: flex;
align-items: center;
justify-content: center;
z-index: 1000;
padding: 1rem;
}
/* Modal Content */
.modal-content {
background: white;
border-radius: 8px;
max-width: 700px;
width: 100%;
max-height: 90vh;
overflow-y: auto;
box-shadow: 0 4px 20px rgba(0, 0, 0, 0.15);
}
/* Modal Header */
.modal-header {
display: flex;
justify-content: space-between;
align-items: center;
padding: 1.5rem;
border-bottom: 1px solid #e0e0e0;
}
.modal-header h2 {
margin: 0;
font-size: 1.5rem;
color: #333;
}
.modal-close {
background: none;
border: none;
font-size: 2rem;
line-height: 1;
cursor: pointer;
padding: 0;
width: 2rem;
height: 2rem;
display: flex;
align-items: center;
justify-content: center;
color: #666;
border-radius: 4px;
transition: all 0.2s;
}
.modal-close:hover {
background-color: #f0f0f0;
color: #333;
}
/* Modal Body */
.modal-body {
padding: 1.5rem;
}
/* Detail Sections */
.qso-detail-section {
margin-bottom: 2rem;
}
.qso-detail-section:last-child {
margin-bottom: 0;
}
.qso-detail-section h3 {
font-size: 1.1rem;
color: #4a90e2;
margin: 0 0 1rem 0;
padding-bottom: 0.5rem;
border-bottom: 2px solid #e0e0e0;
}
/* Detail Grid */
.detail-grid {
display: grid;
grid-template-columns: repeat(auto-fit, minmax(200px, 1fr));
gap: 1rem;
}
.detail-item {
display: flex;
flex-direction: column;
gap: 0.25rem;
}
.detail-label {
font-size: 0.75rem;
color: #666;
text-transform: uppercase;
font-weight: 600;
}
.detail-value {
font-size: 0.95rem;
color: #333;
font-weight: 500;
}
/* Confirmation Details */
.confirmation-details {
display: grid;
grid-template-columns: repeat(auto-fit, minmax(250px, 1fr));
gap: 2rem;
}
.confirmation-service h4 {
font-size: 1rem;
color: #333;
margin: 0 0 1rem 0;
padding-bottom: 0.5rem;
border-bottom: 1px solid #e0e0e0;
}
.confirmation-status-item {
display: flex;
justify-content: space-between;
align-items: center;
margin-bottom: 0.75rem;
}
/* Status Badges */
.status-badge {
padding: 0.25rem 0.75rem;
border-radius: 12px;
font-size: 0.75rem;
font-weight: 600;
text-transform: uppercase;
}
.status-badge.confirmed {
background-color: #4a90e2;
color: white;
}
.status-badge.not-confirmed,
.status-badge.no-data {
background-color: #e0e0e0;
color: #666;
}
.status-badge.unknown {
background-color: #fff3cd;
color: #856404;
}
/* Meta Info */
.meta-info {
display: flex;
gap: 2rem;
padding: 1rem;
background-color: #f8f9fa;
border-radius: 4px;
font-size: 0.8rem;
}
.meta-label {
color: #666;
font-weight: 600;
}
.meta-value {
color: #333;
font-family: monospace;
}
/* Scrollbar Styling for Modal */
.modal-content::-webkit-scrollbar {
width: 8px;
}
.modal-content::-webkit-scrollbar-track {
background: #f1f1f1;
}
.modal-content::-webkit-scrollbar-thumb {
background: #888;
border-radius: 4px;
}
.modal-content::-webkit-scrollbar-thumb:hover {
background: #555;
}
</style>

View File

@@ -0,0 +1,52 @@
<script>
export let stats;
</script>
{#if stats}
<div class="stats-grid">
<div class="stat-card">
<div class="stat-value">{stats.total}</div>
<div class="stat-label">Total QSOs</div>
</div>
<div class="stat-card">
<div class="stat-value">{stats.confirmed}</div>
<div class="stat-label">Confirmed</div>
</div>
<div class="stat-card">
<div class="stat-value">{stats.uniqueEntities}</div>
<div class="stat-label">DXCC Entities</div>
</div>
<div class="stat-card">
<div class="stat-value">{stats.uniqueBands}</div>
<div class="stat-label">Bands</div>
</div>
</div>
{/if}
<style>
.stats-grid {
display: grid;
grid-template-columns: repeat(auto-fit, minmax(150px, 1fr));
gap: 1rem;
margin-bottom: 2rem;
}
.stat-card {
background: white;
border: 1px solid #e0e0e0;
border-radius: 8px;
padding: 1.5rem;
text-align: center;
}
.stat-value {
font-size: 2rem;
font-weight: bold;
color: #4a90e2;
}
.stat-label {
color: #666;
margin-top: 0.5rem;
}
</style>

View File

@@ -0,0 +1,40 @@
<script>
export let service = 'lotw'; // 'lotw' or 'dcl'
export let syncStatus = null;
export let deleting = false;
export let onSync = () => {};
$: isRunning = syncStatus === 'running' || syncStatus === 'pending';
$: buttonClass = service === 'lotw' ? 'lotw-btn' : 'dcl-btn';
$: label = service === 'lotw' ? 'LoTW' : 'DCL';
</script>
<button
class="btn btn-primary {buttonClass}"
on:click={onSync}
disabled={isRunning || deleting}
>
{#if isRunning}
{label} Syncing...
{:else}
Sync from {label}
{/if}
</button>
<style>
.lotw-btn {
background-color: #4a90e2;
}
.lotw-btn:hover:not(:disabled) {
background-color: #357abd;
}
.dcl-btn {
background-color: #e67e22;
}
.dcl-btn:hover:not(:disabled) {
background-color: #d35400;
}
</style>

View File

@@ -25,14 +25,12 @@
try {
loading = true;
const response = await authAPI.getProfile();
console.log('Loaded profile:', response.user);
if (response.user) {
lotwUsername = response.user.lotwUsername || '';
lotwPassword = ''; // Never pre-fill password for security
hasLoTWCredentials = !!(response.user.lotwUsername && response.user.lotwPassword);
dclApiKey = response.user.dclApiKey || '';
hasDCLCredentials = !!response.user.dclApiKey;
console.log('Has LoTW credentials:', hasLoTWCredentials, 'Has DCL credentials:', hasDCLCredentials);
}
} catch (err) {
console.error('Failed to load profile:', err);
@@ -50,8 +48,6 @@
error = null;
successLoTW = false;
console.log('Saving LoTW credentials:', { lotwUsername, hasPassword: !!lotwPassword });
await authAPI.updateLoTWCredentials({
lotwUsername,
lotwPassword
@@ -78,8 +74,6 @@
error = null;
successDCL = false;
console.log('Saving DCL credentials:', { hasApiKey: !!dclApiKey });
await authAPI.updateDCLCredentials({
dclApiKey
});

View File

@@ -5,29 +5,42 @@ import { defineConfig } from 'vite';
function suppressURIErrorPlugin() {
return {
name: 'suppress-uri-error',
enforce: 'pre', // Run this plugin before others
configureServer(server) {
server.middlewares.use((req, res, next) => {
// Intercept malformed requests before they reach Vite's middleware
try {
// Try to decode the URL to catch malformed URIs early
if (req.url) {
decodeURI(req.url);
// Return a function that will be called after all plugins are configured
// This ensures our middleware is added at the correct time
return () => {
// Add middleware BEFORE all other middlewares
// We insert it at position 0 to ensure it runs first
server.middlewares.stack.unshift({
route: '',
handle: (req, res, next) => {
// Intercept malformed requests before they reach SvelteKit
try {
// Try to decode the URL to catch malformed URIs early
if (req.url) {
decodeURI(req.url);
// Also try the full URL construction that SvelteKit does
const base = `${server.config.server.https ? 'https' : 'http'}://${
req.headers[':authority'] || req.headers.host || 'localhost'
}`;
decodeURI(new URL(base + req.url).pathname);
}
} catch (e) {
// Silently ignore malformed URIs from browser extensions
res.writeHead(200, { 'Content-Type': 'text/plain' });
res.end('OK');
return;
}
} catch (e) {
// Silently ignore malformed URIs from browser extensions
// Don't call next(), just end the response
res.writeHead(200, { 'Content-Type': 'text/plain' });
res.end('OK');
return;
}
next();
});
next();
}});
};
}
};
}
export default defineConfig({
plugins: [sveltekit(), suppressURIErrorPlugin()],
plugins: [suppressURIErrorPlugin(), sveltekit()],
server: {
host: 'localhost',
port: 5173,