Logs and Debugging
Easy Deploy provides comprehensive logging and debugging tools to help you troubleshoot issues, monitor application behavior, and optimize performance.
Log Types
Application Logs
Your application’s console output and custom log messages (a minimal example follows the list):
- Console Output: stdout and stderr from your application
- Structured Logs: JSON formatted logs with metadata
- Custom Logs: Application-specific logging
- Framework Logs: Logs emitted by frameworks such as Express, Django, and Rails
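For illustration, both plain console output and JSON lines end up in your application logs; structured entries are simply easier to filter later. This is a minimal sketch, assuming your logs are collected from stdout/stderr:

```javascript
// Plain console output: captured as stdout/stderr
console.log('Order created for user 42');
console.error('Order creation failed');

// The same event as a structured JSON line with metadata
console.log(JSON.stringify({
  level: 'info',
  event: 'order.created',
  user_id: 42,
  timestamp: new Date().toISOString()
}));
```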
System Logs
Infrastructure and platform logs:
- Deployment Logs: Build and deployment process logs
- HTTP Access Logs: Request and response logs
- Error Logs: System errors and exceptions
- Performance Logs: Resource usage and timing data
External Service Logs
Logs from connected services:
- Database Logs: Query logs and connection events
- CDN Logs: Content delivery and caching logs
- Load Balancer Logs: Traffic distribution logs
- Security Logs: Authentication and authorization events
Accessing Logs
Dashboard Access
View logs in your application dashboard:
- Navigate to your application
- Go to Logs section
- Select log type and time range
- Use filters to find specific entries
CLI Access
Access logs via the Easy Deploy CLI:
```bash
# Install CLI
npm install -g @easydeploy/cli

# Login
easydeploy login

# View real-time logs
easydeploy logs APP_NAME --follow

# View logs for specific time range
easydeploy logs APP_NAME --since=1h

# Filter logs by level
easydeploy logs APP_NAME --level=error

# View specific number of lines
easydeploy logs APP_NAME --tail=100

# Export logs to file
easydeploy logs APP_NAME --since=24h > logs.txt
```
API Access
Access logs programmatically:
```bash
# Get application logs
curl -H "Authorization: Bearer YOUR_API_KEY" \
  "https://api.easydeploy.com/v1/applications/APP_ID/logs?since=1h&level=error"

# Stream real-time logs (WebSocket)
curl -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Upgrade: websocket" \
  "wss://api.easydeploy.com/v1/applications/APP_ID/logs/stream"

# Download log archive
curl -H "Authorization: Bearer YOUR_API_KEY" \
  "https://api.easydeploy.com/v1/applications/APP_ID/logs/download?start=2024-01-01&end=2024-01-31" \
  -o logs.zip
```
Structured Logging
Log Format
Use structured logging for better searchability:
```javascript
// Node.js with Winston
const winston = require('winston');

const logger = winston.createLogger({
  level: process.env.LOG_LEVEL || 'info',
  format: winston.format.combine(
    winston.format.timestamp(),
    winston.format.errors({ stack: true }),
    winston.format.json()
  ),
  transports: [
    new winston.transports.Console()
  ]
});

// Structured log example
logger.info('User authentication', {
  event: 'user.login',
  userId: user.id,
  email: user.email,
  ip: req.ip,
  userAgent: req.get('User-Agent'),
  duration: loginTime,
  success: true
});

logger.error('Database connection failed', {
  event: 'database.error',
  database: 'postgresql',
  host: process.env.DB_HOST,
  error: error.message,
  stack: error.stack,
  retryAttempt: 3
});
```
Python Logging
```python
import logging
import json
from datetime import datetime

# Configure structured logging
class StructuredFormatter(logging.Formatter):
    def format(self, record):
        log_entry = {
            'timestamp': datetime.utcnow().isoformat(),
            'level': record.levelname,
            'message': record.getMessage(),
            'module': record.module,
            'function': record.funcName,
            'line': record.lineno
        }

        if hasattr(record, 'extra_fields'):
            log_entry.update(record.extra_fields)

        return json.dumps(log_entry)

# Setup logger
logger = logging.getLogger(__name__)
handler = logging.StreamHandler()
handler.setFormatter(StructuredFormatter())
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# Usage
logger.info('User created', extra={
    'extra_fields': {
        'event': 'user.created',
        'user_id': user.id,
        'email': user.email,
        'ip': request.remote_addr
    }
})
```
PHP Logging
```php
<?php
use Monolog\Logger;
use Monolog\Handler\StreamHandler;
use Monolog\Formatter\JsonFormatter;

// Create logger
$logger = new Logger('app');
$handler = new StreamHandler('php://stdout', Logger::INFO);
$handler->setFormatter(new JsonFormatter());
$logger->pushHandler($handler);

// Usage
$logger->info('User authentication', [
    'event' => 'user.login',
    'user_id' => $user->id,
    'email' => $user->email,
    'ip' => $_SERVER['REMOTE_ADDR'],
    'success' => true
]);
?>
```
Log Levels
Standard Log Levels
Use appropriate log levels for different types of events:
- DEBUG: Detailed diagnostic information
- INFO: General information about application flow
- WARNING: Something unexpected happened, but the application continues
- ERROR: An error occurred, but the application continues
- CRITICAL: A serious error occurred; the application may stop
Log Level Examples
```javascript
// DEBUG: Detailed diagnostic information
logger.debug('Processing user request', {
  requestId: req.id,
  method: req.method,
  url: req.url,
  params: req.params,
  query: req.query
});

// INFO: Normal application flow
logger.info('User order completed', {
  event: 'order.completed',
  orderId: order.id,
  userId: user.id,
  amount: order.total,
  items: order.items.length
});

// WARNING: Unexpected but recoverable
logger.warn('API rate limit approaching', {
  event: 'rate_limit.warning',
  current_requests: currentRequests,
  limit: rateLimit,
  user_id: user.id,
  percentage: (currentRequests / rateLimit) * 100
});

// ERROR: Error occurred but app continues
logger.error('Payment processing failed', {
  event: 'payment.error',
  orderId: order.id,
  userId: user.id,
  amount: order.total,
  error: error.message,
  payment_provider: 'stripe',
  retry_count: retryCount
});

// CRITICAL: Serious error requiring immediate attention
logger.critical('Database connection lost', {
  event: 'database.critical',
  database: 'postgresql',
  host: process.env.DB_HOST,
  error: error.message,
  uptime: process.uptime(),
  active_connections: connectionPool.totalCount
});
```
Log Search and Filtering
Search Capabilities
Easy Deploy provides powerful log search:
- Full-text search: Search across all log messages
- Field filtering: Filter by specific fields (level, timestamp, user ID)
- Regular expressions: Use regex patterns for complex searches
- Time range filtering: Search within specific time periods
Search Examples
```
# Search for specific error
level:error AND message:"payment failed"

# Search for specific user activity
userId:12345 AND event:user.*

# Search for slow requests
duration:>1000 AND event:request.completed

# Search for authentication events in last hour
event:user.login AND timestamp:>now-1h

# Search with wildcards
message:database* AND level:error

# Complex search with multiple conditions
(level:error OR level:critical) AND (message:timeout OR message:connection)
```
Log Analytics
Analyze patterns in your logs (a short ad-hoc analysis sketch follows the list):
- Error trends: Track error rates over time
- Performance metrics: Analyze response times and throughput
- User behavior: Track user actions and patterns
- System health: Monitor resource usage and system events
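For quick ad-hoc analysis outside the platform, you can also run a short script over an exported log file, such as the logs.txt produced by the CLI export above. The sketch below counts error-level entries per hour; it assumes a JSON-lines export where each line is an object with level and timestamp fields, which depends on your log format:

```javascript
// Sketch: error trend from an exported JSON-lines log file
// (assumes one JSON object per line with "level" and "timestamp" fields)
const fs = require('fs');

const errorsPerHour = {};
for (const line of fs.readFileSync('logs.txt', 'utf8').split('\n')) {
  if (!line.trim()) continue;
  let entry;
  try { entry = JSON.parse(line); } catch { continue; } // skip non-JSON lines
  if (entry.level !== 'error') continue;
  const hour = String(entry.timestamp || '').slice(0, 13); // e.g. "2024-01-15T09"
  errorsPerHour[hour] = (errorsPerHour[hour] || 0) + 1;
}
console.table(errorsPerHour);
```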
Debugging Techniques
Error Tracking
Implement comprehensive error tracking:
```javascript
// Global error handler
process.on('uncaughtException', (error) => {
  logger.critical('Uncaught exception', {
    event: 'error.uncaught_exception',
    error: error.message,
    stack: error.stack,
    process_id: process.pid,
    memory_usage: process.memoryUsage(),
    uptime: process.uptime()
  });

  // Give time for log to be written
  setTimeout(() => process.exit(1), 1000);
});

process.on('unhandledRejection', (reason, promise) => {
  logger.error('Unhandled promise rejection', {
    event: 'error.unhandled_rejection',
    reason: reason,
    promise: promise,
    stack: reason?.stack
  });
});

// Express error handler
app.use((error, req, res, next) => {
  const errorId = generateErrorId();

  logger.error('HTTP request error', {
    event: 'http.error',
    error_id: errorId,
    message: error.message,
    stack: error.stack,
    request: {
      method: req.method,
      url: req.url,
      headers: req.headers,
      body: req.body,
      user: req.user
    },
    response_status: error.status || 500
  });

  res.status(error.status || 500).json({
    error: 'Internal Server Error',
    error_id: errorId,
    message: process.env.NODE_ENV === 'development' ? error.message : undefined
  });
});
```
Request Tracing
Track requests across your application:
```javascript
// Request tracing middleware
const { v4: uuidv4 } = require('uuid');

app.use((req, res, next) => {
  req.traceId = uuidv4();
  req.startTime = Date.now();

  logger.info('HTTP request started', {
    event: 'http.request.start',
    trace_id: req.traceId,
    method: req.method,
    url: req.url,
    user_agent: req.get('User-Agent'),
    ip: req.ip,
    user_id: req.user?.id
  });

  // Override console methods to include trace ID
  const originalLog = console.log;
  console.log = (...args) => {
    originalLog(`[${req.traceId}]`, ...args);
  };

  res.on('finish', () => {
    const duration = Date.now() - req.startTime;

    logger.info('HTTP request completed', {
      event: 'http.request.complete',
      trace_id: req.traceId,
      method: req.method,
      url: req.url,
      status: res.statusCode,
      duration: duration,
      response_size: res.get('Content-Length') || 0
    });
  });

  next();
});
```
Database Query Logging
Log database operations for debugging:
```javascript
// PostgreSQL query logging
const { Pool } = require('pg');

class LoggingPool extends Pool {
  query(text, params, callback) {
    const start = Date.now();
    const traceId = getCurrentTraceId();

    logger.debug('Database query started', {
      event: 'database.query.start',
      trace_id: traceId,
      query: text,
      params: params
    });

    return super.query(text, params, (err, result) => {
      const duration = Date.now() - start;

      if (err) {
        logger.error('Database query failed', {
          event: 'database.query.error',
          trace_id: traceId,
          query: text,
          params: params,
          error: err.message,
          duration: duration
        });
      } else {
        logger.debug('Database query completed', {
          event: 'database.query.complete',
          trace_id: traceId,
          query: text,
          duration: duration,
          rows_affected: result.rowCount
        });
      }

      if (callback) callback(err, result);
    });
  }
}
```
Log Management
Log Retention
Configure log retention policies:
```yaml
logging:
  retention:
    application_logs: 30d
    access_logs: 90d
    error_logs: 180d
    debug_logs: 7d

  compression:
    enabled: true
    algorithm: gzip

  archival:
    enabled: true
    storage: s3
    bucket: my-app-logs
```
Log Rotation
Automatic log rotation to manage disk space (an application-side sketch follows the list):
- Size-based rotation: Rotate logs when they reach a certain size
- Time-based rotation: Rotate logs daily, weekly, or monthly
- Compression: Compress old log files to save space
- Cleanup: Automatically delete old log files
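Rotation of platform-managed logs is handled for you. If your application also writes its own log files, the same ideas can be applied in the app itself; the sketch below is one possible approach using the community winston-daily-rotate-file transport (an assumption, not an Easy Deploy component), combining time- and size-based rotation with compression and cleanup:

```javascript
// Sketch: application-side rotation with Winston and winston-daily-rotate-file
// (hypothetical file paths; install with: npm install winston-daily-rotate-file)
const winston = require('winston');
require('winston-daily-rotate-file');

const rotatingFile = new winston.transports.DailyRotateFile({
  filename: 'logs/app-%DATE%.log', // time-based rotation: one file per day
  datePattern: 'YYYY-MM-DD',
  maxSize: '50m',                  // size-based rotation threshold
  maxFiles: '14d',                 // cleanup: drop files older than 14 days
  zippedArchive: true              // compress rotated files
});

const fileLogger = winston.createLogger({
  format: winston.format.json(),
  transports: [rotatingFile, new winston.transports.Console()]
});
```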
Log Export
Export logs for external analysis:
```bash
# Export logs to external system
curl -H "Authorization: Bearer YOUR_API_KEY" \
  "https://api.easydeploy.com/v1/applications/APP_ID/logs/export" \
  -d '{
    "destination": "s3",
    "bucket": "my-log-bucket",
    "format": "json",
    "compression": "gzip",
    "start_date": "2024-01-01",
    "end_date": "2024-01-31"
  }'

# Set up log forwarding
curl -X POST \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "type": "webhook",
    "url": "https://my-log-service.com/webhook",
    "format": "json",
    "batch_size": 100,
    "filters": {
      "level": ["error", "critical"]
    }
  }' \
  https://api.easydeploy.com/v1/applications/APP_ID/log-forwarding
```
Monitoring and Alerting
Log-based Alerts
Set up alerts based on log patterns:
```yaml
# Log alert configuration
log_alerts:
  - name: "High Error Rate"
    condition: "level:error count > 10 in 5 minutes"

  - name: "Authentication Failures"
    condition: "event:user.login.failed count > 5 in 1 minute"

  - name: "Database Errors"
    condition: "event:database.error count > 3 in 1 minute"

  - name: "Slow Requests"
    condition: "duration > 5000 count > 20 in 10 minutes"
```
Log Metrics
Create metrics from log data:
```javascript
// Custom metrics from logs
const metrics = require('@easydeploy/metrics');

// Count specific events
logger.info('User signup', {
  event: 'user.signup',
  user_id: user.id,
  plan: user.plan,
  referrer: req.get('Referrer')
});

// This automatically creates metrics:
// - user.signup.count (counter)
// - user.signup.by_plan (grouped counter)

// Track response times
logger.info('API request completed', {
  event: 'api.request.complete',
  endpoint: req.route.path,
  method: req.method,
  duration: responseTime,
  status: res.statusCode
});

// This creates metrics:
// - api.request.duration (histogram)
// - api.request.count (counter)
// - api.request.by_status (grouped counter)
```
Troubleshooting with Logs
Common Debugging Scenarios
Performance Issues
```
# Find slow requests
level:info AND event:http.request.complete AND duration:>1000

# Identify slow database queries
level:debug AND event:database.query.complete AND duration:>500

# Check memory usage patterns
event:memory.usage AND heap_used:>100000000
```
Error Investigation
```
# Find recent errors
level:error AND timestamp:>now-1h

# Trace specific error
trace_id:"550e8400-e29b-41d4-a716-446655440000"

# Find related errors
error:"Connection timeout" OR error:"ECONNRESET"
```
User Issue Debugging
```
# Track specific user activity
user_id:12345 AND timestamp:>now-24h

# Find authentication issues
event:user.login.failed AND user_id:12345

# Track user's request flow
trace_id:"user-trace-id" ORDER BY timestamp
```
Log Analysis Best Practices
- Use structured logging for better searchability
- Include correlation IDs to trace requests
- Log at appropriate levels to avoid noise
- Include context in log messages
- Monitor log patterns for anomalies
- Set up alerts for critical issues
- Regularly review log retention policies
Integration with External Tools
Log Forwarding
Forward logs to external services:
```bash
# Splunk integration
curl -X POST \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "type": "splunk",
    "host": "splunk.company.com",
    "port": 8088,
    "token": "splunk-hec-token",
    "index": "easydeploy"
  }' \
  https://api.easydeploy.com/v1/applications/APP_ID/log-forwarding

# ELK Stack integration
curl -X POST \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "type": "elasticsearch",
    "host": "elasticsearch.company.com",
    "port": 9200,
    "index": "easydeploy-logs",
    "username": "elastic",
    "password": "password"
  }' \
  https://api.easydeploy.com/v1/applications/APP_ID/log-forwarding
```
SIEM Integration
Integrate with Security Information and Event Management systems:
- Splunk: Enterprise log analysis
- ELK Stack: Elasticsearch, Logstash, Kibana
- Graylog: Open source log management
- Sumo Logic: Cloud-native log analytics
Need help with logging? Check our troubleshooting guide or contact support.