Compare commits


1 commit

Author SHA1 Message Date
Yu Yon
53c78e8e3c feat: add security module + add curl to Dockerfile for health checks
Main changes:
- New security/ module (risk scoring, threat detection, honeypots, etc.)
- Added curl to the Dockerfile to support Docker health checks
- Frontend page updates (admin console, user side)
- Database migrations and schema updates
- New kdocs upload service
- New security-related test cases

Co-Authored-By: Claude <noreply@anthropic.com>
2026-01-08 17:48:33 +08:00
247 changed files with 15162 additions and 16953 deletions


@@ -13,36 +13,11 @@ FLASK_DEBUG=false
 # Session config
 SESSION_LIFETIME_HOURS=24
-SESSION_COOKIE_SECURE=true # must be true in production with HTTPS; may be set to false temporarily for local HTTP debugging
+SESSION_COOKIE_SECURE=false # set to true when using HTTPS
-HTTPS_ENABLED=true
-# Whether to trust X-Forwarded-* proxy headers (off by default; enable only behind a trusted reverse proxy)
-TRUST_PROXY_HEADERS=false
-# Takes effect when TRUST_PROXY_HEADERS=true; configure your reverse-proxy CIDRs as needed
-TRUSTED_PROXY_CIDRS=127.0.0.1/32,::1/128
-# Optional: set a default admin password on first startup (avoids printing a plaintext password to the console)
-# DEFAULT_ADMIN_PASSWORD=your-strong-admin-password
 # ==================== Database config ====================
 DB_FILE=data/app_data.db
 DB_POOL_SIZE=5
-DB_CONNECT_TIMEOUT_SECONDS=10
-DB_BUSY_TIMEOUT_MS=10000
-DB_CACHE_SIZE_KB=8192
-DB_WAL_AUTOCHECKPOINT_PAGES=1000
-DB_MMAP_SIZE_MB=256
-DB_LOCK_RETRY_COUNT=3
-DB_LOCK_RETRY_BASE_MS=50
-DB_SLOW_QUERY_MS=120
-DB_SLOW_QUERY_SQL_MAX_LEN=240
-DB_SLOW_SQL_WINDOW_SECONDS=86400
-DB_SLOW_SQL_TOP_LIMIT=12
-DB_SLOW_SQL_RECENT_LIMIT=50
-DB_SLOW_SQL_MAX_EVENTS=20000
-DB_PRAGMA_OPTIMIZE_INTERVAL_SECONDS=21600
-DB_ANALYZE_INTERVAL_SECONDS=86400
-DB_WAL_CHECKPOINT_INTERVAL_SECONDS=43200
-DB_WAL_CHECKPOINT_MODE=PASSIVE
-SYSTEM_CONFIG_CACHE_TTL_SECONDS=30
 # ==================== Concurrency control ====================
 MAX_CONCURRENT_GLOBAL=2
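The database tuning knobs removed in this hunk are plain environment variables. A minimal sketch of how such settings are typically read with safe fallbacks (the helper name and defaults are illustrative; the project's real `app_config.py` may differ):

```python
import os

def env_int(name: str, default: int) -> int:
    """Read an integer setting from the environment, falling back to a default."""
    raw = os.environ.get(name, "").strip()
    try:
        return int(raw)
    except ValueError:
        return default

# Hypothetical defaults mirroring the .env entries above.
DB_POOL_SIZE = env_int("DB_POOL_SIZE", 5)
DB_BUSY_TIMEOUT_MS = env_int("DB_BUSY_TIMEOUT_MS", 10000)
DB_SLOW_QUERY_MS = env_int("DB_SLOW_QUERY_MS", 120)
```

Falling back on a parse error (rather than crashing) keeps a typo in `.env` from taking the whole service down.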

.gitignore (vendored, 183 lines changed)

@@ -1,152 +1,75 @@
-# Python
+# Browser binaries
+playwright/
+ms-playwright/
+# Database files (sensitive data)
+data/*.db
+data/*.db-shm
+data/*.db-wal
+data/*.backup*
+data/secret_key.txt
+data/update/
+# Cookies (sensitive user credentials)
+data/cookies/
+# Log files
+logs/
+*.log
+# Screenshots
+截图/
+# Python cache
 __pycache__/
 *.py[cod]
-*$py.class
+*.class
 *.so
 .Python
-build/
-develop-eggs/
-dist/
-downloads/
-eggs/
-.eggs/
-lib/
-lib64/
-parts/
-sdist/
-var/
-wheels/
-*.egg-info/
-.installed.cfg
-*.egg
-MANIFEST
-# PyInstaller
-*.manifest
-*.spec
-# Installer logs
-pip-log.txt
-pip-delete-this-directory.txt
-# Unit test / coverage reports
-htmlcov/
-.tox/
-.nox/
-.coverage
-.coverage.*
-.cache
-nosetests.xml
-coverage.xml
-*.cover
-.hypothesis/
 .pytest_cache/
-.ruff_cache/
-# Test and tool directories
-tests/
-tools/
+.mypy_cache/
+.coverage
+coverage.xml
+htmlcov/
-# Translations
-*.mo
-*.pot
-# Django stuff:
-*.log
-local_settings.py
-db.sqlite3
-# Flask stuff:
-instance/
-.webassets-cache
-# Scrapy stuff:
-.scrapy
-# Sphinx documentation
-docs/_build/
-# PyBuilder
-target/
-# Jupyter Notebook
-.ipynb_checkpoints
-# IPython
-profile_default/
-ipython_config.py
-# pyenv
-.python-version
-# celery beat schedule file
-celerybeat-schedule
-# SageMath parsed files
-*.sage.py
-# Environments
-.env
-.venv
 env/
 venv/
 ENV/
-env.bak/
-venv.bak/
-# Spyder project settings
-.spyderproject
-.spyproject
-# Rope project settings
-.ropeproject
+# Environment file (contains secrets)
+.env
+# Docker volumes
+volumes/
-# mkdocs documentation
-/site
-# mypy
-.mypy_cache/
-.dmypy.json
-dmypy.json
-# Project specific
-data/
-logs/
-screenshots/
-截图/
-ruff_cache/
-*.png
-*.jpg
-*.jpeg
-*.gif
-*.bmp
-*.ico
-*.pdf
-qr_code_*.png
-# Development files
-test_*.py
-start_*.bat
-temp_*.py
-kdocs_*test*.py
-simple_test.py
-tools/
-*.sh
 # IDE
 .vscode/
 .idea/
 *.swp
 *.swo
-*~
-# OS
+# System files
 .DS_Store
 Thumbs.db
 # Temporary files
 *.tmp
-*.temp
+*.bak
-*.backup
-# Allow committed test cases
+# Deployment scripts (contain server info)
-!tests/
+deploy_*.sh
-!tests/**/*.py
+verify_*.sh
+deploy.sh
+# Internal docs
+docs/
+# Frontend dependencies (large, should not be committed)
+node_modules/
+app-frontend/node_modules/
+admin-frontend/node_modules/
+# Local data
+data/
+docker-compose.yml.bak.*

README.md (297 lines changed)

@@ -1,58 +1,31 @@
 # Knowledge Management Platform Automation Tool - Docker Deployment Edition
-This is a Docker-based knowledge management platform automation tool supporting multiple users, scheduled tasks, proxy IPs, VIP management, Kingsoft Docs (kdocs) integration, and more.
+This is a Docker-based knowledge management platform automation tool supporting multiple users, scheduled tasks, proxy IPs, VIP management, and more.
 ---
-## Recent updates (2026-02)
-- Socket.IO now runs in `eventlet` mode (production first).
-- The admin frontend adds request caching/deduplication to reduce duplicate requests on report pages.
-- The default Docker port mapping is now `51232 -> 51233`.
-- Historical cleanup reports and clearly redundant files have been removed from the repository.
 ---
 ## Project Overview
-This project is a **Dockerized application** built with Flask + Vue 3 + Requests + wkhtmltoimage + SQLite, providing:
+This project is a **Dockerized application** built with Flask + Requests + wkhtmltopdf + SQLite, providing:
-### Core features
-- Multi-user registration and login (with email binding and verification)
-- Automated browsing tasks (pure HTTP API simulation, fast)
-- Smart screenshot system (wkhtmltoimage, thread pooled)
-- User-defined scheduled tasks (with random delay)
-- VIP user management (account limits, priority queue)
-- Admin console
-### Integrations
-- **Kingsoft Docs (kdocs) integration** - automatically uploads screenshots to an online sheet, with name-based cell matching
-- **Email notifications** - task-completion notices, password resets, email verification
-- **Proxy IP support** - dynamic proxy API integration
-### Security features
-- Threat-detection engine (JNDI / SQL injection / XSS / command injection)
-- IP and user risk scoring
-- Automatic blacklisting
-- Login device fingerprinting
-### Admin features
-- Modern Vue 3 SPA admin console
-- Announcements (with image support)
-- Bug feedback system
-- Task logs and statistics
+- Multi-user registration and login
+- Automated tasks (HTTP simulation)
+- Scheduled task dispatch
+- Screenshot management
+- VIP user management
+- Proxy IP support
 ---
 ## Tech Stack
-- **Backend**: Python 3.10+, Flask, Flask-SocketIO
+- **Backend**: Python 3.8+, Flask
-- **Frontend**: Vue 3 + Vite + Element Plus (SPA)
+- **Database**: SQLite
-- **Database**: SQLite + connection pool
+- **Automation**: Requests + BeautifulSoup
-- **Automation**: Requests + BeautifulSoup (browsing)
+- **Screenshots**: wkhtmltopdf / wkhtmltoimage
-- **Screenshots**: wkhtmltoimage
-- **Kingsoft Docs**: Playwright (sheet operations and uploads)
 - **Containerization**: Docker + Docker Compose
-- **Realtime**: Socket.IO (WebSocket)
+- **Frontend**: HTML + JavaScript + Socket.IO
 ---
@@ -62,46 +35,30 @@
 zsglpt/
 ├── app.py                  # Startup/assembly entry point
 ├── routes/                 # Route layer (Blueprints)
-│   ├── api_*.py            # API routes
-│   ├── admin_api/          # Admin console API
-│   └── pages.py            # Page routes
 ├── services/               # Business service layer
-│   ├── tasks.py            # Task scheduler
-│   ├── screenshots.py      # Screenshot service
-│   ├── kdocs_uploader.py   # Kingsoft Docs upload service
-│   └── schedule_*.py       # Scheduled-task helpers
-├── security/               # Security module
-│   ├── threat_detector.py  # Threat-detection engine
-│   ├── risk_scorer.py      # Risk scoring
-│   ├── blacklist.py        # Blacklist management
-│   └── middleware.py       # Security middleware
 ├── realtime/               # SocketIO events and pushes
 ├── database.py             # Stable database facade (public API)
 ├── db/                     # DB domain implementations + schema/migrations
 ├── db_pool.py              # Database connection pool
 ├── api_browser.py          # Requests automation (main browsing flow)
-├── browser_pool_worker.py  # wkhtmltoimage screenshot thread pool
+├── browser_pool_worker.py  # Screenshot WorkerPool
 ├── app_config.py           # Configuration management
 ├── app_logger.py           # Logging system
-├── app_security.py         # Security utility functions
+├── app_security.py         # Security module
-├── password_utils.py       # Password hashing utilities
+├── password_utils.py       # Password utilities
 ├── crypto_utils.py         # Encryption/decryption utilities
-├── email_service.py        # Email service (SMTP)
+├── email_service.py        # Email service
 ├── requirements.txt        # Python dependencies
 ├── requirements-dev.txt    # Dev dependencies (not in production image)
-├── pyproject.toml          # ruff/pytest config
+├── pyproject.toml          # ruff/black/pytest config
 ├── Dockerfile              # Docker image build file
 ├── docker-compose.yml      # Docker Compose file
-├── templates/              # HTML templates (SPA entry)
-│   ├── app.html            # User-side SPA entry
-│   ├── admin.html          # Admin SPA entry
-│   └── email/              # Email templates
-├── app-frontend/           # User-side Vue source
-├── admin-frontend/         # Admin Vue source
-├── static/                 # Frontend build artifacts
-│   ├── app/                # User SPA assets
-│   └── admin/              # Admin SPA assets
-└── scripts/                # Maintenance scripts (e.g. health monitoring)
+├── templates/              # HTML templates (SPA fallback)
+├── app-frontend/           # User frontend source (optionally kept)
+├── admin-frontend/         # Admin frontend source (optionally kept)
+└── static/                 # Frontend build artifacts (used at runtime)
+    ├── app/                # User SPA
+    └── admin/              # Admin SPA
 ```
 ---
--- ---
@@ -134,56 +91,20 @@ ssh -i /path/to/key root@your-server-ip
 ---
-### 3. Configure the encryption key (important!)
-The system uses Fernet symmetric encryption to protect stored account passwords. **The encryption key must be configured correctly on first deployment and on every migration!**
-#### Option 1: use a .env file (recommended)
-Create a `.env` file in the project root:
-```bash
-cd /www/wwwroot/zsglpt
-# Generate a random key
-python3 -c "from cryptography.fernet import Fernet; print(f'ENCRYPTION_KEY_RAW={Fernet.generate_key().decode()}')" > .env
-# Restrict permissions (readable by root only)
-chmod 600 .env
-```
-#### Option 2: migrate an existing key
-When migrating from another server, copy the original key:
-```bash
-# Copy the .env file from the old server
-scp root@old-server:/www/wwwroot/zsglpt/.env /www/wwwroot/zsglpt/
-```
-#### ⚠️ Important warnings
-- **A lost key means every encrypted password becomes undecryptable**; all account passwords would have to be re-entered
-- `.env` is listed in `.gitignore` and is never committed
-- Back the key up somewhere safe (such as a password manager)
-- On startup the system checks the key; if the key is missing while encrypted data exists, the app refuses to start and reports an error
---
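The removed section generates the key with the `cryptography` package. A Fernet key is 32 random bytes in URL-safe base64, so a stdlib-only sketch (an illustration, not the project's code) can produce a value in the same format for `ENCRYPTION_KEY_RAW`:

```python
import base64
import os

def generate_fernet_style_key() -> str:
    """Produce a key in Fernet's format: 32 random bytes, URL-safe base64."""
    return base64.urlsafe_b64encode(os.urandom(32)).decode("ascii")

key = generate_fernet_style_key()
print(f"ENCRYPTION_KEY_RAW={key}")  # one line, ready to redirect into a .env file
```

For real deployments, prefer `Fernet.generate_key()` as shown in the diff; the sketch only illustrates the key's shape.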
 ## Quick Deployment
 ### Step 1: upload the project files
-Upload the entire `zsglpt` folder to the server's `/www/wwwroot/` directory:
+Upload the entire `zsgpt2` folder to the server's `/www/wwwroot/` directory:
 ```bash
 # Run locally (Windows PowerShell or Git Bash)
-scp -r C:\Users\Administrator\Desktop\zsglpt root@your-server-ip:/www/wwwroot/
+scp -r C:\Users\Administrator\Desktop\zsgpt2 root@your-server-ip:/www/wwwroot/
 # Or upload with FileZilla, WinSCP, etc.
 ```
-After uploading, the path on the server should be `/www/wwwroot/zsglpt/`.
+After uploading, the path on the server should be `/www/wwwroot/zsgpt2/`.
 ### Step 2: SSH into the server
@@ -194,19 +115,16 @@ ssh root@your-server-ip
 ### Step 3: enter the project directory
 ```bash
-cd /www/wwwroot/zsglpt
+cd /www/wwwroot/zsgpt2
 ```
 ### Step 4: create the required directories
 ```bash
 mkdir -p data logs 截图
-chown -R 1000:1000 data logs 截图
+chmod 777 data logs 截图
-chmod 750 data logs 截图
 ```
-> Note: avoid `chmod 777`. If the in-container user is not `1000:1000`, use the actual UID/GID.
 ### Step 5: build and start the Docker container
 ```bash
@@ -214,7 +132,7 @@ chmod 750 data logs 截图
 docker build -t knowledge-automation .
 # Start the container
-docker compose up -d
+docker-compose up -d
 # Check the container status
 docker ps | grep knowledge-automation
@@ -229,8 +147,8 @@ docker logs -f knowledge-automation-multiuser
 If you see the following output, startup succeeded:
 ```
 服务器启动中...
-用户访问地址: http://0.0.0.0:51233
+用户访问地址: http://0.0.0.0:5000
-后台管理地址: http://0.0.0.0:51233/yuyx
+后台管理地址: http://0.0.0.0:5000/yuyx
 ```
 ---
@@ -264,7 +182,7 @@ server {
 # Reverse proxy
 location / {
-    proxy_pass http://127.0.0.1:51232;
+    proxy_pass http://127.0.0.1:5001;
     proxy_set_header Host $host;
     proxy_set_header X-Real-IP $remote_addr;
     proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
@@ -306,15 +224,15 @@ certbot renew --dry-run
 ### User side
-- **HTTP**: `http://your-server-ip:51232`
+- **HTTP**: `http://your-server-ip:5001`
 - **Domain**: `http://your-domain.com` (after configuring Nginx)
 - **HTTPS**: `https://your-domain.com` (after configuring SSL)
 ### Admin console
-- **Admin URL**: `/yuyx`
+- **Path**: `/yuyx`
-- **Admin account**: whatever exists in the database (`admin` is created by default on first run)
+- **Default account**: `admin`
-- **Admin password**: randomly generated on first run; check the container startup logs
+- **Default password**: `admin`
 **Change the password immediately after first login!**
@@ -372,7 +290,7 @@ docker logs -f knowledge-automation-multiuser
 docker logs --tail 100 knowledge-automation-multiuser
 # Tail the application log file
-tail -f /www/wwwroot/zsglpt/logs/app.log
+tail -f /www/wwwroot/zsgpt2/logs/app.log
 ```
 ### Enter the container
@@ -390,14 +308,14 @@ docker exec knowledge-automation-multiuser python -c "print('Hello')"
 If you changed the code, rebuild:
 ```bash
-cd /www/wwwroot/zsglpt
+cd /www/wwwroot/zsgpt2
 # Stop and remove the old container
-docker compose down
+docker-compose down
 # Rebuild and start
-docker compose build
+docker-compose build
-docker compose up -d
+docker-compose up -d
 ```
 ---
@@ -410,13 +328,13 @@ docker compose up -d
 cd /www/wwwroot
 # Back up the whole project
-tar -czf zsglpt_backup_$(date +%Y%m%d).tar.gz zsglpt/
+tar -czf zsgpt2_backup_$(date +%Y%m%d).tar.gz zsgpt2/
 # Back up only the database
-cp /www/wwwroot/zsglpt/data/app_data.db /backup/app_data_$(date +%Y%m%d).db
+cp /www/wwwroot/zsgpt2/data/app_data.db /backup/app_data_$(date +%Y%m%d).db
 # Back up screenshots
-tar -czf screenshots_$(date +%Y%m%d).tar.gz /www/wwwroot/zsglpt/截图/
+tar -czf screenshots_$(date +%Y%m%d).tar.gz /www/wwwroot/zsgpt2/截图/
 ```
 ### 2. Restore data
@@ -427,10 +345,10 @@ docker stop knowledge-automation-multiuser
 # Restore the whole project
 cd /www/wwwroot
-tar -xzf zsglpt_backup_20251027.tar.gz
+tar -xzf zsgpt2_backup_20251027.tar.gz
 # Restore the database
-cp /backup/app_data_20251027.db /www/wwwroot/zsglpt/data/app_data.db
+cp /backup/app_data_20251027.db /www/wwwroot/zsgpt2/data/app_data.db
 # Restart the container
 docker start knowledge-automation-multiuser
@@ -448,7 +366,7 @@ crontab -e
 ```bash
 # Back up daily at 3 AM
-0 3 * * * tar -czf /backup/zsglpt_$(date +\%Y\%m\%d).tar.gz /www/wwwroot/zsglpt/data
+0 3 * * * tar -czf /backup/zsgpt2_$(date +\%Y\%m\%d).tar.gz /www/wwwroot/zsgpt2/data
 ```
 ---
@@ -457,19 +375,19 @@ crontab -e
 ### 1. Container fails to start
-**Problem**: `docker compose up -d` fails
+**Problem**: `docker-compose up -d` fails
 **Solution**:
 ```bash
 # View the detailed error
-docker compose logs
+docker-compose logs
 # Check for port conflicts
-netstat -tlnp | grep 51232
+netstat -tlnp | grep 5001
 # Rebuild
-docker compose build --no-cache
+docker-compose build --no-cache
-docker compose up -d
+docker-compose up -d
 ```
 ### 2. 502 Bad Gateway
@@ -482,10 +400,10 @@ docker compose up -d
 docker ps | grep knowledge-automation
 # Check whether the port is listening
-netstat -tlnp | grep 51232
+netstat -tlnp | grep 5001
 # Test direct access
-curl http://127.0.0.1:51232
+curl http://127.0.0.1:5001
 # Check the Nginx config
 nginx -t
@@ -501,7 +419,7 @@ nginx -t
 docker restart knowledge-automation-multiuser
 # If the problem persists, optimize the database
-cd /www/wwwroot/zsglpt
+cd /www/wwwroot/zsgpt2
 cp data/app_data.db data/app_data.db.backup
 sqlite3 data/app_data.db "VACUUM;"
 ```
@@ -524,8 +442,8 @@ services:
 Then restart:
 ```bash
-docker compose down
+docker-compose down
-docker compose up -d
+docker-compose up -d
 ```
 ### 5. Screenshot tool not installed
@@ -567,13 +485,13 @@ wkhtmltoimage --version
 ```bash
 # Delete screenshots older than 7 days
-find /www/wwwroot/zsglpt/截图 -name "*.jpg" -mtime +7 -delete
+find /www/wwwroot/zsgpt2/截图 -name "*.jpg" -mtime +7 -delete
 # Clean old logs
-find /www/wwwroot/zsglpt/logs -name "*.log" -mtime +30 -delete
+find /www/wwwroot/zsgpt2/logs -name "*.log" -mtime +30 -delete
 # Optimize the database
-sqlite3 /www/wwwroot/zsglpt/data/app_data.db "VACUUM;"
+sqlite3 /www/wwwroot/zsgpt2/data/app_data.db "VACUUM;"
 ```
 ---
@@ -594,9 +512,9 @@ firewall-cmd --permanent --add-port=80/tcp
 firewall-cmd --permanent --add-port=443/tcp
 firewall-cmd --reload
-# Block direct access to port 51232 (allow Nginx only)
+# Block direct access to port 5001 (allow Nginx only)
-iptables -A INPUT -p tcp --dport 51232 -s 127.0.0.1 -j ACCEPT
+iptables -A INPUT -p tcp --dport 5001 -s 127.0.0.1 -j ACCEPT
-iptables -A INPUT -p tcp --dport 51232 -j DROP
+iptables -A INPUT -p tcp --dport 5001 -j DROP
 ```
 ### 3. Enable HTTPS
@@ -637,13 +555,13 @@ systemctl restart sshd
 ```bash
 # Count today's completed tasks
-grep "浏览完成" /www/wwwroot/zsglpt/logs/app.log | grep $(date +%Y-%m-%d) | wc -l
+grep "浏览完成" /www/wwwroot/zsgpt2/logs/app.log | grep $(date +%Y-%m-%d) | wc -l
 # View error logs
-grep "ERROR" /www/wwwroot/zsglpt/logs/app.log | tail -20
+grep "ERROR" /www/wwwroot/zsgpt2/logs/app.log | tail -20
 # View recent logins
-grep "登录成功" /www/wwwroot/zsglpt/logs/app.log | tail -10
+grep "登录成功" /www/wwwroot/zsgpt2/logs/app.log | tail -10
 ```
 ### 3. Database maintenance
@@ -667,7 +585,7 @@ EOF
 ```bash
 # Stop the container
-docker compose down
+docker-compose down
 # Back up data
 cp -r data data.backup
@@ -677,8 +595,8 @@ cp -r 截图 截图.backup
 # Upload with scp or an FTP tool
 # Rebuild and start
-docker compose build
+docker-compose build
-docker compose up -d
+docker-compose up -d
 ```
 ### 2. Database migration
@@ -697,8 +615,8 @@ docker logs knowledge-automation-multiuser | grep "数据库"
 | Port | Purpose | Mapping |
 |------|---------|---------|
-| 51233 | In-container application port | - |
+| 5000 | In-container application port | - |
-| 51232 | Host-mapped port | container 51233 → host 51232 |
+| 5001 | Host-mapped port | container 5000 → host 5001 |
 | 80 | HTTP port | Nginx |
 | 443 | HTTPS port | Nginx |
@@ -710,8 +628,6 @@ docker logs knowledge-automation-multiuser | grep "数据库"
 | Variable | Purpose | Default |
 |----------|---------|---------|
-| ENCRYPTION_KEY_RAW | Encryption key (Fernet format), highest priority | read from the .env file |
-| ENCRYPTION_KEY | Encryption key (derived via PBKDF2) | - |
 | TZ | Time zone | Asia/Shanghai |
 | PYTHONUNBUFFERED | Python output buffering | 1 |
 | WKHTMLTOIMAGE_PATH | Path to the wkhtmltoimage executable | auto-detected |
@@ -749,7 +665,7 @@ docker logs knowledge-automation-multiuser | grep "数据库"
 When something goes wrong, check in this order:
 1. **Container logs**: `docker logs knowledge-automation-multiuser`
-2. **Application logs**: `cat /www/wwwroot/zsglpt/logs/app.log`
+2. **Application logs**: `cat /www/wwwroot/zsgpt2/logs/app.log`
 3. **Nginx logs**: `cat /var/log/nginx/zsgpt_error.log`
 4. **System resources**: `docker stats`, `htop`, `df -h`
@@ -761,9 +677,9 @@ docker logs knowledge-automation-multiuser | grep "数据库"
 ---
-**Document version**: v2.1
+**Document version**: v1.0
-**Last updated**: 2026-02-07
+**Last updated**: 2025-10-29
-**Applies to**: Docker multi-user edition + Vue SPA
+**Applies to**: Docker multi-user edition
 ---
@@ -771,73 +687,26 @@ docker logs knowledge-automation-multiuser | grep "数据库"
 ```bash
 # 1. Upload the files
-scp -r zsglpt root@your-ip:/www/wwwroot/
+scp -r zsgpt2 root@your-ip:/www/wwwroot/
 # 2. SSH in
 ssh root@your-ip
 # 3. Enter the directory and create the required directories
-cd /www/wwwroot/zsglpt
+cd /www/wwwroot/zsgpt2
 mkdir -p data logs 截图
 chmod 777 data logs 截图
 # 4. Start the container
-docker compose up -d
+docker-compose up -d
 # 5. View logs
 docker logs -f knowledge-automation-multiuser
 # 6. Access the system
-# Open in a browser: http://your-ip:51232
+# Open in a browser: http://your-ip:5001
-# Admin console: http://your-ip:51232/yuyx
+# Admin console: http://your-ip:5001/yuyx
-# The first admin password is written to data/default_admin_credentials.txt (mode 600)
+# Default account: admin / admin
-# Log in, change the password immediately, and delete that file
 ```
 Done! 🎉
 ---
-## Changelog
-### v2.0 (2026-01-08)
-#### New features
-- **Kingsoft Docs integration**: automatically upload screenshots to a kdocs sheet
-  - Name-based cell matching
-  - Configurable valid row range
-  - Overwrite existing images
-  - Offline-status monitoring with email alerts
-- **Vue 3 SPA frontend**: user and admin sides upgraded to modern single-page apps
-  - Element Plus UI component library
-  - Realtime task-status updates
-  - Responsive design
-- **User-defined scheduled tasks**: users can create their own schedules
-  - Multiple time windows
-  - Random delay
-  - Per-account selection
-- **Security protection system**:
-  - Threat-detection engine (JNDI / SQL injection / XSS / command injection)
-  - IP and user risk scoring
-  - Automatic blacklisting
-- **Email notification system**:
-  - Task-completion notices
-  - Password-reset emails
-  - Email verification
-- **Announcements**: system announcements with image support
-- **Bug feedback**: users can submit issue reports
-#### Improvements
-- **Screenshot thread pool**: concurrent wkhtmltoimage screenshots
-  - Pool management, started on demand
-  - Resources released when idle
-- **Second-login mechanism**: refreshes the "last login time" display
-- **API warm-up**: connections warmed at startup to cut first-request latency
-- **Database connection pool**: better concurrency
-### v1.0 (2025-10-29)
-- Initial release
-- Multi-user system
-- Basic automation tasks
-- Scheduled task dispatch
-- Proxy IP support

admin-frontend/README.md (new file, 5 lines changed)

@@ -0,0 +1,5 @@
+# Vue 3 + Vite
+This template should help get you started developing with Vue 3 in Vite. The template uses Vue 3 `<script setup>` SFCs; check out the [script setup docs](https://v3.vuejs.org/api/sfc-script-setup.html#sfc-script-setup) to learn more.
+Learn more about IDE support for Vue in the [Vue Docs Scaling up Guide](https://vuejs.org/guide/scaling-up/tooling.html#ide-support).


@@ -5,13 +5,8 @@ export async function updateAdminUsername(newUsername) {
   return data
 }
-export async function updateAdminPassword(payload = {}) {
+export async function updateAdminPassword(newPassword) {
-  const currentPassword = String(payload.currentPassword || '')
-  const newPassword = String(payload.newPassword || '')
-  const { data } = await api.put('/admin/password', {
-    current_password: currentPassword,
-    new_password: newPassword,
-  })
+  const { data } = await api.put('/admin/password', { new_password: newPassword })
   return data
 }
@@ -20,27 +15,3 @@ export async function logout() {
   return data
 }
-export async function fetchAdminPasskeys() {
-  const { data } = await api.get('/admin/passkeys')
-  return data
-}
-export async function createAdminPasskeyOptions(payload = {}) {
-  const { data } = await api.post('/admin/passkeys/register/options', payload)
-  return data
-}
-export async function createAdminPasskeyVerify(payload = {}) {
-  const { data } = await api.post('/admin/passkeys/register/verify', payload)
-  return data
-}
-export async function deleteAdminPasskey(passkeyId) {
-  const { data } = await api.delete(`/admin/passkeys/${passkeyId}`)
-  return data
-}
-export async function reportAdminPasskeyClientError(payload = {}) {
-  const { data } = await api.post('/admin/passkeys/client-error', payload)
-  return data
-}


@@ -1,11 +1,7 @@
 import { api } from './client'
-import { createCachedGetter } from './cache'
-const browserPoolStatsGetter = createCachedGetter(async () => {
+export async function fetchBrowserPoolStats() {
   const { data } = await api.get('/browser_pool/stats')
   return data
-}, 4_000)
-export async function fetchBrowserPoolStats(options = {}) {
-  return browserPoolStatsGetter.run(options)
-}
+}


@@ -1,46 +0,0 @@
-export function createCachedGetter(fetcher, ttlMs = 0) {
-  let hasValue = false
-  let cachedValue = null
-  let expiresAt = 0
-  let inflight = null
-  async function run(options = {}) {
-    const force = Boolean(options?.force)
-    const now = Date.now()
-    if (!force && hasValue && now < expiresAt) {
-      return cachedValue
-    }
-    if (!force && inflight) {
-      return inflight
-    }
-    inflight = Promise.resolve()
-      .then(() => fetcher())
-      .then((data) => {
-        cachedValue = data
-        hasValue = true
-        const ttl = Math.max(0, Number(ttlMs) || 0)
-        expiresAt = Date.now() + ttl
-        return data
-      })
-      .finally(() => {
-        inflight = null
-      })
-    return inflight
-  }
-  function clear() {
-    hasValue = false
-    cachedValue = null
-    expiresAt = 0
-    inflight = null
-  }
-  return {
-    run,
-    clear,
-  }
-}
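The deleted `createCachedGetter` combines a TTL cache with deduplication of in-flight requests. The same TTL idea, sketched in Python for the synchronous case (names are illustrative and not part of this codebase):

```python
import time

def create_cached_getter(fetcher, ttl_seconds=0.0):
    """Wrap `fetcher` so repeat calls within `ttl_seconds` reuse the last result."""
    state = {"has_value": False, "value": None, "expires_at": 0.0}

    def run(force=False):
        now = time.monotonic()
        if not force and state["has_value"] and now < state["expires_at"]:
            return state["value"]  # cache hit: skip the fetch entirely
        state["value"] = fetcher()
        state["has_value"] = True
        state["expires_at"] = time.monotonic() + max(0.0, ttl_seconds)
        return state["value"]

    def clear():
        state.update(has_value=False, value=None, expires_at=0.0)

    return run, clear
```

The JS original additionally shares one in-flight promise among concurrent callers; a synchronous sketch has no equivalent of that part.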


@@ -4,10 +4,6 @@ import { ElMessage, ElMessageBox } from 'element-plus'
 let lastToastKey = ''
 let lastToastAt = 0
-const RETRYABLE_STATUS = new Set([408, 425, 429, 500, 502, 503, 504])
-const MAX_RETRY_COUNT = 1
-const RETRY_BASE_DELAY_MS = 300
 function toastErrorOnce(key, message, minIntervalMs = 1500) {
   const now = Date.now()
   if (key === lastToastKey && now - lastToastAt < minIntervalMs) return
@@ -22,41 +18,6 @@ function getCookie(name) {
   return match ? decodeURIComponent(match[1]) : ''
 }
-function isIdempotentMethod(method) {
-  return ['GET', 'HEAD', 'OPTIONS'].includes(String(method || 'GET').toUpperCase())
-}
-function shouldRetryRequest(error) {
-  const config = error?.config
-  if (!config || config.__no_retry) return false
-  if (!isIdempotentMethod(config.method)) return false
-  const retried = Number(config.__retry_count || 0)
-  if (retried >= MAX_RETRY_COUNT) return false
-  const code = String(error?.code || '')
-  if (code === 'ECONNABORTED' || code === 'ERR_NETWORK') return true
-  const status = Number(error?.response?.status || 0)
-  return RETRYABLE_STATUS.has(status)
-}
-function delay(ms) {
-  return new Promise((resolve) => {
-    window.setTimeout(resolve, Math.max(0, Number(ms || 0)))
-  })
-}
-async function retryRequestOnce(error, client) {
-  const config = error?.config || {}
-  const retried = Number(config.__retry_count || 0)
-  config.__retry_count = retried + 1
-  const backoffMs = RETRY_BASE_DELAY_MS * (retried + 1)
-  await delay(backoffMs)
-  return client.request(config)
-}
 export const api = axios.create({
   baseURL: '/yuyx/api',
   timeout: 30_000,
@@ -104,7 +65,6 @@ api.interceptors.response.use(
     const status = error?.response?.status
     const payload = error?.response?.data
     const message = payload?.error || payload?.message || error?.message || '请求失败'
-    const silent = Boolean(error?.config?.__silent)
     if (payload?.code === 'reauth_required' && error?.config && !error.config.__reauth_retry) {
       try {
@@ -116,32 +76,18 @@
       }
     }
-    if (shouldRetryRequest(error)) {
-      return retryRequestOnce(error, api)
-    }
     if (status === 401) {
-      if (!silent) {
-        toastErrorOnce('401', message || '登录已过期,请重新登录', 3000)
-      }
+      toastErrorOnce('401', message || '登录已过期,请重新登录', 3000)
       const pathname = window.location?.pathname || ''
       if (!pathname.startsWith('/yuyx')) window.location.href = '/yuyx'
     } else if (status === 403) {
-      if (!silent) {
-        toastErrorOnce('403', message || '需要管理员权限', 5000)
-      }
+      toastErrorOnce('403', message || '需要管理员权限', 5000)
     } else if (status) {
-      if (!silent) {
-        toastErrorOnce(`http:${status}:${message}`, message)
-      }
+      toastErrorOnce(`http:${status}:${message}`, message)
     } else if (error?.code === 'ECONNABORTED') {
-      if (!silent) {
-        toastErrorOnce('timeout', '请求超时', 3000)
-      }
+      toastErrorOnce('timeout', '请求超时', 3000)
     } else {
-      if (!silent) {
-        toastErrorOnce(`net:${message}`, message, 3000)
-      }
+      toastErrorOnce(`net:${message}`, message, 3000)
     }
     return Promise.reject(error)


@@ -1,10 +1,4 @@
 import { api } from './client'
-import { createCachedGetter } from './cache'
-const emailStatsGetter = createCachedGetter(async () => {
-  const { data } = await api.get('/email/stats')
-  return data
-}, 10_000)
 export async function fetchEmailSettings() {
   const { data } = await api.get('/email/settings')
@@ -13,12 +7,12 @@ export async function fetchEmailSettings() {
 export async function updateEmailSettings(payload) {
   const { data } = await api.post('/email/settings', payload)
-  emailStatsGetter.clear()
   return data
 }
-export async function fetchEmailStats(options = {}) {
-  return emailStatsGetter.run(options)
+export async function fetchEmailStats() {
+  const { data } = await api.get('/email/stats')
+  return data
 }
 export async function fetchEmailLogs(params) {
@@ -28,6 +22,6 @@ export async function fetchEmailLogs(params) {
 export async function cleanupEmailLogs(days) {
   const { data } = await api.post('/email/logs/cleanup', { days })
-  emailStatsGetter.clear()
   return data
 }


@@ -1,40 +1,26 @@
 import { api } from './client'
-import { createCachedGetter } from './cache'
-const FEEDBACK_STATS_TTL_MS = 10_000
-const feedbackStatsGetter = createCachedGetter(async () => {
-  const { data } = await api.get('/feedbacks', { params: { limit: 1, offset: 0 } })
-  return data?.stats
-}, FEEDBACK_STATS_TTL_MS)
 export async function fetchFeedbacks(status = '') {
   const { data } = await api.get('/feedbacks', { params: status ? { status } : {} })
   return data
 }
-export async function fetchFeedbackStats(options = {}) {
-  return feedbackStatsGetter.run(options)
-}
-export function clearFeedbackStatsCache() {
-  feedbackStatsGetter.clear()
-}
+export async function fetchFeedbackStats() {
+  const { data } = await api.get('/feedbacks', { params: { limit: 1, offset: 0 } })
+  return data?.stats
+}
 export async function replyFeedback(feedbackId, reply) {
   const { data } = await api.post(`/feedbacks/${feedbackId}/reply`, { reply })
-  clearFeedbackStatsCache()
   return data
 }
 export async function closeFeedback(feedbackId) {
   const { data } = await api.post(`/feedbacks/${feedbackId}/close`)
-  clearFeedbackStatsCache()
   return data
 }
 export async function deleteFeedback(feedbackId) {
   const { data } = await api.delete(`/feedbacks/${feedbackId}`)
-  clearFeedbackStatsCache()
   return data
 }


@@ -1,7 +1,7 @@
 import { api } from './client'
-export async function fetchKdocsStatus(params = {}, requestConfig = {}) {
-  const { data } = await api.get('/kdocs/status', { params, ...requestConfig })
+export async function fetchKdocsStatus(params = {}) {
+  const { data } = await api.get('/kdocs/status', { params })
   return data
 }


@@ -0,0 +1,17 @@
+import { api } from './client'
+export async function fetchPasswordResets() {
+  const { data } = await api.get('/password_resets')
+  return data
+}
+export async function approvePasswordReset(requestId) {
+  const { data } = await api.post(`/password_resets/${requestId}/approve`)
+  return data
+}
+export async function rejectPasswordReset(requestId) {
+  const { data } = await api.post(`/password_resets/${requestId}/reject`)
+  return data
+}


@@ -1,17 +1,7 @@
 import { api } from './client'
-import { createCachedGetter } from './cache'
-const SYSTEM_STATS_TTL_MS = 15_000
-const systemStatsGetter = createCachedGetter(async () => {
+export async function fetchSystemStats() {
   const { data } = await api.get('/stats')
   return data
-}, SYSTEM_STATS_TTL_MS)
-export async function fetchSystemStats(options = {}) {
-  return systemStatsGetter.run(options)
 }
-export function clearSystemStatsCache() {
-  systemStatsGetter.clear()
-}


@@ -1,18 +1,12 @@
 import { api } from './client'
-import { createCachedGetter } from './cache'
-const systemConfigGetter = createCachedGetter(async () => {
+export async function fetchSystemConfig() {
   const { data } = await api.get('/system/config')
   return data
-}, 15_000)
-export async function fetchSystemConfig(options = {}) {
-  return systemConfigGetter.run(options)
 }
 export async function updateSystemConfig(payload) {
   const { data } = await api.post('/system/config', payload)
-  systemConfigGetter.clear()
   return data
 }
@@ -20,3 +14,4 @@ export async function executeScheduleNow() {
   const { data } = await api.post('/schedule/execute', {})
   return data
 }


@@ -1,58 +1,23 @@
 import { api } from './client'
-import { createCachedGetter } from './cache'
-const serverInfoGetter = createCachedGetter(async () => {
+export async function fetchServerInfo() {
   const { data } = await api.get('/server/info')
   return data
-}, 30_000)
+}
-const dockerStatsGetter = createCachedGetter(async () => {
+export async function fetchDockerStats() {
   const { data } = await api.get('/docker_stats')
   return data
-}, 8_000)
+}
-const requestMetricsGetter = createCachedGetter(async () => {
-  const { data } = await api.get('/request_metrics')
-  return data
-}, 10_000)
-const slowSqlMetricsGetter = createCachedGetter(async () => {
-  const { data } = await api.get('/slow_sql_metrics')
-  return data
-}, 10_000)
-const taskStatsGetter = createCachedGetter(async () => {
+export async function fetchTaskStats() {
   const { data } = await api.get('/task/stats')
   return data
-}, 4_000)
+}
-const runningTasksGetter = createCachedGetter(async () => {
+export async function fetchRunningTasks() {
   const { data } = await api.get('/task/running')
   return data
-}, 2_000)
+}
-export async function fetchServerInfo(options = {}) {
-  return serverInfoGetter.run(options)
-}
-export async function fetchDockerStats(options = {}) {
-  return dockerStatsGetter.run(options)
-}
-export async function fetchRequestMetrics(options = {}) {
-  return requestMetricsGetter.run(options)
-}
-export async function fetchSlowSqlMetrics(options = {}) {
-  return slowSqlMetricsGetter.run(options)
-}
-export async function fetchTaskStats(options = {}) {
-  return taskStatsGetter.run(options)
-}
-export async function fetchRunningTasks(options = {}) {
-  return runningTasksGetter.run(options)
-}
 export async function fetchTaskLogs(params) {
@@ -62,7 +27,6 @@ export async function fetchTaskLogs(params) {
 export async function clearOldTaskLogs(days) {
   const { data } = await api.post('/task/logs/clear', { days })
-  taskStatsGetter.clear()
-  runningTasksGetter.clear()
   return data
 }


@@ -0,0 +1,26 @@
import { api } from './client'
export async function fetchUpdateStatus() {
const { data } = await api.get('/update/status')
return data
}
export async function fetchUpdateResult() {
const { data } = await api.get('/update/result')
return data
}
export async function fetchUpdateLog(params = {}) {
const { data } = await api.get('/update/log', { params })
return data
}
export async function requestUpdateCheck() {
const { data } = await api.post('/update/check', {})
return data
}
export async function requestUpdateRun(payload = {}) {
const { data } = await api.post('/update/run', payload)
return data
}


@@ -0,0 +1 @@
<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" class="iconify iconify--logos" width="37.07" height="36" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 198"><path fill="#41B883" d="M204.8 0H256L128 220.8L0 0h97.92L128 51.2L157.44 0h47.36Z"></path><path fill="#41B883" d="m0 0l128 220.8L256 0h-51.2L128 132.48L50.56 0H0Z"></path><path fill="#35495E" d="M50.56 0L128 133.12L204.8 0h-47.36L128 51.2L97.92 0H50.56Z"></path></svg>



@@ -1,164 +0,0 @@
<script setup>
const props = defineProps({
items: {
type: Array,
default: () => [],
},
loading: {
type: Boolean,
default: false,
},
minWidth: {
type: Number,
default: 180,
},
})
</script>
<template>
<div class="metric-grid" :style="{ '--metric-min': `${minWidth}px` }">
<div
v-for="item in items"
:key="item?.key || item?.label"
class="metric-card"
:class="`metric-tone--${item?.tone || 'blue'}`"
>
<div class="metric-top">
<div v-if="item?.icon" class="metric-icon">
<el-icon><component :is="item.icon" /></el-icon>
</div>
<div class="metric-label">{{ item?.label || '-' }}</div>
</div>
<div class="metric-value">
<el-skeleton v-if="loading" :rows="1" animated />
<template v-else>{{ item?.value ?? 0 }}</template>
</div>
<div v-if="item?.hint || item?.sub" class="metric-hint app-muted">{{ item?.hint || item?.sub }}</div>
</div>
</div>
</template>
<style scoped>
.metric-grid {
display: grid;
grid-template-columns: repeat(auto-fill, minmax(var(--metric-min), 1fr));
gap: 12px;
}
.metric-card {
position: relative;
overflow: hidden;
border-radius: 14px;
border: 1px solid var(--app-border);
background: linear-gradient(180deg, rgba(255, 255, 255, 0.98), rgba(250, 252, 255, 0.9));
box-shadow: var(--app-shadow-soft);
padding: 13px 14px;
min-height: 104px;
}
.metric-card::before {
content: '';
position: absolute;
top: 0;
left: 0;
right: 0;
height: 3px;
background: var(--metric-top, #3b82f6);
}
.metric-top {
display: flex;
align-items: center;
gap: 8px;
}
.metric-icon {
width: 26px;
height: 26px;
border-radius: 8px;
display: flex;
align-items: center;
justify-content: center;
background: var(--metric-icon-bg, rgba(59, 130, 246, 0.12));
color: var(--metric-icon-color, #1d4ed8);
}
.metric-label {
font-size: 12px;
color: #475569;
font-weight: 700;
line-height: 1.4;
}
.metric-value {
margin-top: 10px;
font-size: 26px;
line-height: 1.05;
font-weight: 900;
color: #0f172a;
}
.metric-hint {
margin-top: 8px;
font-size: 12px;
line-height: 1.4;
}
.metric-tone--blue {
--metric-top: linear-gradient(90deg, #3b82f6, #06b6d4);
--metric-icon-bg: rgba(59, 130, 246, 0.14);
--metric-icon-color: #1d4ed8;
}
.metric-tone--green {
--metric-top: linear-gradient(90deg, #10b981, #22c55e);
--metric-icon-bg: rgba(16, 185, 129, 0.14);
--metric-icon-color: #047857;
}
.metric-tone--purple {
--metric-top: linear-gradient(90deg, #8b5cf6, #ec4899);
--metric-icon-bg: rgba(139, 92, 246, 0.14);
--metric-icon-color: #6d28d9;
}
.metric-tone--orange {
--metric-top: linear-gradient(90deg, #f59e0b, #f97316);
--metric-icon-bg: rgba(245, 158, 11, 0.14);
--metric-icon-color: #b45309;
}
.metric-tone--red {
--metric-top: linear-gradient(90deg, #ef4444, #f43f5e);
--metric-icon-bg: rgba(239, 68, 68, 0.14);
--metric-icon-color: #b91c1c;
}
.metric-tone--cyan {
--metric-top: linear-gradient(90deg, #06b6d4, #3b82f6);
--metric-icon-bg: rgba(6, 182, 212, 0.14);
--metric-icon-color: #0e7490;
}
@media (max-width: 768px) {
.metric-grid {
grid-template-columns: repeat(2, minmax(0, 1fr));
}
.metric-card {
min-height: 96px;
}
.metric-value {
font-size: 22px;
}
}
@media (max-width: 480px) {
.metric-grid {
grid-template-columns: 1fr;
}
}
</style>


@@ -0,0 +1,51 @@
<script setup>
import { computed } from 'vue'
const props = defineProps({
stats: { type: Object, required: true },
loading: { type: Boolean, default: false },
})
const items = computed(() => [
{ key: 'total_users', label: '总用户数' },
{ key: 'new_users_today', label: '今日注册' },
{ key: 'new_users_7d', label: '近7天注册' },
{ key: 'total_accounts', label: '总账号数' },
{ key: 'vip_users', label: 'VIP用户' },
])
</script>
<template>
<el-row :gutter="12" class="stats-row">
<el-col v-for="it in items" :key="it.key" :xs="12" :sm="8" :md="6" :lg="4" :xl="4">
<el-card shadow="never" class="stat-card" :body-style="{ padding: '14px' }">
<div class="stat-value">
<el-skeleton v-if="loading" :rows="1" animated />
<template v-else>{{ stats?.[it.key] ?? 0 }}</template>
</div>
<div class="stat-label">{{ it.label }}</div>
</el-card>
</el-col>
</el-row>
</template>
<style scoped>
.stats-row {
margin-bottom: 14px;
}
.stat-card {
border-radius: var(--app-radius);
border: 1px solid var(--app-border);
box-shadow: var(--app-shadow);
}
.stat-value {
font-size: 22px;
font-weight: 800;
line-height: 1.1;
}
.stat-label {
margin-top: 6px;
font-size: 12px;
color: var(--app-muted);
}
</style>


@@ -17,7 +17,6 @@ import {
import { api } from '../api/client' import { api } from '../api/client'
import { fetchFeedbackStats } from '../api/feedbacks' import { fetchFeedbackStats } from '../api/feedbacks'
import { fetchSystemStats } from '../api/stats' import { fetchSystemStats } from '../api/stats'
import { clearCachedKdocsStatus, preloadKdocsStatus } from '../utils/kdocsStatusCache'
const route = useRoute() const route = useRoute()
const router = useRouter() const router = useRouter()
@@ -26,17 +25,16 @@ const stats = ref({})
const adminUsername = computed(() => stats.value?.admin_username || '') const adminUsername = computed(() => stats.value?.admin_username || '')
async function refreshStats(options = {}) { async function refreshStats() {
stats.value = await fetchSystemStats(options) try {
stats.value = await fetchSystemStats()
} finally {
}
} }
const loadingBadges = ref(false) const loadingBadges = ref(false)
const pendingFeedbackCount = ref(0) const pendingFeedbackCount = ref(0)
let badgeTimer
const BADGE_POLL_ACTIVE_MS = 60_000
const BADGE_POLL_HIDDEN_MS = 180_000
let badgeTimer = null
async function refreshNavBadges(partial = null) { async function refreshNavBadges(partial = null) {
if (partial && typeof partial === 'object') { if (partial && typeof partial === 'object') {
@@ -57,34 +55,6 @@ async function refreshNavBadges(partial = null) {
} }
} }
function isPageHidden() {
if (typeof document === 'undefined') return false
return document.visibilityState === 'hidden'
}
function currentBadgePollDelay() {
return isPageHidden() ? BADGE_POLL_HIDDEN_MS : BADGE_POLL_ACTIVE_MS
}
function stopBadgePolling() {
if (!badgeTimer) return
window.clearTimeout(badgeTimer)
badgeTimer = null
}
function scheduleBadgePolling() {
stopBadgePolling()
badgeTimer = window.setTimeout(async () => {
badgeTimer = null
await refreshNavBadges().catch(() => {})
scheduleBadgePolling()
}, currentBadgePollDelay())
}
function onVisibilityChange() {
scheduleBadgePolling()
}
provide('refreshStats', refreshStats) provide('refreshStats', refreshStats)
provide('adminStats', stats) provide('adminStats', stats)
provide('refreshNavBadges', refreshNavBadges) provide('refreshNavBadges', refreshNavBadges)
@@ -103,19 +73,14 @@ onMounted(async () => {
mediaQuery.addEventListener?.('change', syncIsMobile) mediaQuery.addEventListener?.('change', syncIsMobile)
syncIsMobile() syncIsMobile()
// 后台登录后预加载金山文档登录状态,系统配置页可直接复用缓存。
void preloadKdocsStatus({ maxAgeMs: 60_000, silent: true }).catch(() => {})
await refreshStats() await refreshStats()
await refreshNavBadges() await refreshNavBadges()
scheduleBadgePolling() badgeTimer = window.setInterval(refreshNavBadges, 60_000)
window.addEventListener('visibilitychange', onVisibilityChange)
}) })
onBeforeUnmount(() => { onBeforeUnmount(() => {
mediaQuery?.removeEventListener?.('change', syncIsMobile) mediaQuery?.removeEventListener?.('change', syncIsMobile)
stopBadgePolling() window.clearInterval(badgeTimer)
window.removeEventListener('visibilitychange', onVisibilityChange)
}) })
const menuItems = [ const menuItems = [
@@ -141,30 +106,19 @@ function badgeFor(item) {
} }
async function logout() { async function logout() {
let confirmed = false
try { try {
await ElMessageBox.confirm('确定退出管理员登录吗?', '退出登录', { await ElMessageBox.confirm('确定退出管理员登录吗?', '退出登录', {
confirmButtonText: '退出', confirmButtonText: '退出',
cancelButtonText: '取消', cancelButtonText: '取消',
type: 'warning', type: 'warning',
}) })
confirmed = true } catch {
} catch (error) { return
const reason = String(error || '').toLowerCase()
if (reason === 'cancel' || reason === 'close') return
try {
confirmed = window.confirm('确定退出管理员登录吗?')
} catch {
confirmed = false
}
} }
if (!confirmed) return
try { try {
await api.post('/logout') await api.post('/logout')
} finally { } finally {
clearCachedKdocsStatus()
window.location.href = '/yuyx' window.location.href = '/yuyx'
} }
} }
@@ -208,27 +162,25 @@ async function go(path) {
<span class="app-muted">管理员</span> <span class="app-muted">管理员</span>
<strong>{{ adminUsername || '-' }}</strong> <strong>{{ adminUsername || '-' }}</strong>
</div> </div>
<el-button type="primary" plain class="logout-btn" @click="logout">退出</el-button> <el-button type="primary" plain @click="logout">退出</el-button>
</div> </div>
</el-header> </el-header>
<el-main class="layout-main"> <el-main class="layout-main">
<div class="main-shell"> <Suspense>
<Suspense> <template #default>
<template #default> <RouterView />
<RouterView /> </template>
</template> <template #fallback>
<template #fallback> <el-card shadow="never" :body-style="{ padding: '16px' }" class="fallback-card">
<el-card shadow="never" :body-style="{ padding: '16px' }" class="fallback-card"> <el-skeleton :rows="5" animated />
<el-skeleton :rows="5" animated /> </el-card>
</el-card> </template>
</template> </Suspense>
</Suspense>
</div>
</el-main> </el-main>
</el-container> </el-container>
<el-drawer v-model="drawerOpen" size="min(82vw, 280px)" direction="ltr" :with-header="false"> <el-drawer v-model="drawerOpen" size="240px" :with-header="false">
<div class="drawer-brand"> <div class="drawer-brand">
<div class="brand-title">后台管理</div> <div class="brand-title">后台管理</div>
<div class="brand-sub app-muted">知识管理平台</div> <div class="brand-sub app-muted">知识管理平台</div>
@@ -252,58 +204,31 @@ async function go(path) {
} }
.layout-aside { .layout-aside {
background: linear-gradient(180deg, rgba(255, 255, 255, 0.98), rgba(248, 250, 252, 0.94)); background: #ffffff;
border-right: 1px solid var(--app-border); border-right: 1px solid var(--app-border);
box-shadow: 4px 0 16px rgba(15, 23, 42, 0.04);
}
.brand,
.drawer-brand {
padding: 18px 16px 14px;
} }
.brand { .brand {
border-bottom: 1px solid rgba(15, 23, 42, 0.06); padding: 18px 16px 10px;
}
.drawer-brand {
padding: 18px 16px 10px;
} }
.brand-title { .brand-title {
font-size: 16px; font-size: 15px;
font-weight: 800; font-weight: 800;
letter-spacing: 0.2px; letter-spacing: 0.2px;
} }
.brand-sub { .brand-sub {
margin-top: 4px; margin-top: 2px;
font-size: 12px; font-size: 12px;
} }
.aside-menu { .aside-menu {
border-right: none; border-right: none;
padding: 8px;
background: transparent;
}
.aside-menu :deep(.el-menu-item) {
height: 42px;
line-height: 42px;
margin: 3px 0;
border-radius: 10px;
color: #334155;
font-weight: 600;
}
.aside-menu :deep(.el-menu-item .el-icon) {
margin-right: 10px;
}
.aside-menu :deep(.el-menu-item:hover) {
background: rgba(59, 130, 246, 0.08);
color: #1d4ed8;
}
.aside-menu :deep(.el-menu-item.is-active) {
background: linear-gradient(135deg, rgba(37, 99, 235, 0.12), rgba(124, 58, 237, 0.1));
color: #1e40af;
} }
.menu-label { .menu-label {
@@ -318,22 +243,16 @@ async function go(path) {
} }
.fallback-card { .fallback-card {
min-height: 160px; border-radius: var(--app-radius);
border-radius: var(--app-radius-lg);
border: 1px solid var(--app-border); border: 1px solid var(--app-border);
} }
.layout-header { .layout-header {
position: sticky;
top: 0;
z-index: 20;
display: flex; display: flex;
align-items: center; align-items: center;
justify-content: space-between; justify-content: space-between;
gap: 12px; gap: 12px;
height: 58px; background: rgba(246, 247, 251, 0.6);
padding: 0 18px;
background: rgba(255, 255, 255, 0.78);
backdrop-filter: saturate(180%) blur(10px); backdrop-filter: saturate(180%) blur(10px);
border-bottom: 1px solid var(--app-border); border-bottom: 1px solid var(--app-border);
} }
@@ -346,7 +265,7 @@ async function go(path) {
} }
.header-title { .header-title {
font-size: 15px; font-size: 14px;
font-weight: 800; font-weight: 800;
white-space: nowrap; white-space: nowrap;
overflow: hidden; overflow: hidden;
@@ -369,33 +288,18 @@ async function go(path) {
align-items: baseline; align-items: baseline;
gap: 8px; gap: 8px;
font-size: 13px; font-size: 13px;
color: #334155;
}
.admin-name strong {
color: #0f172a;
font-weight: 800;
}
.logout-btn {
min-width: 74px;
} }
.layout-main { .layout-main {
padding: 18px; padding: 16px;
}
.main-shell {
width: 100%;
max-width: 1600px;
margin: 0 auto;
} }
@media (max-width: 768px) { @media (max-width: 768px) {
.layout-header { .layout-header {
flex-wrap: wrap; flex-wrap: wrap;
height: auto; height: auto;
padding: 10px 12px; padding-top: 10px;
padding-bottom: 10px;
} }
.header-right { .header-right {
@@ -407,10 +311,6 @@ async function go(path) {
display: none; display: none;
} }
.admin-name strong {
display: none;
}
.layout-main { .layout-main {
padding: 12px; padding: 12px;
} }


@@ -193,6 +193,9 @@ onMounted(load)
<div class="page-stack"> <div class="page-stack">
<div class="app-page-title"> <div class="app-page-title">
<h2>公告管理</h2> <h2>公告管理</h2>
<div>
<el-button @click="load">刷新</el-button>
</div>
</div> </div>
<el-card shadow="never" :body-style="{ padding: '16px' }" class="card"> <el-card shadow="never" :body-style="{ padding: '16px' }" class="card">
@@ -288,22 +291,18 @@ onMounted(load)
.page-stack { .page-stack {
display: flex; display: flex;
flex-direction: column; flex-direction: column;
gap: 14px; gap: 12px;
min-width: 0;
} }
.card { .card {
border-radius: var(--app-radius); border-radius: var(--app-radius);
border: 1px solid var(--app-border); border: 1px solid var(--app-border);
background: var(--app-card-bg);
box-shadow: var(--app-shadow-soft);
} }
.section-title { .section-title {
margin: 0 0 12px; margin: 0 0 12px;
font-size: 15px; font-size: 14px;
font-weight: 800; font-weight: 800;
letter-spacing: 0.2px;
} }
.help { .help {
@@ -365,9 +364,6 @@ onMounted(load)
.table-wrap { .table-wrap {
overflow-x: auto; overflow-x: auto;
border-radius: 10px;
border: 1px solid var(--app-border);
background: #fff;
} }
.ellipsis { .ellipsis {


@@ -12,7 +12,6 @@ import {
testSmtpConfig, testSmtpConfig,
updateSmtpConfig, updateSmtpConfig,
} from '../api/smtp' } from '../api/smtp'
import MetricGrid from '../components/MetricGrid.vue'
// ========== 全局设置 ========== // ========== 全局设置 ==========
const emailSettingsLoading = ref(false) const emailSettingsLoading = ref(false)
@@ -488,21 +487,6 @@ function emailLogUserLabel(row) {
return '系统' return '系统'
} }
const emailSummaryCards = computed(() => [
{ key: 'total_sent', label: '总发送', value: emailStats.value?.total_sent || 0, tone: 'blue' },
{ key: 'total_success', label: '成功', value: emailStats.value?.total_success || 0, tone: 'green' },
{ key: 'total_failed', label: '失败', value: emailStats.value?.total_failed || 0, tone: 'red' },
{ key: 'success_rate', label: '成功率', value: `${emailStats.value?.success_rate || 0}%`, tone: 'purple' },
])
const emailTypeCards = computed(() => [
{ key: 'register_sent', label: '注册验证', value: emailStats.value?.register_sent || 0, tone: 'cyan' },
{ key: 'reset_sent', label: '密码重置', value: emailStats.value?.reset_sent || 0, tone: 'orange' },
{ key: 'bind_sent', label: '邮箱绑定', value: emailStats.value?.bind_sent || 0, tone: 'purple' },
{ key: 'task_complete_sent', label: '任务完成', value: emailStats.value?.task_complete_sent || 0, tone: 'green' },
])
async function loadEmailStats() { async function loadEmailStats() {
emailStatsLoading.value = true emailStatsLoading.value = true
try { try {
@@ -590,6 +574,9 @@ onMounted(refreshAll)
<div class="page-stack"> <div class="page-stack">
<div class="app-page-title"> <div class="app-page-title">
<h2>邮件配置</h2> <h2>邮件配置</h2>
<div class="toolbar">
<el-button @click="refreshAll">刷新</el-button>
</div>
</div> </div>
<el-card shadow="never" :body-style="{ padding: '16px' }" class="card" v-loading="emailSettingsLoading"> <el-card shadow="never" :body-style="{ padding: '16px' }" class="card" v-loading="emailSettingsLoading">
@@ -681,10 +668,38 @@ onMounted(refreshAll)
<el-card shadow="never" :body-style="{ padding: '16px' }" class="card" v-loading="emailStatsLoading"> <el-card shadow="never" :body-style="{ padding: '16px' }" class="card" v-loading="emailStatsLoading">
<h3 class="section-title">邮件发送统计</h3> <h3 class="section-title">邮件发送统计</h3>
<MetricGrid :items="emailSummaryCards" :loading="emailStatsLoading" :min-width="160" /> <el-row :gutter="12">
<el-col :xs="12" :sm="6">
<el-card shadow="never" class="stat-card" :body-style="{ padding: '14px' }">
<div class="stat-value">{{ emailStats.total_sent || 0 }}</div>
<div class="stat-label">总发送</div>
</el-card>
</el-col>
<el-col :xs="12" :sm="6">
<el-card shadow="never" class="stat-card" :body-style="{ padding: '14px' }">
<div class="stat-value ok">{{ emailStats.total_success || 0 }}</div>
<div class="stat-label">成功</div>
</el-card>
</el-col>
<el-col :xs="12" :sm="6">
<el-card shadow="never" class="stat-card" :body-style="{ padding: '14px' }">
<div class="stat-value err">{{ emailStats.total_failed || 0 }}</div>
<div class="stat-label">失败</div>
</el-card>
</el-col>
<el-col :xs="12" :sm="6">
<el-card shadow="never" class="stat-card" :body-style="{ padding: '14px' }">
<div class="stat-value">{{ emailStats.success_rate || 0 }}%</div>
<div class="stat-label">成功率</div>
</el-card>
</el-col>
</el-row>
<div class="sub-stats"> <div class="sub-stats">
<MetricGrid :items="emailTypeCards" :loading="emailStatsLoading" :min-width="150" /> <el-tag effect="light">注册验证 {{ emailStats.register_sent || 0 }}</el-tag>
<el-tag effect="light">密码重置 {{ emailStats.reset_sent || 0 }}</el-tag>
<el-tag effect="light">邮箱绑定 {{ emailStats.bind_sent || 0 }}</el-tag>
<el-tag effect="light">任务完成 {{ emailStats.task_complete_sent || 0 }}</el-tag>
</div> </div>
<div class="help app-muted">最后更新{{ emailStats.last_updated || '-' }}</div> <div class="help app-muted">最后更新{{ emailStats.last_updated || '-' }}</div>
@@ -838,8 +853,7 @@ onMounted(refreshAll)
.page-stack { .page-stack {
display: flex; display: flex;
flex-direction: column; flex-direction: column;
gap: 14px; gap: 12px;
min-width: 0;
} }
.toolbar { .toolbar {
@@ -852,8 +866,6 @@ onMounted(refreshAll)
.card { .card {
border-radius: var(--app-radius); border-radius: var(--app-radius);
border: 1px solid var(--app-border); border: 1px solid var(--app-border);
background: var(--app-card-bg);
box-shadow: var(--app-shadow-soft);
} }
.section-head { .section-head {
@@ -867,9 +879,8 @@ onMounted(refreshAll)
.section-title { .section-title {
margin: 0; margin: 0;
font-size: 15px; font-size: 14px;
font-weight: 800; font-weight: 800;
letter-spacing: 0.2px;
} }
.help { .help {
@@ -880,13 +891,37 @@ onMounted(refreshAll)
.table-wrap { .table-wrap {
overflow-x: auto; overflow-x: auto;
border-radius: 10px;
border: 1px solid var(--app-border);
background: #fff;
} }
.stat-card {
border-radius: var(--app-radius);
border: 1px solid var(--app-border);
}
.stat-value {
font-size: 20px;
font-weight: 900;
line-height: 1.1;
}
.stat-label {
margin-top: 6px;
font-size: 12px;
color: var(--app-muted);
}
.ok {
color: #047857;
}
.err {
color: #b91c1c;
}
.sub-stats { .sub-stats {
display: flex;
flex-wrap: wrap;
gap: 8px;
margin-top: 12px; margin-top: 12px;
} }


@@ -1,9 +1,8 @@
<script setup> <script setup>
import { computed, inject, onMounted, ref } from 'vue' import { inject, onMounted, ref } from 'vue'
import { ElMessage, ElMessageBox } from 'element-plus' import { ElMessage, ElMessageBox } from 'element-plus'
import { closeFeedback, deleteFeedback, fetchFeedbacks, replyFeedback } from '../api/feedbacks' import { closeFeedback, deleteFeedback, fetchFeedbacks, replyFeedback } from '../api/feedbacks'
import MetricGrid from '../components/MetricGrid.vue'
const refreshNavBadges = inject('refreshNavBadges', null) const refreshNavBadges = inject('refreshNavBadges', null)
@@ -19,13 +18,6 @@ const statusOptions = [
{ label: '已关闭', value: 'closed' }, { label: '已关闭', value: 'closed' },
] ]
const metricItems = computed(() => [
{ key: 'total', label: '总反馈', value: stats.value.total || 0, tone: 'blue' },
{ key: 'pending', label: '待处理', value: stats.value.pending || 0, tone: 'orange' },
{ key: 'replied', label: '已回复', value: stats.value.replied || 0, tone: 'green' },
{ key: 'closed', label: '已关闭', value: stats.value.closed || 0, tone: 'purple' },
])
function statusMeta(status) { function statusMeta(status) {
if (status === 'pending') return { label: '待处理', type: 'warning' } if (status === 'pending') return { label: '待处理', type: 'warning' }
if (status === 'replied') return { label: '已回复', type: 'success' } if (status === 'replied') return { label: '已回复', type: 'success' }
@@ -125,17 +117,38 @@ onMounted(load)
<el-select v-model="statusFilter" style="width: 160px" @change="load"> <el-select v-model="statusFilter" style="width: 160px" @change="load">
<el-option v-for="o in statusOptions" :key="o.value" :label="o.label" :value="o.value" /> <el-option v-for="o in statusOptions" :key="o.value" :label="o.label" :value="o.value" />
</el-select> </el-select>
<el-button @click="load">刷新</el-button>
</div> </div>
</div> </div>
<MetricGrid :items="metricItems" :loading="loading" :min-width="165" /> <el-row :gutter="12">
<el-col :xs="12" :sm="6">
<el-card shadow="never" class="stat-card" :body-style="{ padding: '14px' }">
<div class="stat-value">{{ stats.total || 0 }}</div>
<div class="stat-label">总计</div>
</el-card>
</el-col>
<el-col :xs="12" :sm="6">
<el-card shadow="never" class="stat-card" :body-style="{ padding: '14px' }">
<div class="stat-value warn">{{ stats.pending || 0 }}</div>
<div class="stat-label">待处理</div>
</el-card>
</el-col>
<el-col :xs="12" :sm="6">
<el-card shadow="never" class="stat-card" :body-style="{ padding: '14px' }">
<div class="stat-value ok">{{ stats.replied || 0 }}</div>
<div class="stat-label">已回复</div>
</el-card>
</el-col>
<el-col :xs="12" :sm="6">
<el-card shadow="never" class="stat-card" :body-style="{ padding: '14px' }">
<div class="stat-value">{{ stats.closed || 0 }}</div>
<div class="stat-label">已关闭</div>
</el-card>
</el-col>
</el-row>
<el-card shadow="never" :body-style="{ padding: '16px' }" class="card"> <el-card shadow="never" :body-style="{ padding: '16px' }" class="card">
<div class="section-head">
<h3 class="section-title">反馈列表</h3>
<div class="app-muted"> {{ list.length }} 当前筛选</div>
</div>
<div class="table-wrap"> <div class="table-wrap">
<el-table :data="list" v-loading="loading" style="width: 100%"> <el-table :data="list" v-loading="loading" style="width: 100%">
<el-table-column prop="id" label="ID" width="80" /> <el-table-column prop="id" label="ID" width="80" />
@@ -191,44 +204,43 @@ onMounted(load)
.page-stack { .page-stack {
display: flex; display: flex;
flex-direction: column; flex-direction: column;
gap: 14px; gap: 12px;
min-width: 0;
} }
.toolbar { .toolbar {
display: flex; display: flex;
gap: 10px; gap: 10px;
align-items: center; align-items: center;
flex-wrap: wrap;
} }
.card { .card,
.stat-card {
border-radius: var(--app-radius); border-radius: var(--app-radius);
border: 1px solid var(--app-border); border: 1px solid var(--app-border);
background: var(--app-card-bg);
box-shadow: var(--app-shadow-soft);
} }
.section-head { .stat-value {
display: flex; font-size: 20px;
align-items: center;
justify-content: space-between;
gap: 12px;
margin-bottom: 12px;
flex-wrap: wrap;
}
.section-title {
margin: 0;
font-size: 15px;
font-weight: 800; font-weight: 800;
line-height: 1.1;
}
.stat-label {
margin-top: 6px;
font-size: 12px;
color: var(--app-muted);
}
.warn {
color: #b45309;
}
.ok {
color: #047857;
} }
.table-wrap { .table-wrap {
overflow-x: auto; overflow-x: auto;
border-radius: 10px;
border: 1px solid var(--app-border);
background: #fff;
} }
.ellipsis { .ellipsis {


@@ -143,6 +143,9 @@ onMounted(async () => {
<div class="page-stack"> <div class="page-stack">
<div class="app-page-title"> <div class="app-page-title">
<h2>任务日志</h2> <h2>任务日志</h2>
<div class="toolbar">
<el-button @click="load">刷新</el-button>
</div>
</div> </div>
<el-card shadow="never" :body-style="{ padding: '16px' }" class="card"> <el-card shadow="never" :body-style="{ padding: '16px' }" class="card">
@@ -246,15 +249,12 @@ onMounted(async () => {
.page-stack { .page-stack {
display: flex; display: flex;
flex-direction: column; flex-direction: column;
gap: 14px; gap: 12px;
min-width: 0;
} }
.card { .card {
border-radius: var(--app-radius); border-radius: var(--app-radius);
border: 1px solid var(--app-border); border: 1px solid var(--app-border);
background: var(--app-card-bg);
box-shadow: var(--app-shadow-soft);
} }
.filters { .filters {
@@ -266,9 +266,6 @@ onMounted(async () => {
.table-wrap { .table-wrap {
overflow-x: auto; overflow-x: auto;
border-radius: 10px;
border: 1px solid var(--app-border);
background: #fff;
} }
.ellipsis { .ellipsis {

File diff suppressed because it is too large


@@ -16,7 +16,6 @@ import {
unbanIp, unbanIp,
unbanUser, unbanUser,
} from '../api/security' } from '../api/security'
import MetricGrid from '../components/MetricGrid.vue'
const pageSize = 20 const pageSize = 20
@@ -120,27 +119,9 @@ const threatTypeOptions = computed(() => {
const dashboardCards = computed(() => { const dashboardCards = computed(() => {
const d = dashboard.value || {} const d = dashboard.value || {}
return [ return [
{ { key: 'threat_events_24h', label: '最近24小时威胁事件', value: normalizeCount(d.threat_events_24h) },
key: 'threat_events_24h', { key: 'banned_ip_count', label: '当前封禁IP数', value: normalizeCount(d.banned_ip_count) },
label: '最近24小时威胁事件', { key: 'banned_user_count', label: '当前封禁用户数', value: normalizeCount(d.banned_user_count) },
value: normalizeCount(d.threat_events_24h),
tone: 'red',
hint: '用于衡量当前攻击面活跃度',
},
{
key: 'banned_ip_count',
label: '当前封禁 IP 数',
value: normalizeCount(d.banned_ip_count),
tone: 'orange',
hint: '自动与人工封禁总量',
},
{
key: 'banned_user_count',
label: '当前封禁用户数',
value: normalizeCount(d.banned_user_count),
tone: 'purple',
hint: '高风险账户拦截情况',
},
] ]
}) })
@@ -465,12 +446,23 @@ onMounted(async () => {
<div class="app-page-title"> <div class="app-page-title">
<h2>安全防护</h2> <h2>安全防护</h2>
<div class="toolbar"> <div class="toolbar">
<el-button @click="refreshAll">刷新</el-button>
<el-button type="warning" plain :loading="cleanupLoading" @click="onCleanup">清理过期记录</el-button> <el-button type="warning" plain :loading="cleanupLoading" @click="onCleanup">清理过期记录</el-button>
<el-button type="primary" @click="openBanDialog()">手动封禁</el-button> <el-button type="primary" @click="openBanDialog()">手动封禁</el-button>
</div> </div>
</div> </div>
<MetricGrid :items="dashboardCards" :loading="dashboardLoading" :min-width="220" /> <el-row :gutter="12" class="stats-row">
<el-col v-for="it in dashboardCards" :key="it.key" :xs="24" :sm="8" :md="8" :lg="8" :xl="8">
<el-card shadow="never" class="stat-card" :body-style="{ padding: '14px' }">
<div class="stat-value">
<el-skeleton v-if="dashboardLoading" :rows="1" animated />
<template v-else>{{ it.value }}</template>
</div>
<div class="stat-label">{{ it.label }}</div>
</el-card>
</el-col>
</el-row>
<el-card shadow="never" :body-style="{ padding: '16px' }" class="card"> <el-card shadow="never" :body-style="{ padding: '16px' }" class="card">
<el-tabs v-model="activeTab"> <el-tabs v-model="activeTab">
@@ -567,6 +559,7 @@ onMounted(async () => {
<el-tab-pane label="封禁管理" name="bans"> <el-tab-pane label="封禁管理" name="bans">
<div class="toolbar"> <div class="toolbar">
<el-button @click="loadBans">刷新封禁列表</el-button>
<el-button type="primary" @click="openBanDialog()">手动封禁</el-button> <el-button type="primary" @click="openBanDialog()">手动封禁</el-button>
</div> </div>
@@ -738,8 +731,7 @@ onMounted(async () => {
.page-stack { .page-stack {
display: flex; display: flex;
flex-direction: column; flex-direction: column;
gap: 14px; gap: 12px;
min-width: 0;
} }
.toolbar { .toolbar {
@@ -749,12 +741,13 @@ onMounted(async () => {
flex-wrap: wrap; flex-wrap: wrap;
} }
.stats-row {
margin-bottom: 2px;
}
.card { .card {
border-radius: var(--app-radius); border-radius: var(--app-radius);
border: 1px solid var(--app-border); border: 1px solid var(--app-border);
background: var(--app-card-bg);
box-shadow: var(--app-shadow-soft);
} }
.sub-card { .sub-card {
@@ -763,6 +756,23 @@ onMounted(async () => {
border: 1px solid var(--app-border); border: 1px solid var(--app-border);
} }
.stat-card {
border-radius: var(--app-radius);
border: 1px solid var(--app-border);
box-shadow: var(--app-shadow);
}
.stat-value {
font-size: 22px;
font-weight: 800;
line-height: 1.1;
}
.stat-label {
margin-top: 6px;
font-size: 12px;
color: var(--app-muted);
}
.filters { .filters {
display: flex; display: flex;
@@ -774,9 +784,6 @@ onMounted(async () => {
.table-wrap { .table-wrap {
overflow-x: auto; overflow-x: auto;
border-radius: 10px;
border: 1px solid var(--app-border);
background: #fff;
} }
.ellipsis { .ellipsis {


@@ -1,31 +1,12 @@
 <script setup>
-import { onMounted, ref } from 'vue'
+import { ref } from 'vue'
 import { ElMessage, ElMessageBox } from 'element-plus'
-import {
-  createAdminPasskeyOptions,
-  createAdminPasskeyVerify,
-  deleteAdminPasskey,
-  fetchAdminPasskeys,
-  logout,
-  reportAdminPasskeyClientError,
-  updateAdminPassword,
-  updateAdminUsername,
-} from '../api/admin'
-import { createPasskey, getPasskeyClientErrorMessage, isPasskeyAvailable } from '../utils/passkey'
+import { logout, updateAdminPassword, updateAdminUsername } from '../api/admin'
 const username = ref('')
-const currentPassword = ref('')
 const password = ref('')
-const confirmPassword = ref('')
 const submitting = ref(false)
-const passkeyLoading = ref(false)
-const passkeyAddLoading = ref(false)
-const passkeyDeviceName = ref('')
-const passkeyItems = ref([])
-const passkeyRegisterOptions = ref(null)
-const passkeyRegisterOptionsAt = ref(0)
-const PASSKEY_OPTIONS_PREFETCH_MAX_AGE_MS = 240000
 function validateStrongPassword(value) {
   const text = String(value || '')
@@ -76,31 +57,17 @@ async function saveUsername() {
 }
 async function savePassword() {
-  const currentValue = currentPassword.value
   const value = password.value
-  const confirmValue = confirmPassword.value
-  if (!currentValue) {
-    ElMessage.error('请输入当前密码')
-    return
-  }
   if (!value) {
     ElMessage.error('请输入新密码')
     return
   }
   const check = validateStrongPassword(value)
   if (!check.ok) {
     ElMessage.error(check.message)
     return
   }
-  if (value !== confirmValue) {
-    ElMessage.error('两次输入的新密码不一致')
-    return
-  }
   try {
     await ElMessageBox.confirm('确定修改管理员密码吗?修改后需要重新登录。', '修改密码', {
       confirmButtonText: '确认修改',
@@ -113,11 +80,9 @@ async function savePassword() {
   submitting.value = true
   try {
-    await updateAdminPassword({ currentPassword: currentValue, newPassword: value })
+    await updateAdminPassword(value)
     ElMessage.success('密码修改成功,请重新登录')
-    currentPassword.value = ''
     password.value = ''
-    confirmPassword.value = ''
     setTimeout(relogin, 1200)
   } catch {
     // handled by interceptor
@@ -125,117 +90,6 @@ async function savePassword() {
     submitting.value = false
   }
 }
-async function loadPasskeys() {
-  passkeyLoading.value = true
-  try {
-    const data = await fetchAdminPasskeys()
-    passkeyItems.value = Array.isArray(data?.items) ? data.items : []
-    if (passkeyItems.value.length < 3) {
-      await prefetchPasskeyRegisterOptions()
-    } else {
-      passkeyRegisterOptions.value = null
-      passkeyRegisterOptionsAt.value = 0
-    }
-  } catch {
-    passkeyItems.value = []
-    passkeyRegisterOptions.value = null
-    passkeyRegisterOptionsAt.value = 0
-  } finally {
-    passkeyLoading.value = false
-  }
-}
-function getCachedPasskeyRegisterOptions() {
-  if (!passkeyRegisterOptions.value) return null
-  if (Date.now() - Number(passkeyRegisterOptionsAt.value || 0) > PASSKEY_OPTIONS_PREFETCH_MAX_AGE_MS) return null
-  return passkeyRegisterOptions.value
-}
-async function prefetchPasskeyRegisterOptions() {
-  try {
-    const res = await createAdminPasskeyOptions({})
-    passkeyRegisterOptions.value = res
-    passkeyRegisterOptionsAt.value = Date.now()
-  } catch {
-    passkeyRegisterOptions.value = null
-    passkeyRegisterOptionsAt.value = 0
-  }
-}
-async function addPasskey() {
-  if (!isPasskeyAvailable()) {
-    ElMessage.error('当前浏览器或环境不支持Passkey(需 HTTPS)')
-    return
-  }
-  if (passkeyItems.value.length >= 3) {
-    ElMessage.error('最多可绑定3台设备')
-    return
-  }
-  passkeyAddLoading.value = true
-  try {
-    let optionsRes = getCachedPasskeyRegisterOptions()
-    if (!optionsRes) {
-      optionsRes = await createAdminPasskeyOptions({})
-    }
-    const credential = await createPasskey(optionsRes?.publicKey || {})
-    await createAdminPasskeyVerify({ credential, device_name: passkeyDeviceName.value.trim() })
-    passkeyRegisterOptions.value = null
-    passkeyRegisterOptionsAt.value = 0
-    passkeyDeviceName.value = ''
-    ElMessage.success('Passkey设备添加成功')
-    await loadPasskeys()
-  } catch (e) {
-    try {
-      await reportAdminPasskeyClientError({
-        stage: 'register',
-        source: 'admin-settings',
-        name: e?.name || '',
-        message: e?.message || '',
-        code: e?.code || '',
-        user_agent: navigator.userAgent || '',
-      })
-    } catch {
-      // ignore report failure
-    }
-    passkeyRegisterOptions.value = null
-    passkeyRegisterOptionsAt.value = 0
-    await prefetchPasskeyRegisterOptions()
-    const data = e?.response?.data
-    const message =
-      data?.error ||
-      getPasskeyClientErrorMessage(e, 'Passkey注册')
-    ElMessage.error(message)
-  } finally {
-    passkeyAddLoading.value = false
-  }
-}
-async function removePasskey(item) {
-  try {
-    await ElMessageBox.confirm(`确定删除设备「${item?.device_name || '未命名设备'}」吗?`, '删除Passkey设备', {
-      confirmButtonText: '删除',
-      cancelButtonText: '取消',
-      type: 'warning',
-    })
-  } catch {
-    return
-  }
-  try {
-    await deleteAdminPasskey(item.id)
-    ElMessage.success('设备已删除')
-    await loadPasskeys()
-  } catch (e) {
-    const data = e?.response?.data
-    ElMessage.error(data?.error || '删除失败')
-  }
-}
-onMounted(() => {
-  loadPasskeys()
-})
 </script>
 <template>
@@ -258,16 +112,6 @@ onMounted(() => {
     <el-card shadow="never" :body-style="{ padding: '16px' }" class="card">
       <h3 class="section-title">修改管理员密码</h3>
       <el-form label-width="120px">
-        <el-form-item label="当前密码">
-          <el-input
-            v-model="currentPassword"
-            type="password"
-            show-password
-            placeholder="输入当前密码"
-            :disabled="submitting"
-          />
-        </el-form-item>
         <el-form-item label="新密码">
           <el-input
             v-model="password"
@@ -277,60 +121,10 @@ onMounted(() => {
             :disabled="submitting"
           />
         </el-form-item>
-        <el-form-item label="确认新密码">
-          <el-input
-            v-model="confirmPassword"
-            type="password"
-            show-password
-            placeholder="再次输入新密码"
-            :disabled="submitting"
-          />
-        </el-form-item>
       </el-form>
       <el-button type="primary" :loading="submitting" @click="savePassword">保存密码</el-button>
       <div class="help">建议使用更强密码(至少8位,且包含字母与数字)</div>
     </el-card>
-    <el-card shadow="never" :body-style="{ padding: '16px' }" class="card">
-      <h3 class="section-title">Passkey设备</h3>
-      <el-alert
-        type="info"
-        :closable="false"
-        title="最多可绑定3台设备,可用于管理员无密码登录。"
-        show-icon
-        class="help-alert"
-      />
-      <el-form inline>
-        <el-form-item label="设备备注">
-          <el-input
-            v-model="passkeyDeviceName"
-            placeholder="例如:值班iPhone / 办公Mac"
-            maxlength="40"
-            show-word-limit
-          />
-        </el-form-item>
-        <el-form-item>
-          <el-button type="primary" :loading="passkeyAddLoading" @click="addPasskey">添加Passkey设备</el-button>
-        </el-form-item>
-      </el-form>
-      <div v-loading="passkeyLoading">
-        <el-empty v-if="passkeyItems.length === 0" description="暂无Passkey设备" />
-        <el-table v-else :data="passkeyItems" size="small" style="width: 100%">
-          <el-table-column prop="device_name" label="设备备注" min-width="160" />
-          <el-table-column prop="credential_id_preview" label="凭据ID" min-width="180" />
-          <el-table-column prop="last_used_at" label="最近使用" min-width="140" />
-          <el-table-column prop="created_at" label="创建时间" min-width="140" />
-          <el-table-column label="操作" width="100" fixed="right">
-            <template #default="{ row }">
-              <el-button type="danger" text @click="removePasskey(row)">删除</el-button>
-            </template>
-          </el-table-column>
-        </el-table>
-      </div>
-    </el-card>
   </div>
 </template>
@@ -338,22 +132,18 @@ onMounted(() => {
 .page-stack {
   display: flex;
   flex-direction: column;
-  gap: 14px;
-  min-width: 0;
+  gap: 12px;
 }
 .card {
   border-radius: var(--app-radius);
   border: 1px solid var(--app-border);
-  background: var(--app-card-bg);
-  box-shadow: var(--app-shadow-soft);
 }
 .section-title {
   margin: 0 0 12px;
-  font-size: 15px;
+  font-size: 14px;
   font-weight: 800;
-  letter-spacing: 0.2px;
 }
 .help {
@@ -361,8 +151,4 @@ onMounted(() => {
   font-size: 12px;
   color: var(--app-muted);
 }
-.help-alert {
-  margin-bottom: 12px;
-}
 </style>
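The `validateStrongPassword` helper this file calls is only partially visible in the diff (just its first two lines). A rough standalone sketch of the rule implied by the help text ("至少8位,且包含字母与数字" / at least 8 characters, containing letters and digits) might look like the following — this is an assumption for illustration, the real implementation may differ:

```javascript
// Hypothetical sketch of validateStrongPassword; the actual component's
// implementation is not fully shown in this diff and may add more rules.
function validateStrongPassword(value) {
  const text = String(value || '')
  if (text.length < 8) return { ok: false, message: '密码至少8位' }
  if (!/[A-Za-z]/.test(text)) return { ok: false, message: '密码需包含字母' }
  if (!/\d/.test(text)) return { ok: false, message: '密码需包含数字' }
  return { ok: true, message: '' }
}
```

Returning an `{ ok, message }` pair (rather than throwing) matches how `savePassword` above consumes the result: `if (!check.ok) ElMessage.error(check.message)`.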

View File

@@ -2,26 +2,35 @@
 import { computed, onBeforeUnmount, onMounted, ref, watch } from 'vue'
 import { ElMessage, ElMessageBox } from 'element-plus'
-import { fetchSystemConfig, updateSystemConfig } from '../api/system'
+import { fetchSystemConfig, updateSystemConfig, executeScheduleNow } from '../api/system'
 import { fetchKdocsQr, fetchKdocsStatus, clearKdocsLogin } from '../api/kdocs'
 import { fetchProxyConfig, testProxy, updateProxyConfig } from '../api/proxy'
-import { getCachedKdocsStatus, preloadKdocsStatus, updateCachedKdocsStatus } from '../utils/kdocsStatusCache'
 const loading = ref(false)
+// 并发
 const maxConcurrentGlobal = ref(2)
 const maxConcurrentPerAccount = ref(1)
 const maxScreenshotConcurrent = ref(3)
-const dbSlowQueryMs = ref(120)
+// 定时
+const scheduleEnabled = ref(false)
+const scheduleTime = ref('02:00')
+const scheduleBrowseType = ref('应读')
+const scheduleWeekdays = ref(['1', '2', '3', '4', '5', '6', '7'])
+const scheduleScreenshotEnabled = ref(true)
+// 代理
 const proxyEnabled = ref(false)
 const proxyApiUrl = ref('')
 const proxyExpireMinutes = ref(3)
+// 自动审核
 const autoApproveEnabled = ref(false)
 const autoApproveHourlyLimit = ref(10)
 const autoApproveVipDays = ref(7)
+// 金山文档上传
 const kdocsEnabled = ref(false)
 const kdocsDocUrl = ref('')
 const kdocsDefaultUnit = ref('')
@@ -29,47 +38,51 @@ const kdocsSheetName = ref('')
 const kdocsSheetIndex = ref(0)
 const kdocsUnitColumn = ref('A')
 const kdocsImageColumn = ref('D')
-const kdocsRowStart = ref(0)
-const kdocsRowEnd = ref(0)
 const kdocsAdminNotifyEnabled = ref(false)
 const kdocsAdminNotifyEmail = ref('')
-const initialKdocsStatus = getCachedKdocsStatus({ maxAgeMs: 10 * 60 * 1000 })
-const kdocsStatus = ref(initialKdocsStatus || {})
+const kdocsStatus = ref({})
 const kdocsQrOpen = ref(false)
 const kdocsQrImage = ref('')
 const kdocsPolling = ref(false)
 const kdocsStatusLoading = ref(false)
 const kdocsQrLoading = ref(false)
 const kdocsClearLoading = ref(false)
-const kdocsSilentRefreshing = ref(!initialKdocsStatus)
 const kdocsActionHint = ref('')
 let kdocsPollingTimer = null
+const weekdaysOptions = [
+  { label: '周一', value: '1' },
+  { label: '周二', value: '2' },
+  { label: '周三', value: '3' },
+  { label: '周四', value: '4' },
+  { label: '周五', value: '5' },
+  { label: '周六', value: '6' },
+  { label: '周日', value: '7' },
+]
+const weekdayNames = {
+  1: '周一',
+  2: '周二',
+  3: '周三',
+  4: '周四',
+  5: '周五',
+  6: '周六',
+  7: '周日',
+}
+const scheduleWeekdayDisplay = computed(() =>
+  (scheduleWeekdays.value || [])
+    .map((d) => weekdayNames[Number(d)] || d)
+    .join('、'),
+)
 const kdocsActionBusy = computed(
   () => kdocsStatusLoading.value || kdocsQrLoading.value || kdocsClearLoading.value,
 )
-const kdocsDetecting = computed(
-  () => kdocsSilentRefreshing.value || kdocsStatusLoading.value || kdocsPolling.value,
-)
-const kdocsStatusText = computed(() => {
-  if (kdocsDetecting.value) return '检测中'
-  const status = kdocsStatus.value || {}
-  if (status?.logged_in === true || status?.last_login_ok === true) return '已登录'
-  if (status?.logged_in === false || status?.last_login_ok === false || status?.login_required === true) return '未登录'
-  if (status?.last_error) return '异常'
-  return '未知'
-})
-const kdocsStatusClass = computed(() => {
-  if (kdocsDetecting.value) return 'is-checking'
-  if (kdocsStatusText.value === '已登录') return 'is-online'
-  if (kdocsStatusText.value === '未登录') return 'is-offline'
-  if (kdocsStatusText.value === '异常') return 'is-error'
-  return 'is-unknown'
-})
+function normalizeBrowseType(value) {
+  if (String(value) === '注册前未读') return '注册前未读'
+  return '应读'
+}
 function setKdocsHint(message) {
   if (!message) {
@@ -83,15 +96,26 @@ function setKdocsHint(message) {
 async function loadAll() {
   loading.value = true
   try {
-    const [system, proxy] = await Promise.all([
+    const [system, proxy, kdocsInfo] = await Promise.all([
       fetchSystemConfig(),
       fetchProxyConfig(),
+      fetchKdocsStatus().catch(() => ({})),
     ])
     maxConcurrentGlobal.value = system.max_concurrent_global ?? 2
     maxConcurrentPerAccount.value = system.max_concurrent_per_account ?? 1
     maxScreenshotConcurrent.value = system.max_screenshot_concurrent ?? 3
-    dbSlowQueryMs.value = system.db_slow_query_ms ?? 120
+    scheduleEnabled.value = (system.schedule_enabled ?? 0) === 1
+    scheduleTime.value = system.schedule_time || '02:00'
+    scheduleBrowseType.value = normalizeBrowseType(system.schedule_browse_type)
+    const weekdays = String(system.schedule_weekdays || '1,2,3,4,5,6,7')
+      .split(',')
+      .map((x) => x.trim())
+      .filter(Boolean)
+    scheduleWeekdays.value = weekdays.length ? weekdays : ['1', '2', '3', '4', '5', '6', '7']
+    scheduleScreenshotEnabled.value = (system.enable_screenshot ?? 1) === 1
     autoApproveEnabled.value = (system.auto_approve_enabled ?? 0) === 1
     autoApproveHourlyLimit.value = system.auto_approve_hourly_limit ?? 10
@@ -108,42 +132,14 @@ async function loadAll() {
     kdocsSheetIndex.value = system.kdocs_sheet_index ?? 0
     kdocsUnitColumn.value = (system.kdocs_unit_column || 'A').toUpperCase()
     kdocsImageColumn.value = (system.kdocs_image_column || 'D').toUpperCase()
-    kdocsRowStart.value = system.kdocs_row_start ?? 0
-    kdocsRowEnd.value = system.kdocs_row_end ?? 0
     kdocsAdminNotifyEnabled.value = (system.kdocs_admin_notify_enabled ?? 0) === 1
     kdocsAdminNotifyEmail.value = system.kdocs_admin_notify_email || ''
+    kdocsStatus.value = kdocsInfo || {}
   } catch {
     // handled by interceptor
   } finally {
     loading.value = false
   }
-  const cachedStatus = getCachedKdocsStatus({ maxAgeMs: 10 * 60 * 1000 })
-  if (cachedStatus) {
-    kdocsStatus.value = cachedStatus
-    kdocsSilentRefreshing.value = false
-  }
-  // 静默刷新金山登录状态,确保状态持续更新且不阻塞首屏。
-  void refreshKdocsStatusSilently()
-}
-async function refreshKdocsStatusSilently() {
-  if (kdocsSilentRefreshing.value || kdocsStatusLoading.value) return
-  kdocsSilentRefreshing.value = true
-  try {
-    const status = await preloadKdocsStatus({
-      force: false,
-      maxAgeMs: 60_000,
-      silent: true,
-      live: 0,
-    })
-    kdocsStatus.value = status || {}
-  } catch {
-    // silent mode
-  } finally {
-    kdocsSilentRefreshing.value = false
-  }
 }
 async function saveConcurrency() {
@@ -151,12 +147,11 @@ async function saveConcurrency() {
     max_concurrent_global: Number(maxConcurrentGlobal.value),
     max_concurrent_per_account: Number(maxConcurrentPerAccount.value),
     max_screenshot_concurrent: Number(maxScreenshotConcurrent.value),
-    db_slow_query_ms: Number(dbSlowQueryMs.value),
   }
   try {
     await ElMessageBox.confirm(
-      `确定更新并发配置吗?\n\n全局并发数: ${payload.max_concurrent_global}\n单账号并发数: ${payload.max_concurrent_per_account}\n截图并发数: ${payload.max_screenshot_concurrent}\n慢 SQL 阈值: ${payload.db_slow_query_ms}ms`,
+      `确定更新并发配置吗?\n\n全局并发数: ${payload.max_concurrent_global}\n单账号并发数: ${payload.max_concurrent_per_account}\n截图并发数: ${payload.max_screenshot_concurrent}`,
       '保存并发配置',
       { confirmButtonText: '保存', cancelButtonText: '取消', type: 'warning' },
     )
@@ -172,6 +167,63 @@ async function saveConcurrency() {
   }
 }
+async function saveSchedule() {
+  if (scheduleEnabled.value && (!scheduleWeekdays.value || scheduleWeekdays.value.length === 0)) {
+    ElMessage.error('请至少选择一个执行日期')
+    return
+  }
+  const payload = {
+    schedule_enabled: scheduleEnabled.value ? 1 : 0,
+    schedule_time: scheduleTime.value,
+    schedule_browse_type: scheduleBrowseType.value,
+    schedule_weekdays: (scheduleWeekdays.value || []).join(','),
+    enable_screenshot: scheduleScreenshotEnabled.value ? 1 : 0,
+  }
+  const screenshotText = scheduleScreenshotEnabled.value ? '截图' : '不截图'
+  const message = scheduleEnabled.value
+    ? `确定启用定时任务吗?\n\n执行时间: 每天 ${payload.schedule_time}\n执行日期: ${scheduleWeekdayDisplay.value}\n浏览类型: ${payload.schedule_browse_type}\n截图: ${screenshotText}\n\n系统将自动执行所有账号的浏览任务`
+    : '确定关闭定时任务吗?'
+  try {
+    await ElMessageBox.confirm(message, '保存定时任务', {
+      confirmButtonText: '确认',
+      cancelButtonText: '取消',
+      type: 'warning',
+    })
+  } catch {
+    return
+  }
+  try {
+    const res = await updateSystemConfig(payload)
+    ElMessage.success(res?.message || (scheduleEnabled.value ? '定时任务已启用' : '定时任务已关闭'))
+  } catch {
+    // handled by interceptor
+  }
+}
+async function runScheduleNow() {
+  const msg = `确定要立即执行定时任务吗?\n\n这将执行所有账号的浏览任务\n浏览类型: ${scheduleBrowseType.value}\n\n注意:无视定时时间和执行日期配置,立即开始执行`
+  try {
+    await ElMessageBox.confirm(msg, '立即执行', {
+      confirmButtonText: '立即执行',
+      cancelButtonText: '取消',
+      type: 'warning',
+    })
+  } catch {
+    return
+  }
+  try {
+    const res = await executeScheduleNow()
+    ElMessage.success(res?.message || '定时任务已开始执行')
+  } catch {
+    // handled by interceptor
+  }
+}
 async function saveProxy() {
   if (proxyEnabled.value && !proxyApiUrl.value.trim()) {
     ElMessage.error('启用代理时,API地址不能为空')
@@ -192,47 +244,6 @@ async function saveProxy() {
   }
 }
-async function onTestProxy() {
-  if (!proxyApiUrl.value.trim()) {
-    ElMessage.error('请先输入代理API地址')
-    return
-  }
-  try {
-    const res = await testProxy({ api_url: proxyApiUrl.value.trim() })
-    await ElMessageBox.alert(res?.message || '测试完成', '代理测试', { confirmButtonText: '知道了' })
-  } catch {
-    // handled by interceptor
-  }
-}
-async function saveAutoApprove() {
-  const hourly = Number(autoApproveHourlyLimit.value)
-  const vipDays = Number(autoApproveVipDays.value)
-  if (!Number.isFinite(hourly) || hourly < 1) {
-    ElMessage.error('每小时注册限制必须大于0')
-    return
-  }
-  if (!Number.isFinite(vipDays) || vipDays < 0) {
-    ElMessage.error('VIP天数不能为负数')
-    return
-  }
-  const payload = {
-    auto_approve_enabled: autoApproveEnabled.value ? 1 : 0,
-    auto_approve_hourly_limit: hourly,
-    auto_approve_vip_days: vipDays,
-  }
-  try {
-    const res = await updateSystemConfig(payload)
-    ElMessage.success(res?.message || '注册设置已保存')
-  } catch {
-    // handled by interceptor
-  }
-}
 async function saveKdocsConfig() {
   const payload = {
     kdocs_enabled: kdocsEnabled.value ? 1 : 0,
@@ -242,8 +253,6 @@ async function saveKdocsConfig() {
     kdocs_sheet_index: Number(kdocsSheetIndex.value) || 0,
     kdocs_unit_column: kdocsUnitColumn.value.trim().toUpperCase(),
     kdocs_image_column: kdocsImageColumn.value.trim().toUpperCase(),
-    kdocs_row_start: Number(kdocsRowStart.value) || 0,
-    kdocs_row_end: Number(kdocsRowEnd.value) || 0,
     kdocs_admin_notify_enabled: kdocsAdminNotifyEnabled.value ? 1 : 0,
     kdocs_admin_notify_email: kdocsAdminNotifyEmail.value.trim(),
   }
@@ -261,9 +270,7 @@ async function refreshKdocsStatus() {
   kdocsStatusLoading.value = true
   setKdocsHint('正在刷新状态…')
   try {
-    const status = await fetchKdocsStatus({ live: 1 })
-    kdocsStatus.value = status || {}
-    updateCachedKdocsStatus(kdocsStatus.value)
+    kdocsStatus.value = await fetchKdocsStatus({ live: 1 })
     setKdocsHint('状态已刷新')
   } catch {
     setKdocsHint('刷新失败,请稍后重试')
@@ -275,8 +282,7 @@ async function refreshKdocsStatus() {
 async function pollKdocsStatus() {
   try {
     const status = await fetchKdocsStatus({ live: 1 })
-    kdocsStatus.value = status || {}
-    updateCachedKdocsStatus(kdocsStatus.value)
+    kdocsStatus.value = status
     const loggedIn = status?.logged_in === true || status?.last_login_ok === true
     if (loggedIn) {
       ElMessage.success('扫码成功,已登录')
@@ -341,12 +347,6 @@ async function onClearKdocsLogin() {
     await clearKdocsLogin()
     kdocsQrOpen.value = false
     kdocsQrImage.value = ''
-    kdocsStatus.value = updateCachedKdocsStatus({
-      ...(kdocsStatus.value || {}),
-      logged_in: false,
-      last_login_ok: false,
-      login_required: true,
-    })
     ElMessage.success('登录态已清除')
     setKdocsHint('登录态已清除')
     await refreshKdocsStatus()
@@ -369,6 +369,47 @@ onBeforeUnmount(() => {
   stopKdocsPolling()
 })
+async function onTestProxy() {
+  if (!proxyApiUrl.value.trim()) {
+    ElMessage.error('请先输入代理API地址')
+    return
+  }
+  try {
+    const res = await testProxy({ api_url: proxyApiUrl.value.trim() })
+    await ElMessageBox.alert(res?.message || '测试完成', '代理测试', { confirmButtonText: '知道了' })
+  } catch {
+    // handled by interceptor
+  }
+}
+async function saveAutoApprove() {
+  const hourly = Number(autoApproveHourlyLimit.value)
+  const vipDays = Number(autoApproveVipDays.value)
+  if (!Number.isFinite(hourly) || hourly < 1) {
+    ElMessage.error('每小时注册限制必须大于0')
+    return
+  }
+  if (!Number.isFinite(vipDays) || vipDays < 0) {
+    ElMessage.error('VIP天数不能为负数')
+    return
+  }
+  const payload = {
+    auto_approve_enabled: autoApproveEnabled.value ? 1 : 0,
+    auto_approve_hourly_limit: hourly,
+    auto_approve_vip_days: vipDays,
+  }
+  try {
+    const res = await updateSystemConfig(payload)
+    ElMessage.success(res?.message || '注册设置已保存')
+  } catch {
+    // handled by interceptor
+  }
+}
 onMounted(loadAll)
 </script>
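The schedule settings in this file round-trip `schedule_weekdays` as a comma-separated string: `loadAll` splits it into an array for the checkbox group, and the display joins the day numbers back into Chinese weekday names with `、`. A standalone sketch of that round-trip (mirroring the diff's lines; `parseWeekdays`/`displayWeekdays` are hypothetical helper names, not functions from the codebase):

```javascript
// Day-number -> label map, as defined in the component above.
const weekdayNames = { 1: '周一', 2: '周二', 3: '周三', 4: '周四', 5: '周五', 6: '周六', 7: '周日' }

// Parse the stored comma-separated string; an empty/missing value falls
// back to all seven days, exactly as loadAll does.
function parseWeekdays(raw) {
  const days = String(raw || '1,2,3,4,5,6,7')
    .split(',')
    .map((x) => x.trim())
    .filter(Boolean)
  return days.length ? days : ['1', '2', '3', '4', '5', '6', '7']
}

// Join for display, as scheduleWeekdayDisplay does.
function displayWeekdays(days) {
  return days.map((d) => weekdayNames[Number(d)] || d).join('、')
}
```

Saving simply reverses the parse with `days.join(',')`, which is what `saveSchedule` puts into the `schedule_weekdays` payload field.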
@@ -376,107 +417,124 @@ onMounted(loadAll)
   <div class="page-stack" v-loading="loading">
     <div class="app-page-title">
       <h2>系统配置</h2>
-    </div>
-    <div class="config-grid">
-      <el-card shadow="never" :body-style="{ padding: '16px' }" class="card section-card">
-        <h3 class="section-title">并发配置</h3>
-        <div class="section-sub app-muted">控制任务与截图的并发资源上限</div>
-        <el-form label-width="122px">
-          <el-form-item label="全局最大并发数">
-            <el-input-number v-model="maxConcurrentGlobal" :min="1" :max="200" />
-            <div class="help">同时最多运行账号数,浏览任务 API 执行,资源占用较低</div>
-          </el-form-item>
-          <el-form-item label="单账号最大并发数">
-            <el-input-number v-model="maxConcurrentPerAccount" :min="1" :max="50" />
-            <div class="help">建议保持为 1,避免同账号任务抢占</div>
-          </el-form-item>
-          <el-form-item label="截图最大并发数">
-            <el-input-number v-model="maxScreenshotConcurrent" :min="1" :max="50" />
-            <div class="help">截图资源占用较低,可按机器性能逐步提高</div>
-          </el-form-item>
-          <el-form-item label="慢 SQL 阈值(ms)">
-            <el-input-number v-model="dbSlowQueryMs" :min="0" :max="60000" />
-            <div class="help">低于该阈值不会计入慢 SQL,0 表示关闭慢 SQL 采样</div>
-          </el-form-item>
-        </el-form>
-        <div class="row-actions">
-          <el-button type="primary" @click="saveConcurrency">保存并发配置</el-button>
-        </div>
-      </el-card>
-      <el-card shadow="never" :body-style="{ padding: '16px' }" class="card section-card">
-        <h3 class="section-title">代理设置</h3>
-        <div class="section-sub app-muted">用于任务出网代理与连接有效期管理</div>
-        <el-form label-width="122px">
-          <el-form-item label="启用 IP 代理">
-            <el-switch v-model="proxyEnabled" />
-            <div class="help">开启后浏览任务通过代理访问,失败自动重试</div>
-          </el-form-item>
-          <el-form-item label="代理 API 地址">
-            <el-input v-model="proxyApiUrl" placeholder="http://api.xxx/Tools/IP.ashx?..." />
-            <div class="help">API 应返回 `IP:PORT`,如 123.45.67.89:8888</div>
-          </el-form-item>
-          <el-form-item label="有效期(分钟)">
-            <el-input-number v-model="proxyExpireMinutes" :min="1" :max="60" />
-          </el-form-item>
-        </el-form>
-        <div class="row-actions">
-          <el-button type="primary" @click="saveProxy">保存代理配置</el-button>
-          <el-button @click="onTestProxy">测试代理</el-button>
-        </div>
-      </el-card>
-      <el-card shadow="never" :body-style="{ padding: '16px' }" class="card section-card">
-        <h3 class="section-title">注册设置</h3>
-        <div class="section-sub app-muted">控制注册节流与新用户赠送 VIP</div>
-        <el-form label-width="122px">
-          <el-form-item label="注册赠送 VIP">
-            <el-switch v-model="autoApproveEnabled" />
-            <div class="help">开启后新用户注册成功自动赠送下方设定的 VIP 天数</div>
-          </el-form-item>
-          <el-form-item label="每小时注册限制">
-            <el-input-number v-model="autoApproveHourlyLimit" :min="1" :max="10000" />
-          </el-form-item>
-          <el-form-item label="赠送 VIP 天数">
-            <el-input-number v-model="autoApproveVipDays" :min="0" :max="999999" />
-          </el-form-item>
-        </el-form>
-        <div class="row-actions">
-          <el-button type="primary" @click="saveAutoApprove">保存注册设置</el-button>
-        </div>
-      </el-card>
-    </div>
-    <el-card shadow="never" :body-style="{ padding: '16px' }" class="card kdocs-card">
-      <div class="section-head">
-        <h3 class="section-title">金山文档上传</h3>
-        <div class="status-inline app-muted">
-          <span>登录状态:</span>
-          <span class="status-chip" :class="kdocsStatusClass">
-            {{ kdocsStatusText }}
-            <span v-if="kdocsDetecting" class="status-dots" aria-hidden="true">
-              <i></i><i></i><i></i>
-            </span>
-          </span>
-          <span>· 待上传 {{ kdocsStatus.queue_size || 0 }}</span>
-        </div>
-      </div>
-      <el-form label-width="118px" class="kdocs-form">
+      <div>
+        <el-button @click="loadAll">刷新</el-button>
+      </div>
+    </div>
+    <el-card shadow="never" :body-style="{ padding: '16px' }" class="card">
+      <h3 class="section-title">系统并发配置</h3>
+      <el-form label-width="130px">
+        <el-form-item label="全局最大并发数">
+          <el-input-number v-model="maxConcurrentGlobal" :min="1" :max="200" />
+          <div class="help">同时最多运行的账号数量,浏览任务使用 API 方式,资源占用较低</div>
+        </el-form-item>
+        <el-form-item label="单账号最大并发数">
+          <el-input-number v-model="maxConcurrentPerAccount" :min="1" :max="50" />
+          <div class="help">单个账号同时最多运行的任务数量,建议设为 1</div>
+        </el-form-item>
+        <el-form-item label="截图最大并发数">
+          <el-input-number v-model="maxScreenshotConcurrent" :min="1" :max="50" />
+          <div class="help">同时进行截图的最大数量,wkhtmltoimage 资源占用较低,可按需提高</div>
+        </el-form-item>
+      </el-form>
+      <el-button type="primary" @click="saveConcurrency">保存并发配置</el-button>
+    </el-card>
+    <el-card shadow="never" :body-style="{ padding: '16px' }" class="card">
+      <h3 class="section-title">定时任务配置</h3>
+      <el-form label-width="130px">
+        <el-form-item label="启用定时任务">
+          <el-switch v-model="scheduleEnabled" />
+          <div class="help">开启后,系统会按计划自动执行浏览任务</div>
+        </el-form-item>
+        <el-form-item v-if="scheduleEnabled" label="执行时间">
+          <el-time-picker v-model="scheduleTime" value-format="HH:mm" format="HH:mm" />
+        </el-form-item>
+        <el-form-item v-if="scheduleEnabled" label="浏览类型">
+          <el-select v-model="scheduleBrowseType" style="width: 220px">
+            <el-option label="注册前未读" value="注册前未读" />
+            <el-option label="应读" value="应读" />
+          </el-select>
+        </el-form-item>
+        <el-form-item v-if="scheduleEnabled" label="执行日期">
+          <el-checkbox-group v-model="scheduleWeekdays">
+            <el-checkbox v-for="w in weekdaysOptions" :key="w.value" :label="w.value">
+              {{ w.label }}
+            </el-checkbox>
+          </el-checkbox-group>
+        </el-form-item>
+        <el-form-item v-if="scheduleEnabled" label="定时任务截图">
+          <el-switch v-model="scheduleScreenshotEnabled" />
+          <div class="help">开启后,定时任务执行时会生成截图</div>
+        </el-form-item>
+      </el-form>
+      <div class="row-actions">
+        <el-button type="primary" @click="saveSchedule">保存定时任务配置</el-button>
+        <el-button type="success" plain @click="runScheduleNow">立即执行</el-button>
+      </div>
+    </el-card>
+    <el-card shadow="never" :body-style="{ padding: '16px' }" class="card">
+      <h3 class="section-title">代理设置</h3>
+      <el-form label-width="130px">
+        <el-form-item label="启用IP代理">
+          <el-switch v-model="proxyEnabled" />
+          <div class="help">开启后,所有浏览任务将通过代理IP访问,失败自动重试3次</div>
+        </el-form-item>
+        <el-form-item label="代理API地址">
+          <el-input v-model="proxyApiUrl" placeholder="http://api.xxx/Tools/IP.ashx?..." />
+          <div class="help">API 应返回:IP:PORT,例如 123.45.67.89:8888</div>
+        </el-form-item>
+        <el-form-item label="代理有效期(分钟)">
+          <el-input-number v-model="proxyExpireMinutes" :min="1" :max="60" />
+        </el-form-item>
+      </el-form>
+      <div class="row-actions">
+        <el-button type="primary" @click="saveProxy">保存代理配置</el-button>
+        <el-button @click="onTestProxy">测试代理</el-button>
+      </div>
+    </el-card>
+    <el-card shadow="never" :body-style="{ padding: '16px' }" class="card">
+      <h3 class="section-title">注册设置</h3>
+      <el-form label-width="130px">
+        <el-form-item label="注册赠送VIP">
+          <el-switch v-model="autoApproveEnabled" />
+          <div class="help">开启后,新用户注册成功后将赠送下方设置的VIP天数(注册已默认无需审核)</div>
+        </el-form-item>
+        <el-form-item label="每小时注册限制">
+          <el-input-number v-model="autoApproveHourlyLimit" :min="1" :max="10000" />
+        </el-form-item>
+        <el-form-item label="注册赠送VIP天数">
+          <el-input-number v-model="autoApproveVipDays" :min="0" :max="999999" />
+        </el-form-item>
+      </el-form>
+      <el-button type="primary" @click="saveAutoApprove">保存注册设置</el-button>
+    </el-card>
+    <el-card shadow="never" :body-style="{ padding: '16px' }" class="card">
+      <h3 class="section-title">金山文档上传</h3>
+      <el-form label-width="130px">
         <el-form-item label="启用上传">
           <el-switch v-model="kdocsEnabled" />
           <div class="help">表格结构变化时,可先关闭避免错误上传</div>
@@ -490,29 +548,21 @@ onMounted(loadAll)
           <el-input v-model="kdocsDefaultUnit" placeholder="如:道县(用户可覆盖)" />
         </el-form-item>
-        <el-form-item label="Sheet 名称">
-          <el-input v-model="kdocsSheetName" placeholder="留空使用第一个 Sheet" />
+        <el-form-item label="Sheet名称">
+          <el-input v-model="kdocsSheetName" placeholder="留空使用第一个Sheet" />
         </el-form-item>
-        <el-form-item label="Sheet 序号">
+        <el-form-item label="Sheet序号">
           <el-input-number v-model="kdocsSheetIndex" :min="0" :max="50" />
-          <div class="help">0 表示第一个 Sheet</div>
+          <div class="help">0 表示第一个Sheet</div>
         </el-form-item>
-        <el-form-item label="列配置">
-          <div class="kdocs-inline">
-            <el-input v-model="kdocsUnitColumn" placeholder="县区列,如 A" />
-            <el-input v-model="kdocsImageColumn" placeholder="图片列,如 D" />
-          </div>
+        <el-form-item label="县区列">
+          <el-input v-model="kdocsUnitColumn" placeholder="A" style="max-width: 120px" />
         </el-form-item>
-        <el-form-item label="有效行范围">
-          <div class="kdocs-range">
-            <el-input-number v-model="kdocsRowStart" :min="0" :max="10000" placeholder="起始行" style="width: 140px" />
-            <span class="app-muted">至</span>
-            <el-input-number v-model="kdocsRowEnd" :min="0" :max="10000" placeholder="结束行" style="width: 140px" />
-          </div>
-          <div class="help">用于限制上传区间,如 50-1000,0 表示不限制</div>
+        <el-form-item label="图片列">
+          <el-input v-model="kdocsImageColumn" placeholder="D" style="max-width: 120px" />
         </el-form-item>
         <el-form-item label="管理员通知">
@@ -526,6 +576,13 @@ onMounted(loadAll)
<div class="row-actions">
<el-button type="primary" @click="saveKdocsConfig">保存表格配置</el-button>
-<el-button
-:loading="kdocsStatusLoading"
-:disabled="kdocsActionBusy && !kdocsStatusLoading"
-@click="refreshKdocsStatus"
->
-刷新状态
-</el-button>
<el-button
type="success"
plain
@@ -546,7 +603,14 @@ onMounted(loadAll)
</el-button>
</div>
-<div v-if="kdocsStatus.last_error" class="help">最近错误{{ kdocsStatus.last_error }}</div>
+<div class="help">
+登录状态
+<span v-if="kdocsStatus.last_login_ok === true">已登录</span>
+<span v-else-if="kdocsStatus.login_required">需要扫码</span>
+<span v-else>未知</span>
+· 待上传 {{ kdocsStatus.queue_size || 0 }}
+<span v-if="kdocsStatus.last_error">· 最近错误{{ kdocsStatus.last_error }}</span>
+</div>
<div v-if="kdocsActionHint" class="help">操作提示{{ kdocsActionHint }}</div>
</el-card>
@@ -563,150 +627,18 @@ onMounted(loadAll)
.page-stack {
display: flex;
flex-direction: column;
-gap: 14px;
-min-width: 0;
-}
-.config-grid {
-display: grid;
-grid-template-columns: repeat(3, minmax(0, 1fr));
-gap: 14px;
+gap: 12px;
}
.card {
border-radius: var(--app-radius);
border: 1px solid var(--app-border);
-background: var(--app-card-bg);
-box-shadow: var(--app-shadow-soft);
-}
-.section-card {
-min-width: 0;
}
.section-title {
-margin: 0;
-font-size: 15px;
+margin: 0 0 12px;
+font-size: 14px;
font-weight: 800;
letter-spacing: 0.2px;
}
.section-sub {
margin-top: 6px;
margin-bottom: 10px;
font-size: 12px;
}
.section-head {
display: flex;
align-items: flex-start;
justify-content: space-between;
gap: 12px;
flex-wrap: wrap;
margin-bottom: 10px;
}
.status-inline {
font-size: 12px;
display: inline-flex;
align-items: center;
gap: 6px;
}
.status-chip {
display: inline-flex;
align-items: center;
min-height: 22px;
padding: 0 8px;
border-radius: 999px;
font-size: 12px;
font-weight: 700;
border: 1px solid transparent;
}
.status-chip.is-checking {
color: #1d4ed8;
background: #dbeafe;
border-color: #93c5fd;
}
.status-chip.is-online {
color: #065f46;
background: #d1fae5;
border-color: #6ee7b7;
}
.status-chip.is-offline {
color: #92400e;
background: #fef3c7;
border-color: #fcd34d;
}
.status-chip.is-error {
color: #991b1b;
background: #fee2e2;
border-color: #fca5a5;
}
.status-chip.is-unknown {
color: #374151;
background: #f3f4f6;
border-color: #d1d5db;
}
.status-dots {
display: inline-flex;
align-items: center;
gap: 3px;
margin-left: 3px;
}
.status-dots i {
width: 4px;
height: 4px;
border-radius: 50%;
background: currentColor;
opacity: 0.25;
animation: dotPulse 1.2s infinite ease-in-out;
}
.status-dots i:nth-child(2) {
animation-delay: 0.2s;
}
.status-dots i:nth-child(3) {
animation-delay: 0.4s;
}
@keyframes dotPulse {
0%,
80%,
100% {
opacity: 0.25;
transform: translateY(0);
}
40% {
opacity: 1;
transform: translateY(-1px);
}
}
.kdocs-form {
margin-top: 6px;
}
.kdocs-inline {
display: grid;
grid-template-columns: repeat(2, minmax(0, 1fr));
gap: 10px;
width: 100%;
}
.kdocs-range {
display: flex;
align-items: center;
gap: 8px;
flex-wrap: wrap;
} }
.kdocs-qr {
@@ -736,24 +668,4 @@ onMounted(loadAll)
flex-wrap: wrap;
gap: 10px;
}
@media (max-width: 1200px) {
.config-grid {
grid-template-columns: repeat(2, minmax(0, 1fr));
}
}
@media (max-width: 768px) {
.config-grid {
grid-template-columns: 1fr;
}
.kdocs-inline {
grid-template-columns: 1fr;
}
.kdocs-range {
align-items: stretch;
}
}
</style>

View File

@@ -72,7 +72,7 @@ async function onEnableUser(row) {
await approveUser(row.id)
ElMessage.success('用户已启用')
await loadUsers()
-await refreshStats?.({ force: true })
+await refreshStats?.()
} catch {
// handled by interceptor
}
@@ -93,7 +93,7 @@ async function onDisableUser(row) {
await rejectUser(row.id)
ElMessage.success('用户已禁用')
await loadUsers()
-await refreshStats?.({ force: true })
+await refreshStats?.()
} catch {
// handled by interceptor
}
@@ -114,7 +114,7 @@ async function onDelete(row) {
await deleteUser(row.id)
ElMessage.success('用户已删除')
await loadUsers()
-await refreshStats?.({ force: true })
+await refreshStats?.()
} catch {
// handled by interceptor
}
@@ -136,7 +136,7 @@ async function onSetVip(row, days) {
const res = await setUserVip(row.id, days)
ElMessage.success(res?.message || 'VIP设置成功')
await loadUsers()
-await refreshStats?.({ force: true })
+await refreshStats?.()
} catch {
// handled by interceptor
}
@@ -157,7 +157,7 @@ async function onRemoveVip(row) {
const res = await removeUserVip(row.id)
ElMessage.success(res?.message || 'VIP已移除')
await loadUsers()
-await refreshStats?.({ force: true })
+await refreshStats?.()
} catch {
// handled by interceptor
}
@@ -210,6 +210,9 @@ onMounted(refreshAll)
<div class="page-stack">
<div class="app-page-title">
<h2>用户</h2>
+<div>
+<el-button @click="refreshAll">刷新</el-button>
+</div>
</div>
<el-card shadow="never" :body-style="{ padding: '16px' }" class="card">
@@ -282,22 +285,18 @@ onMounted(refreshAll)
.page-stack {
display: flex;
flex-direction: column;
-gap: 14px;
-min-width: 0;
+gap: 12px;
}
.card {
border-radius: var(--app-radius);
border: 1px solid var(--app-border);
-background: var(--app-card-bg);
-box-shadow: var(--app-shadow-soft);
}
.section-title {
margin: 0 0 12px;
-font-size: 15px;
+font-size: 14px;
font-weight: 800;
-letter-spacing: 0.2px;
}
.help {
@@ -307,9 +306,6 @@ onMounted(refreshAll)
.table-wrap {
overflow-x: auto;
-border-radius: 10px;
-border: 1px solid var(--app-border);
-background: #fff;
}
.user-block {

View File

@@ -1,14 +1,10 @@
:root {
---app-bg: #f4f6fb;
+--app-bg: #f6f7fb;
--app-text: #111827;
--app-muted: #6b7280;
---app-border: rgba(15, 23, 42, 0.1);
---app-border-strong: rgba(15, 23, 42, 0.14);
+--app-border: rgba(17, 24, 39, 0.08);
--app-radius: 12px;
---app-radius-lg: 14px;
---app-shadow-soft: 0 8px 24px rgba(15, 23, 42, 0.05);
---app-shadow: 0 12px 30px rgba(15, 23, 42, 0.08);
---app-card-bg: rgba(255, 255, 255, 0.94);
+--app-shadow: 0 8px 24px rgba(17, 24, 39, 0.06);
font-family: ui-sans-serif, system-ui, -apple-system, BlinkMacSystemFont, 'Segoe UI',
'PingFang SC', 'Hiragino Sans GB', 'Microsoft YaHei', Arial, sans-serif;
@@ -24,17 +20,10 @@ body,
height: 100%;
}
-* {
-box-sizing: border-box;
-}
body {
margin: 0;
+background: var(--app-bg);
color: var(--app-text);
-background:
-radial-gradient(1200px 500px at -10% -10%, rgba(59, 130, 246, 0.12), transparent 55%),
-radial-gradient(1000px 420px at 110% 0%, rgba(139, 92, 246, 0.1), transparent 50%),
-var(--app-bg);
}
a {
@@ -47,13 +36,13 @@ a {
align-items: center;
justify-content: space-between;
gap: 12px;
-margin: 0 0 14px;
+margin: 0 0 12px;
}
.app-page-title h2 {
margin: 0;
-font-size: 19px;
-font-weight: 800;
+font-size: 18px;
+font-weight: 700;
letter-spacing: 0.2px;
}
@@ -61,72 +50,12 @@ a {
color: var(--app-muted);
}
.page-stack {
display: flex;
flex-direction: column;
gap: 14px;
min-width: 0;
}
.el-card {
border-radius: var(--app-radius-lg);
border: 1px solid var(--app-border);
background: var(--app-card-bg);
box-shadow: var(--app-shadow-soft);
}
.el-button {
border-radius: 10px;
font-weight: 600;
}
.el-input__wrapper,
.el-textarea__inner,
.el-select__wrapper,
.el-input-number,
.el-picker__wrapper {
border-radius: 10px;
}
.el-table {
border-radius: 10px;
overflow: hidden;
}
.el-table th.el-table__cell {
background: #f8fafc;
color: #334155;
font-weight: 700;
}
.el-table td.el-table__cell,
.el-table th.el-table__cell {
padding-top: 11px;
padding-bottom: 11px;
}
.el-table .el-table__row:hover > td.el-table__cell {
background: #f8fbff;
}
.el-tag {
border-radius: 999px;
}
.el-dialog {
border-radius: var(--app-radius-lg);
}
@media (max-width: 768px) {
.app-page-title {
flex-wrap: wrap;
align-items: flex-start;
}
.app-page-title h2 {
font-size: 17px;
}
.el-dialog {
max-width: 92vw;
}
@@ -149,121 +78,3 @@ a {
width: 100%;
}
}
.section-head {
display: flex;
align-items: center;
justify-content: space-between;
gap: 12px;
flex-wrap: wrap;
}
.section-title {
margin: 0;
font-size: 15px;
font-weight: 800;
letter-spacing: 0.2px;
}
.toolbar {
display: flex;
align-items: center;
gap: 10px;
flex-wrap: wrap;
}
.table-wrap {
overflow-x: auto;
border-radius: 10px;
border: 1px solid var(--app-border);
background: #fff;
}
.pagination {
display: flex;
align-items: center;
justify-content: space-between;
gap: 10px;
margin-top: 14px;
flex-wrap: wrap;
}
.page-hint {
font-size: 12px;
}
.el-tabs__item {
font-weight: 700;
}
.el-form-item {
margin-bottom: 18px;
}
@media (max-width: 768px) {
.pagination {
justify-content: flex-start;
}
}
@media (max-width: 900px) {
.toolbar {
width: 100%;
}
.toolbar > * {
min-width: 0;
}
}
@media (max-width: 768px) {
.app-page-title > div {
width: 100%;
}
.app-page-title .toolbar {
width: 100%;
}
.toolbar > * {
flex: 1 1 calc(50% - 6px);
}
.toolbar .el-button,
.toolbar .el-select,
.toolbar .el-input,
.toolbar .el-input-number {
width: 100% !important;
}
.section-head {
align-items: flex-start;
}
.section-head > * {
width: 100%;
}
.table-wrap {
-webkit-overflow-scrolling: touch;
}
.table-wrap .el-table {
min-width: 700px;
}
.el-pagination {
width: 100%;
justify-content: flex-start;
}
}
@media (max-width: 520px) {
.toolbar > * {
flex-basis: 100%;
}
.table-wrap .el-table {
min-width: 620px;
}
}

View File

@@ -1,121 +0,0 @@
import { fetchKdocsStatus } from '../api/kdocs'
const CACHE_KEY = 'admin:kdocs:status:v1'
const DEFAULT_MAX_AGE_MS = 5 * 60 * 1000
let memoryStatus = null
let memoryUpdatedAt = 0
let inflightPromise = null
function nowTs() {
return Date.now()
}
function normalizeStatus(raw) {
if (!raw || typeof raw !== 'object') return {}
return raw
}
function readSessionCache() {
try {
const raw = window.sessionStorage.getItem(CACHE_KEY)
if (!raw) return null
const parsed = JSON.parse(raw)
if (!parsed || typeof parsed !== 'object') return null
const updatedAt = Number(parsed.updated_at || 0)
const status = normalizeStatus(parsed.status)
if (!updatedAt) return null
return { status, updatedAt }
} catch {
return null
}
}
function writeSessionCache(status, updatedAt) {
try {
window.sessionStorage.setItem(
CACHE_KEY,
JSON.stringify({
status: normalizeStatus(status),
updated_at: Number(updatedAt || nowTs()),
}),
)
} catch {
// ignore
}
}
function hydrateFromSessionIfNeeded() {
if (memoryStatus !== null) return
const cached = readSessionCache()
if (!cached) return
memoryStatus = cached.status
memoryUpdatedAt = cached.updatedAt
}
function commitStatus(status) {
memoryStatus = normalizeStatus(status)
memoryUpdatedAt = nowTs()
writeSessionCache(memoryStatus, memoryUpdatedAt)
return memoryStatus
}
function isFresh(maxAgeMs) {
if (memoryStatus === null || !memoryUpdatedAt) return false
const ageLimit = Number(maxAgeMs)
if (!Number.isFinite(ageLimit) || ageLimit < 0) return true
return nowTs() - memoryUpdatedAt <= ageLimit
}
export function getCachedKdocsStatus(options = {}) {
hydrateFromSessionIfNeeded()
const maxAgeMs = options.maxAgeMs ?? DEFAULT_MAX_AGE_MS
if (!isFresh(maxAgeMs)) return null
return normalizeStatus(memoryStatus)
}
export function updateCachedKdocsStatus(status) {
return commitStatus(status)
}
export function clearCachedKdocsStatus() {
memoryStatus = null
memoryUpdatedAt = 0
inflightPromise = null
try {
window.sessionStorage.removeItem(CACHE_KEY)
} catch {
// ignore
}
}
export async function preloadKdocsStatus(options = {}) {
const {
force = false,
maxAgeMs = DEFAULT_MAX_AGE_MS,
silent = true,
live = 0,
} = options
if (!force) {
const cached = getCachedKdocsStatus({ maxAgeMs })
if (cached) return cached
}
if (inflightPromise) return inflightPromise
const params = live ? { live: 1 } : {}
const requestConfig = {
__silent: Boolean(silent),
__no_retry: true,
timeout: 8000,
}
inflightPromise = fetchKdocsStatus(params, requestConfig)
.then((status) => commitStatus(status || {}))
.finally(() => {
inflightPromise = null
})
return inflightPromise
}

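The deleted module above layers an in-memory memo over a sessionStorage mirror and dedupes concurrent fetches through a shared in-flight promise. The core policy, a freshness check plus single-flight refresh, can be sketched in Python for reference (the class and names here are illustrative, not part of the project):

```python
import time
import threading


class FreshnessCache:
    """Memoize one value with a max-age check and single-flight refresh."""

    def __init__(self, fetch, max_age_s=300):
        self._fetch = fetch            # callable that produces a fresh value
        self._max_age_s = max_age_s
        self._value = None
        self._updated_at = 0.0
        self._lock = threading.Lock()  # plays the role of the in-flight promise

    def _is_fresh(self):
        return self._value is not None and (time.time() - self._updated_at) <= self._max_age_s

    def get(self, force=False):
        if not force and self._is_fresh():
            return self._value
        # single-flight: concurrent callers serialize on one refresh
        with self._lock:
            if not force and self._is_fresh():
                return self._value
            self._value = self._fetch()
            self._updated_at = time.time()
            return self._value


calls = []
cache = FreshnessCache(lambda: calls.append(1) or {"queue_size": len(calls)})
a = cache.get()
b = cache.get()  # served from the memo, fetch not called again
assert a is b and len(calls) == 1
```

Unlike the JS version there is no persistent mirror here; restoring `_value`/`_updated_at` from storage on startup would correspond to `hydrateFromSessionIfNeeded`.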
View File

@@ -1,130 +0,0 @@
function ensurePublicKeyOptions(options) {
if (!options || typeof options !== 'object') {
throw new Error('Passkey参数无效')
}
return options.publicKey && typeof options.publicKey === 'object' ? options.publicKey : options
}
function base64UrlToUint8Array(base64url) {
const value = String(base64url || '')
const padding = '='.repeat((4 - (value.length % 4)) % 4)
const base64 = (value + padding).replace(/-/g, '+').replace(/_/g, '/')
const raw = window.atob(base64)
const bytes = new Uint8Array(raw.length)
for (let i = 0; i < raw.length; i += 1) {
bytes[i] = raw.charCodeAt(i)
}
return bytes
}
function uint8ArrayToBase64Url(input) {
const bytes = input instanceof ArrayBuffer ? new Uint8Array(input) : new Uint8Array(input || [])
let binary = ''
for (let i = 0; i < bytes.length; i += 1) {
binary += String.fromCharCode(bytes[i])
}
return window
.btoa(binary)
.replace(/\+/g, '-')
.replace(/\//g, '_')
.replace(/=+$/g, '')
}
function toCreationOptions(rawOptions) {
const options = ensurePublicKeyOptions(rawOptions)
const normalized = {
...options,
challenge: base64UrlToUint8Array(options.challenge),
user: {
...options.user,
id: base64UrlToUint8Array(options.user?.id),
},
}
if (Array.isArray(options.excludeCredentials)) {
normalized.excludeCredentials = options.excludeCredentials.map((item) => ({
...item,
id: base64UrlToUint8Array(item.id),
}))
}
return normalized
}
function serializeCredential(credential) {
if (!credential) return null
const response = credential.response || {}
const output = {
id: credential.id,
rawId: uint8ArrayToBase64Url(credential.rawId),
type: credential.type,
authenticatorAttachment: credential.authenticatorAttachment || undefined,
response: {},
}
if (response.clientDataJSON) {
output.response.clientDataJSON = uint8ArrayToBase64Url(response.clientDataJSON)
}
if (response.attestationObject) {
output.response.attestationObject = uint8ArrayToBase64Url(response.attestationObject)
}
if (response.authenticatorData) {
output.response.authenticatorData = uint8ArrayToBase64Url(response.authenticatorData)
}
if (response.signature) {
output.response.signature = uint8ArrayToBase64Url(response.signature)
}
if (response.userHandle) {
output.response.userHandle = uint8ArrayToBase64Url(response.userHandle)
} else {
output.response.userHandle = null
}
if (typeof response.getTransports === 'function') {
output.response.transports = response.getTransports() || []
}
return output
}
export function isPasskeyAvailable() {
return typeof window !== 'undefined' && window.isSecureContext && !!window.PublicKeyCredential && !!navigator.credentials
}
function isMiuiBrowser() {
const ua = String(window?.navigator?.userAgent || '')
return /MiuiBrowser|XiaoMi\/MiuiBrowser/i.test(ua)
}
export function getPasskeyClientErrorMessage(error, actionLabel = 'Passkey操作') {
const name = String(error?.name || '').trim()
const message = String(error?.message || '').trim()
if (name === 'NotAllowedError') {
return `${actionLabel}未完成(可能已取消、超时或设备未响应)`
}
if (name === 'NotReadableError') {
if (/credential manager/i.test(message) && isMiuiBrowser()) {
return '当前小米浏览器与系统凭据管理器兼容性较差,请改用系统 Chrome 或 Edge 后重试。'
}
if (/credential manager/i.test(message)) {
return '系统凭据管理器返回异常,请确认已设置系统锁屏并改用系统 Chrome/Edge 后重试。'
}
return message || `${actionLabel}失败(设备读取异常)`
}
if (name === 'SecurityError') {
return '当前环境安全策略不满足 Passkey 要求,请确认使用 HTTPS 且证书有效。'
}
return message || `${actionLabel}失败`
}
export async function createPasskey(rawOptions) {
const publicKey = toCreationOptions(rawOptions)
const credential = await navigator.credentials.create({ publicKey })
return serializeCredential(credential)
}

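The deleted helpers convert between WebAuthn's ArrayBuffers and unpadded base64url strings. The same padding and alphabet handling can be mirrored in Python for reference (a sketch, not project code):

```python
import base64


def base64url_to_bytes(value: str) -> bytes:
    """Restore the stripped '=' padding, then decode the URL-safe alphabet."""
    padding = "=" * ((4 - len(value) % 4) % 4)
    return base64.urlsafe_b64decode(value + padding)


def bytes_to_base64url(data: bytes) -> str:
    """Encode with '-'/'_' and drop trailing padding, as WebAuthn expects."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")


challenge = bytes(range(7))
encoded = bytes_to_base64url(challenge)
assert base64url_to_bytes(encoded) == challenge  # lossless round-trip
```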
View File

@@ -8,25 +8,5 @@ export default defineConfig({
outDir: '../static/admin',
emptyOutDir: true,
manifest: true,
cssCodeSplit: true,
chunkSizeWarningLimit: 800,
rollupOptions: {
output: {
manualChunks(id) {
if (!id.includes('node_modules')) return undefined
if (id.includes('/node_modules/vue/') || id.includes('/node_modules/@vue/') || id.includes('/node_modules/vue-router/')) {
return 'vendor-vue'
}
if (id.includes('/node_modules/element-plus/') || id.includes('/node_modules/@element-plus/')) {
return 'vendor-element'
}
if (id.includes('/node_modules/axios/')) {
return 'vendor-axios'
}
return 'vendor-misc'
},
},
},
},
})

View File

@@ -15,78 +15,14 @@ import weakref
from typing import Optional, Callable
from dataclasses import dataclass
from urllib.parse import urlsplit
import threading
from app_config import get_config
import time as _time_module

_MODULE_START_TIME = _time_module.time()
_WARMUP_PERIOD_SECONDS = 60  # 启动后 60 秒内使用更长超时
_WARMUP_TIMEOUT_SECONDS = 15.0  # 预热期间的超时时间
# HTML解析缓存类
class HTMLParseCache:
    """HTML解析结果缓存"""

    def __init__(self, ttl: int = 300, maxsize: int = 1000):
        self.cache = {}
        self.ttl = ttl
        self.maxsize = maxsize
        self._access_times = {}
        self._lock = threading.RLock()

    def _make_key(self, url: str, content_hash: str) -> str:
        return f"{url}:{content_hash}"

    def get(self, key: str) -> Optional[tuple]:
        """获取缓存,如果存在且未过期"""
        with self._lock:
            if key in self.cache:
                value, timestamp = self.cache[key]
                if time.time() - timestamp < self.ttl:
                    self._access_times[key] = time.time()
                    return value
                else:
                    # 过期删除
                    del self.cache[key]
                    del self._access_times[key]
            return None

    def set(self, key: str, value: tuple):
        """设置缓存"""
        with self._lock:
            # 如果缓存已满,删除最久未访问的项
            if len(self.cache) >= self.maxsize:
                if self._access_times:
                    # 使用简单的LRU策略删除最久未访问的项
                    oldest_key = None
                    oldest_time = float("inf")
                    for key, access_time in self._access_times.items():
                        if access_time < oldest_time:
                            oldest_time = access_time
                            oldest_key = key
                    if oldest_key:
                        del self.cache[oldest_key]
                        del self._access_times[oldest_key]
            self.cache[key] = (value, time.time())
            self._access_times[key] = time.time()

    def clear(self):
        """清空缓存"""
        with self._lock:
            self.cache.clear()
            self._access_times.clear()

    def get_lru_key(self) -> Optional[str]:
        """获取最久未访问的键"""
        if not self._access_times:
            return None
        return min(self._access_times.keys(), key=lambda k: self._access_times[k])
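One reviewer note on the removed `set()` above: the eviction scan reuses the name `key`, shadowing the parameter, so after an eviction the new value is stored under the last scanned key instead of the caller's. A reduced reproduction of the shadowing (simplified, not the project's code):

```python
def put(cache: dict, key, value):
    """Mirrors the shape of the deleted set(): the eviction scan rebinds `key`."""
    if len(cache) >= 2:  # pretend the cache is full
        oldest = None
        for key, _ in cache.items():  # rebinds `key`, as in HTMLParseCache.set
            oldest = key
        cache.pop(oldest)
    cache[key] = value  # uses the last scanned key, not the argument


c = {"a": 1, "b": 2}
put(c, "c", 3)
assert "c" not in c  # the new entry landed under "b"
```

Renaming the loop variable (e.g. to `cache_key`) avoids the collision.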
config = get_config()
BASE_URL = getattr(config, "ZSGL_BASE_URL", "https://postoa.aidunsoft.com")
@@ -95,9 +31,7 @@ INDEX_URL_PATTERN = getattr(config, "ZSGL_INDEX_URL_PATTERN", "index.aspx")
COOKIES_DIR = getattr(config, "COOKIES_DIR", "data/cookies")
try:
-    _API_REQUEST_TIMEOUT_SECONDS = float(
-        os.environ.get("API_REQUEST_TIMEOUT_SECONDS") or os.environ.get("API_REQUEST_TIMEOUT") or "5"
-    )
+    _API_REQUEST_TIMEOUT_SECONDS = float(os.environ.get("API_REQUEST_TIMEOUT_SECONDS") or os.environ.get("API_REQUEST_TIMEOUT") or "5")
except Exception:
    _API_REQUEST_TIMEOUT_SECONDS = 5.0
_API_REQUEST_TIMEOUT_SECONDS = max(3.0, _API_REQUEST_TIMEOUT_SECONDS)
@@ -117,11 +51,7 @@ def get_cookie_jar_path(username: str) -> str:
    """获取截图用的 cookies 文件路径Netscape Cookie 格式)"""
    import hashlib

-    os.makedirs(COOKIES_DIR, mode=0o700, exist_ok=True)
-    try:
-        os.chmod(COOKIES_DIR, 0o700)
-    except Exception:
-        pass
+    os.makedirs(COOKIES_DIR, exist_ok=True)
    filename = hashlib.sha256(username.encode()).hexdigest()[:32] + ".cookies.txt"
    return os.path.join(COOKIES_DIR, filename)
@@ -136,7 +66,6 @@ def is_cookie_jar_fresh(cookie_path: str, max_age_seconds: int = _COOKIE_JAR_MAX
    except Exception:
        return False

_api_browser_instances: "weakref.WeakSet[APIBrowser]" = weakref.WeakSet()
@@ -155,7 +84,6 @@ atexit.register(_cleanup_api_browser_instances)
@dataclass
class APIBrowseResult:
    """API 浏览结果"""

    success: bool
    total_items: int = 0
    total_attachments: int = 0
@@ -167,73 +95,34 @@ class APIBrowser:
    def __init__(self, log_callback: Optional[Callable] = None, proxy_config: Optional[dict] = None):
        self.session = requests.Session()
-        self.session.headers.update(
-            {
-                "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36",
-                "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8",
-                "Accept-Language": "zh-CN,zh;q=0.9,en;q=0.8",
-            }
-        )
+        self.session.headers.update({
+            'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36',
+            'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
+            'Accept-Language': 'zh-CN,zh;q=0.9,en;q=0.8',
+        })
        self.logged_in = False
        self.log_callback = log_callback
        self.stop_flag = False
        self._closed = False  # 防止重复关闭
        self.last_total_records = 0
-        # 初始化HTML解析缓存
-        self._parse_cache = HTMLParseCache(ttl=300, maxsize=500)  # 5分钟缓存最多500条记录
        # 设置代理
        if proxy_config and proxy_config.get("server"):
            proxy_server = proxy_config["server"]
-            self.session.proxies = {"http": proxy_server, "https": proxy_server}
+            self.session.proxies = {
+                "http": proxy_server,
+                "https": proxy_server
+            }
            self.proxy_server = proxy_server
        else:
            self.proxy_server = None
        _api_browser_instances.add(self)
    def _calculate_adaptive_delay(self, iteration: int, consecutive_failures: int) -> float:
        """
        智能延迟计算:文章处理延迟
        根据迭代次数和连续失败次数动态调整延迟
        """
        # 基础延迟,显著降低
        base_delay = 0.03
        # 如果有连续失败,增加延迟但有上限
        if consecutive_failures > 0:
            delay = base_delay * (1.5 ** min(consecutive_failures, 3))
            return min(delay, 0.2)  # 最多200ms
        # 根据处理进度调整延迟,开始时较慢,后来可以更快
        progress_factor = min(iteration / 100.0, 1.0)  # 100个文章后达到最大优化
        optimized_delay = base_delay * (1.2 - 0.4 * progress_factor)  # 从120%逐渐降低到80%
        return max(optimized_delay, 0.02)  # 最少20ms

    def _calculate_page_delay(self, current_page: int, new_articles_in_page: int) -> float:
        """
        智能延迟计算:页面处理延迟
        根据页面位置和新文章数量调整延迟
        """
        base_delay = 0.08  # 基础延迟降低50%
        # 如果当前页有大量新文章,可以稍微增加延迟
        if new_articles_in_page > 10:
            return base_delay * 1.2
        # 如果是新页面,降低延迟(内容可能需要加载)
        if current_page <= 3:
            return base_delay * 1.1
        # 后续页面可以更快
        return base_delay * 0.8
    def log(self, message: str):
        """记录日志"""
        if self.log_callback:
            self.log_callback(message)

    def save_cookies_for_screenshot(self, username: str):
        """保存 cookies 供 wkhtmltoimage 使用Netscape Cookie 格式)"""
        cookies_path = get_cookie_jar_path(username)
@@ -264,10 +153,6 @@ class APIBrowser:
            with open(cookies_path, "w", encoding="utf-8") as f:
                f.write("\n".join(lines) + "\n")
-            try:
-                os.chmod(cookies_path, 0o600)
-            except Exception:
-                pass
            self.log(f"[API] Cookies已保存供截图使用")
            return True
@@ -275,13 +160,15 @@ class APIBrowser:
            self.log(f"[API] 保存cookies失败: {e}")
            return False

    def _request_with_retry(self, method, url, max_retries=3, retry_delay=1, **kwargs):
        """带重试机制的请求方法"""
        # 启动后 60 秒内使用更长超时15秒之后使用配置的超时
        if (_time_module.time() - _MODULE_START_TIME) < _WARMUP_PERIOD_SECONDS:
-            kwargs.setdefault("timeout", _WARMUP_TIMEOUT_SECONDS)
+            kwargs.setdefault('timeout', _WARMUP_TIMEOUT_SECONDS)
        else:
-            kwargs.setdefault("timeout", _API_REQUEST_TIMEOUT_SECONDS)
+            kwargs.setdefault('timeout', _API_REQUEST_TIMEOUT_SECONDS)
        last_error = None
        timeout_value = kwargs.get("timeout")
        diag_enabled = _API_DIAGNOSTIC_LOG
@@ -290,7 +177,7 @@ class APIBrowser:
        for attempt in range(1, max_retries + 1):
            start_ts = _time_module.time()
            try:
-                if method.lower() == "get":
+                if method.lower() == 'get':
                    resp = self.session.get(url, **kwargs)
                else:
                    resp = self.session.post(url, **kwargs)
@@ -311,7 +198,6 @@ class APIBrowser:
                if attempt < max_retries:
                    self.log(f"[API] 请求超时,{retry_delay}秒后重试 ({attempt}/{max_retries})...")
                    import time

                    time.sleep(retry_delay)
                else:
                    self.log(f"[API] 请求失败,已重试{max_retries}次: {str(e)}")
@@ -321,10 +207,10 @@ class APIBrowser:
    def _get_aspnet_fields(self, soup):
        """获取 ASP.NET 隐藏字段"""
        fields = {}
-        for name in ["__VIEWSTATE", "__VIEWSTATEGENERATOR", "__EVENTVALIDATION"]:
-            field = soup.find("input", {"name": name})
+        for name in ['__VIEWSTATE', '__VIEWSTATEGENERATOR', '__EVENTVALIDATION']:
+            field = soup.find('input', {'name': name})
            if field:
-                fields[name] = field.get("value", "")
+                fields[name] = field.get('value', '')
        return fields

    def get_real_name(self) -> Optional[str]:
@@ -338,18 +224,18 @@ class APIBrowser:
        try:
            url = f"{BASE_URL}/admin/center.aspx"
-            resp = self._request_with_retry("get", url)
-            soup = BeautifulSoup(resp.text, "html.parser")
+            resp = self._request_with_retry('get', url)
+            soup = BeautifulSoup(resp.text, 'html.parser')
            # 查找包含"姓名:"的元素
            # 页面格式: <li><p>姓名:喻勇祥(19174616018) 人力资源编码: ...</p></li>
-            nlist = soup.find("div", {"class": "nlist-5"})
+            nlist = soup.find('div', {'class': 'nlist-5'})
            if nlist:
-                first_li = nlist.find("li")
+                first_li = nlist.find('li')
                if first_li:
                    text = first_li.get_text()
                    # 解析姓名:格式为 "姓名XXX(手机号)"
-                    match = re.search(r"姓名[:]\s*([^\(]+)", text)
+                    match = re.search(r'姓名[:]\s*([^\(]+)', text)
                    if match:
                        real_name = match.group(1).strip()
                        if real_name:
@@ -363,26 +249,26 @@ class APIBrowser:
        self.log(f"[API] 登录: {username}")
        try:
-            resp = self._request_with_retry("get", LOGIN_URL)
-            soup = BeautifulSoup(resp.text, "html.parser")
+            resp = self._request_with_retry('get', LOGIN_URL)
+            soup = BeautifulSoup(resp.text, 'html.parser')
            fields = self._get_aspnet_fields(soup)
            data = fields.copy()
-            data["txtUserName"] = username
-            data["txtPassword"] = password
-            data["btnSubmit"] = "登 录"
+            data['txtUserName'] = username
+            data['txtPassword'] = password
+            data['btnSubmit'] = '登 录'
            resp = self._request_with_retry(
-                "post",
+                'post',
                LOGIN_URL,
                data=data,
                headers={
-                    "Content-Type": "application/x-www-form-urlencoded",
-                    "Origin": BASE_URL,
-                    "Referer": LOGIN_URL,
+                    'Content-Type': 'application/x-www-form-urlencoded',
+                    'Origin': BASE_URL,
+                    'Referer': LOGIN_URL,
                },
-                allow_redirects=True,
+                allow_redirects=True
            )
            if INDEX_URL_PATTERN in resp.url:
@@ -390,9 +276,9 @@ class APIBrowser:
                self.log(f"[API] 登录成功")
                return True
            else:
-                soup = BeautifulSoup(resp.text, "html.parser")
-                error = soup.find(id="lblMsg")
-                error_msg = error.get_text().strip() if error else "未知错误"
+                soup = BeautifulSoup(resp.text, 'html.parser')
+                error = soup.find(id='lblMsg')
+                error_msg = error.get_text().strip() if error else '未知错误'
                self.log(f"[API] 登录失败: {error_msg}")
                return False
@@ -406,57 +292,55 @@ class APIBrowser:
return [], 0, None
if base_url and page > 1:
url = re.sub(r'page=\d+', f'page={page}', base_url)
elif page > 1:
# Fallback: if there is no next_url (rarely the page offers no "next page" link), build the page parameter directly
url = f"{BASE_URL}/admin/center.aspx?bz={bz}&page={page}"
else:
url = f"{BASE_URL}/admin/center.aspx?bz={bz}"
resp = self._request_with_retry('get', url)
soup = BeautifulSoup(resp.text, 'html.parser')
articles = []
ltable = soup.find('table', {'class': 'ltable'})
if ltable:
rows = ltable.find_all('tr')[1:]
for row in rows:
# Check for the "暂无记录" (no records) row
if '暂无记录' in row.get_text():
continue
link = row.find('a', href=True)
if link:
href = link.get('href', '')
title = link.get_text().strip()
match = re.search(r'id=(\d+)', href)
article_id = match.group(1) if match else None
articles.append({
'title': title,
'href': href,
'article_id': article_id,
})
# Get the total page count
total_pages = 1
next_page_url = None
total_records = 0
page_content = soup.find(id='PageContent')
if page_content:
text = page_content.get_text()
total_match = re.search(r'共(\d+)记录', text)
if total_match:
total_records = int(total_match.group(1))
total_pages = (total_records + 9) // 10
next_link = page_content.find('a', string=re.compile('下一页'))
if next_link:
next_href = next_link.get('href', '')
if next_href:
next_page_url = f"{BASE_URL}/admin/{next_href}"
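The page-count arithmetic in the hunk above is ceiling division with a fixed page size of 10. A minimal standalone sketch of the same formula (the function name is ours, not the project's):

```python
def page_count(total_records: int, page_size: int = 10) -> int:
    # Ceiling division without floats: (n + size - 1) // size rounds up,
    # matching the diff's (total_records + 9) // 10 for page_size=10
    return (total_records + page_size - 1) // page_size
```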
@@ -467,83 +351,43 @@ class APIBrowser:
return articles, total_pages, next_page_url
def get_article_attachments(self, article_href: str):
"""Fetch the article's attachment list and article info"""
if not article_href.startswith("http"):
url = f"{BASE_URL}/admin/{article_href}"
else:
url = article_href
# Check the cache first to avoid unnecessary requests
# Use the URL as the cache key (simplified version)
cache_key = f"attachments_{hash(url)}"
cached_result = self._parse_cache.get(cache_key)
if cached_result:
return cached_result
resp = self._request_with_retry("get", url)
soup = BeautifulSoup(resp.text, "html.parser")
attachments = []
article_info = {"channel_id": None, "article_id": None}
# Get channel_id and article_id from the saveread button
for elem in soup.find_all(["button", "input"]):
onclick = elem.get("onclick", "")
match = re.search(r"saveread\((\d+),(\d+)\)", onclick)
if match:
article_info["channel_id"] = match.group(1)
article_info["article_id"] = match.group(2)
break
attach_list = soup.find("div", {"class": "attach-list2"})
if attach_list:
items = attach_list.find_all("li")
for item in items:
download_links = item.find_all("a", onclick=re.compile(r"download2?\.ashx"))
for link in download_links:
onclick = link.get("onclick", "")
id_match = re.search(r"id=(\d+)", onclick)
channel_match = re.search(r"channel_id=(\d+)", onclick)
if id_match:
attach_id = id_match.group(1)
channel_id = channel_match.group(1) if channel_match else "1"
h3 = item.find("h3")
filename = h3.get_text().strip() if h3 else f"附件{attach_id}"
attachments.append({"id": attach_id, "channel_id": channel_id, "filename": filename})
break
result = (attachments, article_info)
# Store in cache
self._parse_cache.set(cache_key, result)
return result
def mark_article_read(self, channel_id: str, article_id: str) -> bool:
"""Mark an article as read via the saveread API"""
if not channel_id or not article_id:
return False
import random
saveread_url = (
f"{BASE_URL}/tools/submit_ajax.ashx?action=saveread&time={random.random()}&fl={channel_id}&id={article_id}"
)
try:
resp = self._request_with_retry("post", saveread_url)
# Check whether the response succeeded
if resp.status_code == 200:
try:
data = resp.json()
return data.get("status") == 1
except:
return True  # A non-JSON 200 response still counts as success
return False
except:
return False
def mark_read(self, attach_id: str, channel_id: str = "1") -> bool:
"""Mark an attachment as read via the preview channel"""
download_url = f"{BASE_URL}/tools/download2.ashx?site=main&id={attach_id}&channel_id={channel_id}"
def get_article_attachments(self, article_href: str):
"""Fetch the article's attachment list"""
if not article_href.startswith('http'):
url = f"{BASE_URL}/admin/{article_href}"
else:
url = article_href
resp = self._request_with_retry('get', url)
soup = BeautifulSoup(resp.text, 'html.parser')
attachments = []
attach_list = soup.find('div', {'class': 'attach-list2'})
if attach_list:
items = attach_list.find_all('li')
for item in items:
download_links = item.find_all('a', onclick=re.compile(r'download\.ashx'))
for link in download_links:
onclick = link.get('onclick', '')
id_match = re.search(r'id=(\d+)', onclick)
channel_match = re.search(r'channel_id=(\d+)', onclick)
if id_match:
attach_id = id_match.group(1)
channel_id = channel_match.group(1) if channel_match else '1'
h3 = item.find('h3')
filename = h3.get_text().strip() if h3 else f'附件{attach_id}'
attachments.append({
'id': attach_id,
'channel_id': channel_id,
'filename': filename
})
break
return attachments
def mark_read(self, attach_id: str, channel_id: str = '1') -> bool:
"""Mark as read by visiting the download link"""
download_url = f"{BASE_URL}/tools/download.ashx?site=main&id={attach_id}&channel_id={channel_id}"
try:
resp = self._request_with_retry("get", download_url, stream=True) resp = self._request_with_retry("get", download_url, stream=True)
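The removed `mark_article_read` calls a `saveread` endpoint whose URL shape is visible in the hunk above. A sketch of that call in isolation, assuming only that shape; `session` is any requests-style object with a `.post(url, timeout=...)` method, and the function name mirrors the removed method rather than any public API:

```python
import random

def mark_article_read(session, base_url: str, channel_id: str, article_id: str) -> bool:
    # Endpoint shape taken from the diff: submit_ajax.ashx?action=saveread&time=...&fl=...&id=...
    url = (
        f"{base_url}/tools/submit_ajax.ashx?action=saveread"
        f"&time={random.random()}&fl={channel_id}&id={article_id}"
    )
    try:
        resp = session.post(url, timeout=10)
        if resp.status_code != 200:
            return False
        try:
            # The site reports success as {"status": 1}
            return resp.json().get("status") == 1
        except ValueError:
            # A non-JSON 200 response is still treated as success
            return True
    except Exception:
        return False
```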
@@ -576,26 +420,28 @@ class APIBrowser:
return result
# Determine the bz parameter from the browse type
# Parameters after the site update: 0 = to-read, 1 = read (pre-registration unread is switched via page interaction)
# Actual page parameters: 0 = pre-registration unread, 2 = to-read (1 = read existed historically but is no longer used)
# Current front-end options: pre-registration unread, to-read (defaults to to-read)
browse_type_text = str(browse_type or "")
if '注册前' in browse_type_text:
bz = 0  # Pre-registration unread (currently the same as to-read; the site distinguishes them via page state)
bz = 0  # Pre-registration unread
else:
bz = 0  # To-read
bz = 2  # To-read
self.log(f"[API] 开始浏览 '{browse_type}' (bz={bz})...")
try:
total_items = 0
total_attachments = 0
page = 1
base_url = None
skipped_items = 0
consecutive_failures = 0
max_consecutive_failures = 3
# Fetch the first page to learn the total record count
# Fetch the first page
try:
articles, total_pages, _ = self.get_article_list_page(bz, 1)
articles, total_pages, next_url = self.get_article_list_page(bz, page)
consecutive_failures = 0
except Exception as e:
result.error_message = str(e)
@@ -607,9 +453,14 @@ class APIBrowser:
result.success = True
return result
total_records = int(getattr(self, "last_total_records", 0) or 0)
self.log(f"[API] 共 {total_records} 条记录,开始处理...")
self.log(f"[API] 共 {total_pages} 页,开始处理...")
if next_url:
base_url = next_url
elif total_pages > 1:
base_url = f"{BASE_URL}/admin/center.aspx?bz={bz}&page=2"
total_records = int(getattr(self, "last_total_records", 0) or 0)
last_report_ts = 0.0
def report_progress(force: bool = False):
@@ -627,37 +478,31 @@ class APIBrowser:
report_progress(force=True)
# Loop: walk every page, tracking processed articles to prevent duplicates
# Process all pages
max_iterations = total_records + 20  # guard against an infinite loop
while page <= total_pages:
iteration = 0
processed_hrefs = set()  # track processed articles so none are handled twice
current_page = 1
while articles and iteration < max_iterations:
iteration += 1
if should_stop_callback and should_stop_callback():
self.log("[API] 收到停止信号")
break
new_articles_in_page = 0  # number of articles newly processed in this iteration
# page == 1 was already fetched; later pages are fetched here
if page > 1:
try:
articles, _, next_url = self.get_article_list_page(bz, page, base_url)
consecutive_failures = 0
if next_url:
base_url = next_url
except Exception as e:
self.log(f"[API] 获取第{page}页列表失败,终止本次浏览: {str(e)}")
raise
for article in articles:
if should_stop_callback and should_stop_callback():
break
article_href = article["href"]
title = article['title'][:30]
# Skip articles that have already been processed
# Fetch attachments (article detail page)
if article_href in processed_hrefs:
continue
processed_hrefs.add(article_href)
new_articles_in_page += 1
title = article["title"][:30]
# Fetch attachments and article info (article detail page)
try:
attachments, article_info = self.get_article_attachments(article_href)
attachments = self.get_article_attachments(article['href'])
consecutive_failures = 0
except Exception as e:
skipped_items += 1
@@ -672,52 +517,21 @@ class APIBrowser:
total_items += 1
report_progress()
# Mark the article as read (via the saveread API)
article_marked = False
if article_info.get("channel_id") and article_info.get("article_id"):
article_marked = self.mark_article_read(article_info["channel_id"], article_info["article_id"])
# Handle attachments (if any)
if attachments:
for attach in attachments:
if self.mark_read(attach['id'], attach['channel_id']):
total_attachments += 1
self.log(f"[API] [{total_items}] {title} - {len(attachments)}个附件")
else:
# Article without attachments: only record the mark status
status = "已标记" if article_marked else "标记失败"
self.log(f"[API] [{total_items}] {title} - 无附件({status})")
# Adaptive delay: scale with consecutive failures and the article count
time.sleep(0.1)
time.sleep(self._calculate_adaptive_delay(total_items, consecutive_failures))
time.sleep(self._calculate_page_delay(current_page, new_articles_in_page))
page += 1
time.sleep(0.2)
# Decide which page to fetch next
if new_articles_in_page > 0:
# New articles were processed: re-fetch page 1, since read articles drop off the list and entries shift up
current_page = 1
else:
# No new articles on this page: try the next one
current_page += 1
if current_page > total_pages:
self.log(f"[API] 已遍历所有 {total_pages} 页,结束循环")
break
try:
articles, new_total_pages, _ = self.get_article_list_page(bz, current_page)
if new_total_pages > 0:
total_pages = new_total_pages
except Exception as e:
self.log(f"[API] 获取第{current_page}页列表失败: {str(e)}")
break
report_progress(force=True)
if skipped_items:
self.log(f"[API] 浏览完成: {total_items} 条内容,{total_attachments} 个附件(跳过 {skipped_items} 条内容)")
else:
self.log(f"[API] 浏览完成: {total_items} 条内容,{total_attachments} 个附件")
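The removed loop's core idea — re-read page 1 after any progress, because marked items vanish from the server-side list and later entries shift up — can be stated on its own. A sketch over hypothetical `fetch_page`/`process` callables (names are ours, not the project's):

```python
def crawl_shrinking_list(fetch_page, process, max_iterations: int):
    """Crawl a paginated list whose items disappear once processed.

    fetch_page(page) -> list of item ids; process(item) marks an item done,
    which removes it from the list the server returns on the next fetch.
    """
    processed = set()
    page = 1
    for _ in range(max_iterations):  # hard cap against infinite loops
        items = fetch_page(page)
        if not items:
            break
        new = [i for i in items if i not in processed]
        for item in new:
            process(item)
            processed.add(item)
        # After progress, restart from page 1 (entries shifted up);
        # with no progress, advance to the next page
        page = 1 if new else page + 1
    return processed
```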
@@ -774,7 +588,7 @@ def warmup_api_connection(proxy_config: Optional[dict] = None, log_callback: Opt
# Send a lightweight request to establish the connection
resp = session.get(f"{BASE_URL}/admin/login.aspx", timeout=10, allow_redirects=False)
log(f"[OK] API 连接预热完成 (status={resp.status_code})")
log(f" API 连接预热完成 (status={resp.status_code})")
session.close()
return True
except Exception as e:


@@ -1,13 +0,0 @@
<!doctype html>
<html lang="zh-CN">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0, maximum-scale=1.0" />
<title>知识管理平台</title>
</head>
<body>
<noscript>该页面需要启用 JavaScript 才能使用。</noscript>
<div id="app"></div>
<script type="module" src="/src/login-main.js"></script>
</body>
</html>


@@ -18,8 +18,6 @@
},
"devDependencies": {
"@vitejs/plugin-vue": "^6.0.1",
"unplugin-auto-import": "^21.0.0",
"unplugin-vue-components": "^31.0.0",
"vite": "^7.2.4"
}
},
@@ -554,55 +552,12 @@
"integrity": "sha512-aGTxbpbg8/b5JfU1HXSrbH3wXZuLPJcNEcZQFMxLs3oSzgtVu6nFPkbbGGUvBcUjKV2YyB9Wxxabo+HEH9tcRQ==",
"license": "MIT"
},
"node_modules/@jridgewell/gen-mapping": {
"version": "0.3.13",
"resolved": "https://registry.npmjs.org/@jridgewell/gen-mapping/-/gen-mapping-0.3.13.tgz",
"integrity": "sha512-2kkt/7niJ6MgEPxF0bYdQ6etZaA+fQvDcLKckhy1yIQOzaoKjBBjSj63/aLVjYE3qhRt5dvM+uUyfCg6UKCBbA==",
"dev": true,
"license": "MIT",
"dependencies": {
"@jridgewell/sourcemap-codec": "^1.5.0",
"@jridgewell/trace-mapping": "^0.3.24"
}
},
"node_modules/@jridgewell/remapping": {
"version": "2.3.5",
"resolved": "https://registry.npmjs.org/@jridgewell/remapping/-/remapping-2.3.5.tgz",
"integrity": "sha512-LI9u/+laYG4Ds1TDKSJW2YPrIlcVYOwi2fUC6xB43lueCjgxV4lffOCZCtYFiH6TNOX+tQKXx97T4IKHbhyHEQ==",
"dev": true,
"license": "MIT",
"dependencies": {
"@jridgewell/gen-mapping": "^0.3.5",
"@jridgewell/trace-mapping": "^0.3.24"
}
},
"node_modules/@jridgewell/resolve-uri": {
"version": "3.1.2",
"resolved": "https://registry.npmjs.org/@jridgewell/resolve-uri/-/resolve-uri-3.1.2.tgz",
"integrity": "sha512-bRISgCIjP20/tbWSPWMEi54QVPRZExkuD9lJL+UIxUKtwVJA8wW1Trb1jMs1RFXo1CBTNZ/5hpC9QvmKWdopKw==",
"dev": true,
"license": "MIT",
"engines": {
"node": ">=6.0.0"
}
},
"node_modules/@jridgewell/sourcemap-codec": {
"version": "1.5.5",
"resolved": "https://registry.npmjs.org/@jridgewell/sourcemap-codec/-/sourcemap-codec-1.5.5.tgz",
"integrity": "sha512-cYQ9310grqxueWbl+WuIUIaiUaDcj7WOq5fVhEljNVgRfOUhY9fy2zTvfoqWsnebh8Sl70VScFbICvJnLKB0Og==",
"license": "MIT"
},
"node_modules/@jridgewell/trace-mapping": {
"version": "0.3.31",
"resolved": "https://registry.npmjs.org/@jridgewell/trace-mapping/-/trace-mapping-0.3.31.tgz",
"integrity": "sha512-zzNR+SdQSDJzc8joaeP8QQoCQr8NuYx2dIIytl1QeBEZHJ9uW6hebsrYgbz8hJwUQao3TWCMtmfV8Nu1twOLAw==",
"dev": true,
"license": "MIT",
"dependencies": {
"@jridgewell/resolve-uri": "^3.1.0",
"@jridgewell/sourcemap-codec": "^1.4.14"
}
},
"node_modules/@popperjs/core": {
"name": "@sxzz/popperjs-es",
"version": "2.11.7",
@@ -1202,19 +1157,6 @@
}
}
},
"node_modules/acorn": {
"version": "8.15.0",
"resolved": "https://registry.npmjs.org/acorn/-/acorn-8.15.0.tgz",
"integrity": "sha512-NZyJarBfL7nWwIq+FDL6Zp/yHEhePMNnnJ0y3qfieCrmNvYct8uvtiV41UvlSe6apAfk0fY1FbWx+NwfmpvtTg==",
"dev": true,
"license": "MIT",
"bin": {
"acorn": "bin/acorn"
},
"engines": {
"node": ">=0.4.0"
}
},
"node_modules/async-validator": {
"version": "4.2.5",
"resolved": "https://registry.npmjs.org/async-validator/-/async-validator-4.2.5.tgz",
@@ -1260,22 +1202,6 @@
"node": ">= 0.4"
}
},
"node_modules/chokidar": {
"version": "5.0.0",
"resolved": "https://registry.npmjs.org/chokidar/-/chokidar-5.0.0.tgz",
"integrity": "sha512-TQMmc3w+5AxjpL8iIiwebF73dRDF4fBIieAqGn9RGCWaEVwQ6Fb2cGe31Yns0RRIzii5goJ1Y7xbMwo1TxMplw==",
"dev": true,
"license": "MIT",
"dependencies": {
"readdirp": "^5.0.0"
},
"engines": {
"node": ">= 20.19.0"
},
"funding": {
"url": "https://paulmillr.com/funding/"
}
},
"node_modules/combined-stream": {
"version": "1.0.8",
"resolved": "https://registry.npmjs.org/combined-stream/-/combined-stream-1.0.8.tgz",
@@ -1288,13 +1214,6 @@
"node": ">= 0.8"
}
},
"node_modules/confbox": {
"version": "0.2.4",
"resolved": "https://registry.npmjs.org/confbox/-/confbox-0.2.4.tgz",
"integrity": "sha512-ysOGlgTFbN2/Y6Cg3Iye8YKulHw+R2fNXHrgSmXISQdMnomY6eNDprVdW9R5xBguEqI954+S6709UyiO7B+6OQ==",
"dev": true,
"license": "MIT"
},
"node_modules/copy-anything": {
"version": "4.0.5",
"resolved": "https://registry.npmjs.org/copy-anything/-/copy-anything-4.0.5.tgz",
@@ -1508,32 +1427,12 @@
"@esbuild/win32-x64": "0.25.12"
}
},
"node_modules/escape-string-regexp": {
"version": "5.0.0",
"resolved": "https://registry.npmjs.org/escape-string-regexp/-/escape-string-regexp-5.0.0.tgz",
"integrity": "sha512-/veY75JbMK4j1yjvuUxuVsiS/hr/4iHs9FTT6cgTexxdE0Ly/glccBAkloH/DofkjRbZU3bnoj38mOmhkZ0lHw==",
"dev": true,
"license": "MIT",
"engines": {
"node": ">=12"
},
"funding": {
"url": "https://github.com/sponsors/sindresorhus"
}
},
"node_modules/estree-walker": {
"version": "2.0.2",
"resolved": "https://registry.npmjs.org/estree-walker/-/estree-walker-2.0.2.tgz",
"integrity": "sha512-Rfkk/Mp/DL7JVje3u18FxFujQlTNR2q6QfMSMB7AvCBx91NGj/ba3kCfza0f6dVDbw7YlRf/nDrn7pQrCCyQ/w==",
"license": "MIT"
},
"node_modules/exsolve": {
"version": "1.0.8",
"resolved": "https://registry.npmjs.org/exsolve/-/exsolve-1.0.8.tgz",
"integrity": "sha512-LmDxfWXwcTArk8fUEnOfSZpHOJ6zOMUJKOtFLFqJLoKJetuQG874Uc7/Kki7zFLzYybmZhp1M7+98pfMqeX8yA==",
"dev": true,
"license": "MIT"
},
"node_modules/fdir": {
"version": "6.5.0",
"resolved": "https://registry.npmjs.org/fdir/-/fdir-6.5.0.tgz",
@@ -1718,31 +1617,6 @@
"url": "https://github.com/sponsors/mesqueeb"
}
},
"node_modules/js-tokens": {
"version": "9.0.1",
"resolved": "https://registry.npmjs.org/js-tokens/-/js-tokens-9.0.1.tgz",
"integrity": "sha512-mxa9E9ITFOt0ban3j6L5MpjwegGz6lBQmM1IJkWeBZGcMxto50+eWdjC/52xDbS2vy0k7vIMK0Fe2wfL9OQSpQ==",
"dev": true,
"license": "MIT"
},
"node_modules/local-pkg": {
"version": "1.1.2",
"resolved": "https://registry.npmjs.org/local-pkg/-/local-pkg-1.1.2.tgz",
"integrity": "sha512-arhlxbFRmoQHl33a0Zkle/YWlmNwoyt6QNZEIJcqNbdrsix5Lvc4HyyI3EnwxTYlZYc32EbYrQ8SzEZ7dqgg9A==",
"dev": true,
"license": "MIT",
"dependencies": {
"mlly": "^1.7.4",
"pkg-types": "^2.3.0",
"quansync": "^0.2.11"
},
"engines": {
"node": ">=14"
},
"funding": {
"url": "https://github.com/sponsors/antfu"
}
},
"node_modules/lodash": {
"version": "4.17.21",
"resolved": "https://registry.npmjs.org/lodash/-/lodash-4.17.21.tgz",
@@ -1819,38 +1693,6 @@
"integrity": "sha512-vKivATfr97l2/QBCYAkXYDbrIWPM2IIKEl7YPhjCvKlG3kE2gm+uBo6nEXK3M5/Ffh/FLpKExzOQ3JJoJGFKBw==",
"license": "MIT"
},
"node_modules/mlly": {
"version": "1.8.0",
"resolved": "https://registry.npmjs.org/mlly/-/mlly-1.8.0.tgz",
"integrity": "sha512-l8D9ODSRWLe2KHJSifWGwBqpTZXIXTeo8mlKjY+E2HAakaTeNpqAyBZ8GSqLzHgw4XmHmC8whvpjJNMbFZN7/g==",
"dev": true,
"license": "MIT",
"dependencies": {
"acorn": "^8.15.0",
"pathe": "^2.0.3",
"pkg-types": "^1.3.1",
"ufo": "^1.6.1"
}
},
"node_modules/mlly/node_modules/confbox": {
"version": "0.1.8",
"resolved": "https://registry.npmjs.org/confbox/-/confbox-0.1.8.tgz",
"integrity": "sha512-RMtmw0iFkeR4YV+fUOSucriAQNb9g8zFR52MWCtl+cCZOFRNL6zeB395vPzFhEjjn4fMxXudmELnl/KF/WrK6w==",
"dev": true,
"license": "MIT"
},
"node_modules/mlly/node_modules/pkg-types": {
"version": "1.3.1",
"resolved": "https://registry.npmjs.org/pkg-types/-/pkg-types-1.3.1.tgz",
"integrity": "sha512-/Jm5M4RvtBFVkKWRu2BLUTNP8/M2a+UwuAX+ae4770q1qVGtfjG+WTCupoZixokjmHiry8uI+dlY8KXYV5HVVQ==",
"dev": true,
"license": "MIT",
"dependencies": {
"confbox": "^0.1.8",
"mlly": "^1.7.4",
"pathe": "^2.0.1"
}
},
"node_modules/ms": {
"version": "2.1.3",
"resolved": "https://registry.npmjs.org/ms/-/ms-2.1.3.tgz",
@@ -1881,24 +1723,6 @@
"integrity": "sha512-Wj7+EJQ8mSuXr2iWfnujrimU35R2W4FAErEyTmJoJ7ucwTn2hOUSsRehMb5RSYkxXGTM7Y9QpvPmp++w5ftoJw==",
"license": "BSD-3-Clause"
},
"node_modules/obug": {
"version": "2.1.1",
"resolved": "https://registry.npmjs.org/obug/-/obug-2.1.1.tgz",
"integrity": "sha512-uTqF9MuPraAQ+IsnPf366RG4cP9RtUi7MLO1N3KEc+wb0a6yKpeL0lmk2IB1jY5KHPAlTc6T/JRdC/YqxHNwkQ==",
"dev": true,
"funding": [
"https://github.com/sponsors/sxzz",
"https://opencollective.com/debug"
],
"license": "MIT"
},
"node_modules/pathe": {
"version": "2.0.3",
"resolved": "https://registry.npmjs.org/pathe/-/pathe-2.0.3.tgz",
"integrity": "sha512-WUjGcAqP1gQacoQe+OBJsFA7Ld4DyXuUIjZ5cc75cLHvJ7dtNsTugphxIADwspS+AraAUePCKrSVtPLFj/F88w==",
"dev": true,
"license": "MIT"
},
"node_modules/perfect-debounce": {
"version": "1.0.0",
"resolved": "https://registry.npmjs.org/perfect-debounce/-/perfect-debounce-1.0.0.tgz",
@@ -1917,6 +1741,7 @@
"integrity": "sha512-5gTmgEY/sqK6gFXLIsQNH19lWb4ebPDLA4SdLP7dsWkIXHWlG66oPuVvXSGFPppYZz8ZDZq0dYYrbHfBCVUb1Q==",
"dev": true,
"license": "MIT",
"peer": true,
"engines": {
"node": ">=12"
},
@@ -1945,18 +1770,6 @@
}
}
},
"node_modules/pkg-types": {
"version": "2.3.0",
"resolved": "https://registry.npmjs.org/pkg-types/-/pkg-types-2.3.0.tgz",
"integrity": "sha512-SIqCzDRg0s9npO5XQ3tNZioRY1uK06lA41ynBC1YmFTmnY6FjUjVt6s4LoADmwoig1qqD0oK8h1p/8mlMx8Oig==",
"dev": true,
"license": "MIT",
"dependencies": {
"confbox": "^0.2.2",
"exsolve": "^1.0.7",
"pathe": "^2.0.3"
}
},
"node_modules/postcss": {
"version": "8.5.6",
"resolved": "https://registry.npmjs.org/postcss/-/postcss-8.5.6.tgz",
@@ -1991,37 +1804,6 @@
"integrity": "sha512-D+zkORCbA9f1tdWRK0RaCR3GPv50cMxcrz4X8k5LTSUD1Dkw47mKJEZQNunItRTkWwgtaUSo1RVFRIG9ZXiFYg==",
"license": "MIT"
},
"node_modules/quansync": {
"version": "0.2.11",
"resolved": "https://registry.npmjs.org/quansync/-/quansync-0.2.11.tgz",
"integrity": "sha512-AifT7QEbW9Nri4tAwR5M/uzpBuqfZf+zwaEM/QkzEjj7NBuFD2rBuy0K3dE+8wltbezDV7JMA0WfnCPYRSYbXA==",
"dev": true,
"funding": [
{
"type": "individual",
"url": "https://github.com/sponsors/antfu"
},
{
"type": "individual",
"url": "https://github.com/sponsors/sxzz"
}
],
"license": "MIT"
},
"node_modules/readdirp": {
"version": "5.0.0",
"resolved": "https://registry.npmjs.org/readdirp/-/readdirp-5.0.0.tgz",
"integrity": "sha512-9u/XQ1pvrQtYyMpZe7DXKv2p5CNvyVwzUB6uhLAnQwHMSgKMBR62lc7AHljaeteeHXn11XTAaLLUVZYVZyuRBQ==",
"dev": true,
"license": "MIT",
"engines": {
"node": ">= 20.19.0"
},
"funding": {
"type": "individual",
"url": "https://paulmillr.com/funding/"
}
},
"node_modules/rfdc": {
"version": "1.4.1",
"resolved": "https://registry.npmjs.org/rfdc/-/rfdc-1.4.1.tgz",
@@ -2070,13 +1852,6 @@
"fsevents": "~2.3.2"
}
},
"node_modules/scule": {
"version": "1.3.0",
"resolved": "https://registry.npmjs.org/scule/-/scule-1.3.0.tgz",
"integrity": "sha512-6FtHJEvt+pVMIB9IBY+IcCJ6Z5f1iQnytgyfKMhDKgmzYG+TeH/wx1y3l27rshSbLiSanrR9ffZDrEsmjlQF2g==",
"dev": true,
"license": "MIT"
},
"node_modules/socket.io-client": {
"version": "4.8.1",
"resolved": "https://registry.npmjs.org/socket.io-client/-/socket.io-client-4.8.1.tgz",
@@ -2123,19 +1898,6 @@
"node": ">=0.10.0"
}
},
"node_modules/strip-literal": {
"version": "3.1.0",
"resolved": "https://registry.npmjs.org/strip-literal/-/strip-literal-3.1.0.tgz",
"integrity": "sha512-8r3mkIM/2+PpjHoOtiAW8Rg3jJLHaV7xPwG+YRGrv6FP0wwk/toTpATxWYOW0BKdWwl82VT2tFYi5DlROa0Mxg==",
"dev": true,
"license": "MIT",
"dependencies": {
"js-tokens": "^9.0.1"
},
"funding": {
"url": "https://github.com/sponsors/antfu"
}
},
"node_modules/superjson": {
"version": "2.2.6",
"resolved": "https://registry.npmjs.org/superjson/-/superjson-2.2.6.tgz",
@@ -2165,148 +1927,6 @@
"url": "https://github.com/sponsors/SuperchupuDev"
}
},
"node_modules/ufo": {
"version": "1.6.3",
"resolved": "https://registry.npmjs.org/ufo/-/ufo-1.6.3.tgz",
"integrity": "sha512-yDJTmhydvl5lJzBmy/hyOAA0d+aqCBuwl818haVdYCRrWV84o7YyeVm4QlVHStqNrrJSTb6jKuFAVqAFsr+K3Q==",
"dev": true,
"license": "MIT"
},
"node_modules/unimport": {
"version": "5.6.0",
"resolved": "https://registry.npmjs.org/unimport/-/unimport-5.6.0.tgz",
"integrity": "sha512-8rqAmtJV8o60x46kBAJKtHpJDJWkA2xcBqWKPI14MgUb05o1pnpnCnXSxedUXyeq7p8fR5g3pTo2BaswZ9lD9A==",
"dev": true,
"license": "MIT",
"dependencies": {
"acorn": "^8.15.0",
"escape-string-regexp": "^5.0.0",
"estree-walker": "^3.0.3",
"local-pkg": "^1.1.2",
"magic-string": "^0.30.21",
"mlly": "^1.8.0",
"pathe": "^2.0.3",
"picomatch": "^4.0.3",
"pkg-types": "^2.3.0",
"scule": "^1.3.0",
"strip-literal": "^3.1.0",
"tinyglobby": "^0.2.15",
"unplugin": "^2.3.11",
"unplugin-utils": "^0.3.1"
},
"engines": {
"node": ">=18.12.0"
}
},
"node_modules/unimport/node_modules/estree-walker": {
"version": "3.0.3",
"resolved": "https://registry.npmjs.org/estree-walker/-/estree-walker-3.0.3.tgz",
"integrity": "sha512-7RUKfXgSMMkzt6ZuXmqapOurLGPPfgj6l9uRZ7lRGolvk0y2yocc35LdcxKC5PQZdn2DMqioAQ2NoWcrTKmm6g==",
"dev": true,
"license": "MIT",
"dependencies": {
"@types/estree": "^1.0.0"
}
},
"node_modules/unplugin": {
"version": "2.3.11",
"resolved": "https://registry.npmjs.org/unplugin/-/unplugin-2.3.11.tgz",
"integrity": "sha512-5uKD0nqiYVzlmCRs01Fhs2BdkEgBS3SAVP6ndrBsuK42iC2+JHyxM05Rm9G8+5mkmRtzMZGY8Ct5+mliZxU/Ww==",
"dev": true,
"license": "MIT",
"dependencies": {
"@jridgewell/remapping": "^2.3.5",
"acorn": "^8.15.0",
"picomatch": "^4.0.3",
"webpack-virtual-modules": "^0.6.2"
},
"engines": {
"node": ">=18.12.0"
}
},
"node_modules/unplugin-auto-import": {
"version": "21.0.0",
"resolved": "https://registry.npmjs.org/unplugin-auto-import/-/unplugin-auto-import-21.0.0.tgz",
"integrity": "sha512-vWuC8SwqJmxZFYwPojhOhOXDb5xFhNNcEVb9K/RFkyk/3VnfaOjzitWN7v+8DEKpMjSsY2AEGXNgt6I0yQrhRQ==",
"dev": true,
"license": "MIT",
"dependencies": {
"local-pkg": "^1.1.2",
"magic-string": "^0.30.21",
"picomatch": "^4.0.3",
"unimport": "^5.6.0",
"unplugin": "^2.3.11",
"unplugin-utils": "^0.3.1"
},
"engines": {
"node": ">=20.19.0"
},
"funding": {
"url": "https://github.com/sponsors/antfu"
},
"peerDependencies": {
"@nuxt/kit": "^4.0.0",
"@vueuse/core": "*"
},
"peerDependenciesMeta": {
"@nuxt/kit": {
"optional": true
},
"@vueuse/core": {
"optional": true
}
}
},
"node_modules/unplugin-utils": {
"version": "0.3.1",
"resolved": "https://registry.npmjs.org/unplugin-utils/-/unplugin-utils-0.3.1.tgz",
"integrity": "sha512-5lWVjgi6vuHhJ526bI4nlCOmkCIF3nnfXkCMDeMJrtdvxTs6ZFCM8oNufGTsDbKv/tJ/xj8RpvXjRuPBZJuJog==",
"dev": true,
"license": "MIT",
"dependencies": {
"pathe": "^2.0.3",
"picomatch": "^4.0.3"
},
"engines": {
"node": ">=20.19.0"
},
"funding": {
"url": "https://github.com/sponsors/sxzz"
}
},
"node_modules/unplugin-vue-components": {
"version": "31.0.0",
"resolved": "https://registry.npmjs.org/unplugin-vue-components/-/unplugin-vue-components-31.0.0.tgz",
"integrity": "sha512-4ULwfTZTLuWJ7+S9P7TrcStYLsSRkk6vy2jt/WTfgUEUb0nW9//xxmrfhyHUEVpZ2UKRRwfRb8Yy15PDbVZf+Q==",
"dev": true,
"license": "MIT",
"dependencies": {
"chokidar": "^5.0.0",
"local-pkg": "^1.1.2",
"magic-string": "^0.30.21",
"mlly": "^1.8.0",
"obug": "^2.1.1",
"picomatch": "^4.0.3",
"tinyglobby": "^0.2.15",
"unplugin": "^2.3.11",
"unplugin-utils": "^0.3.1"
},
"engines": {
"node": ">=20.19.0"
},
"funding": {
"url": "https://github.com/sponsors/antfu"
},
"peerDependencies": {
"@nuxt/kit": "^3.2.2 || ^4.0.0",
"vue": "^3.0.0"
},
"peerDependenciesMeta": {
"@nuxt/kit": {
"optional": true
}
}
},
"node_modules/vite": {
"version": "7.2.7",
"resolved": "https://registry.npmjs.org/vite/-/vite-7.2.7.tgz",
@@ -2426,13 +2046,6 @@
"integrity": "sha512-sGhTPMuXqZ1rVOk32RylztWkfXTRhuS7vgAKv0zjqk8gbsHkJ7xfFf+jbySxt7tWObEJwyKaHMikV/WGDiQm8g==",
"license": "MIT"
},
"node_modules/webpack-virtual-modules": {
"version": "0.6.2",
"resolved": "https://registry.npmjs.org/webpack-virtual-modules/-/webpack-virtual-modules-0.6.2.tgz",
"integrity": "sha512-66/V2i5hQanC51vBQKPH4aI8NMAcBW59FVBs+rC7eGHupMyfn34q7rZIE+ETlJ+XTevqfUhVVBgSUNSW2flEUQ==",
"dev": true,
"license": "MIT"
},
"node_modules/ws": {
"version": "8.17.1",
"resolved": "https://registry.npmjs.org/ws/-/ws-8.17.1.tgz",


@@ -19,8 +19,7 @@
},
"devDependencies": {
"@vitejs/plugin-vue": "^6.0.1",
"unplugin-auto-import": "^21.0.0",
"unplugin-vue-components": "^31.0.0",
"vite": "^7.2.4"
}
}


@@ -15,16 +15,6 @@ export async function login(payload) {
return data
}
export async function passkeyLoginOptions(payload) {
const { data } = await publicApi.post('/passkeys/login/options', payload)
return data
}
export async function passkeyLoginVerify(payload) {
const { data } = await publicApi.post('/passkeys/login/verify', payload)
return data
}
export async function register(payload) {
const { data } = await publicApi.post('/register', payload)
return data


@@ -1,81 +1,15 @@
import axios from 'axios'
import { ElMessage } from 'element-plus'
let lastToastKey = ''
let lastToastAt = 0
const RETRYABLE_STATUS = new Set([408, 425, 429, 500, 502, 503, 504])
const MAX_RETRY_COUNT = 1
const RETRY_BASE_DELAY_MS = 300
const TOAST_STYLE_ID = 'zsglpt-lite-toast-style'
function ensureToastStyle() {
if (typeof document === 'undefined') return
if (document.getElementById(TOAST_STYLE_ID)) return
const style = document.createElement('style')
style.id = TOAST_STYLE_ID
style.textContent = `
.zsglpt-lite-toast-wrap {
position: fixed;
right: 16px;
top: 16px;
z-index: 9999;
display: flex;
flex-direction: column;
gap: 8px;
pointer-events: none;
}
.zsglpt-lite-toast {
max-width: min(88vw, 420px);
padding: 10px 12px;
border-radius: 10px;
color: #fff;
font-size: 13px;
font-weight: 600;
box-shadow: 0 8px 20px rgba(15, 23, 42, 0.24);
opacity: 0;
transform: translateY(-6px);
transition: all .18s ease;
}
.zsglpt-lite-toast.is-visible {
opacity: 1;
transform: translateY(0);
}
.zsglpt-lite-toast.is-error {
background: linear-gradient(135deg, #ef4444, #dc2626);
}
`
document.head.appendChild(style)
}
function ensureToastWrap() {
if (typeof document === 'undefined') return null
ensureToastStyle()
let wrap = document.querySelector('.zsglpt-lite-toast-wrap')
if (wrap) return wrap
wrap = document.createElement('div')
wrap.className = 'zsglpt-lite-toast-wrap'
document.body.appendChild(wrap)
return wrap
}
function showLiteToast(message) {
const wrap = ensureToastWrap()
if (!wrap) return
const node = document.createElement('div')
node.className = 'zsglpt-lite-toast is-error'
node.textContent = String(message || '请求失败')
wrap.appendChild(node)
requestAnimationFrame(() => node.classList.add('is-visible'))
window.setTimeout(() => node.classList.remove('is-visible'), 2300)
window.setTimeout(() => node.remove(), 2600)
}
function toastErrorOnce(key, message, minIntervalMs = 1500) { function toastErrorOnce(key, message, minIntervalMs = 1500) {
const now = Date.now() const now = Date.now()
if (key === lastToastKey && now - lastToastAt < minIntervalMs) return if (key === lastToastKey && now - lastToastAt < minIntervalMs) return
lastToastKey = key lastToastKey = key
lastToastAt = now lastToastAt = now
showLiteToast(message) ElMessage.error(message)
} }
function getCookie(name) { function getCookie(name) {
@@ -84,41 +18,6 @@ function getCookie(name) {
   return match ? decodeURIComponent(match[1]) : ''
 }
-function isIdempotentMethod(method) {
-  return ['GET', 'HEAD', 'OPTIONS'].includes(String(method || 'GET').toUpperCase())
-}
-function shouldRetryRequest(error) {
-  const config = error?.config
-  if (!config || config.__no_retry) return false
-  if (!isIdempotentMethod(config.method)) return false
-  const retried = Number(config.__retry_count || 0)
-  if (retried >= MAX_RETRY_COUNT) return false
-  const code = String(error?.code || '')
-  if (code === 'ECONNABORTED' || code === 'ERR_NETWORK') return true
-  const status = Number(error?.response?.status || 0)
-  return RETRYABLE_STATUS.has(status)
-}
-function delay(ms) {
-  return new Promise((resolve) => {
-    window.setTimeout(resolve, Math.max(0, Number(ms || 0)))
-  })
-}
-async function retryRequestOnce(error, client) {
-  const config = error?.config || {}
-  const retried = Number(config.__retry_count || 0)
-  config.__retry_count = retried + 1
-  const backoffMs = RETRY_BASE_DELAY_MS * (retried + 1)
-  await delay(backoffMs)
-  return client.request(config)
-}
 export const publicApi = axios.create({
   baseURL: '/api',
   timeout: 30_000,
@@ -140,21 +39,14 @@ publicApi.interceptors.request.use((config) => {
 publicApi.interceptors.response.use(
   (response) => response,
   (error) => {
-    if (shouldRetryRequest(error)) {
-      return retryRequestOnce(error, publicApi)
-    }
     const status = error?.response?.status
     const payload = error?.response?.data
     const message = payload?.error || payload?.message || error?.message || '请求失败'
     if (status === 401) {
+      toastErrorOnce('401', message || '登录已过期,请重新登录', 3000)
       const pathname = window.location?.pathname || ''
-      // 登录页面不弹通知,让 LoginPage.vue 自己处理错误显示
-      if (!pathname.startsWith('/login')) {
-        toastErrorOnce('401', message || '登录已过期,请重新登录', 3000)
-        window.location.href = '/login'
-      }
+      if (!pathname.startsWith('/login')) window.location.href = '/login'
     } else if (status === 403) {
       toastErrorOnce('403', message || '无权限', 5000)
     } else if (error?.code === 'ECONNABORTED') {
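The retry helpers deleted from this file implement a pattern worth keeping on record: retry only idempotent methods, at most once, on timeouts, network errors, and transient HTTP statuses. The decision function from the removed code can be exercised standalone (the `config`/`response` shapes mirror axios error objects):

```javascript
// Retry policy from the removed http.js helpers: a request is retried only
// if it is idempotent, has not exhausted its retry budget, and failed with
// a transient error (timeout, network drop, or a retryable HTTP status).
const RETRYABLE_STATUS = new Set([408, 425, 429, 500, 502, 503, 504])
const MAX_RETRY_COUNT = 1

function isIdempotentMethod(method) {
  return ['GET', 'HEAD', 'OPTIONS'].includes(String(method || 'GET').toUpperCase())
}

function shouldRetryRequest(error) {
  const config = error?.config
  if (!config || config.__no_retry) return false
  if (!isIdempotentMethod(config.method)) return false
  if (Number(config.__retry_count || 0) >= MAX_RETRY_COUNT) return false
  const code = String(error?.code || '')
  if (code === 'ECONNABORTED' || code === 'ERR_NETWORK') return true
  return RETRYABLE_STATUS.has(Number(error?.response?.status || 0))
}
```

Mutating `__retry_count` on the axios config (as `retryRequestOnce` did) is what makes the budget stick across attempts, since axios reuses the same config object when the request is re-issued.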


@@ -1,7 +1,7 @@
 import { publicApi } from './http'
-export async function fetchSchedules(params = {}) {
-  const { data } = await publicApi.get('/schedules', { params })
+export async function fetchSchedules() {
+  const { data } = await publicApi.get('/schedules')
   return data
 }
@@ -39,3 +39,4 @@ export async function clearScheduleLogs(scheduleId) {
   const { data } = await publicApi.delete(`/schedules/${scheduleId}/logs`)
   return data
 }


@@ -1,7 +1,7 @@
 import { publicApi } from './http'
-export async function fetchScreenshots(params = {}) {
-  const { data } = await publicApi.get('/screenshots', { params })
+export async function fetchScreenshots() {
+  const { data } = await publicApi.get('/screenshots')
   return data
 }
@@ -14,3 +14,4 @@ export async function clearScreenshots() {
   const { data } = await publicApi.post('/screenshots/clear', {})
   return data
 }


@@ -39,33 +39,3 @@ export async function updateKdocsSettings(payload) {
   const { data } = await publicApi.post('/user/kdocs', payload)
   return data
 }
-export async function fetchKdocsStatus() {
-  const { data } = await publicApi.get('/kdocs/status')
-  return data
-}
-export async function fetchUserPasskeys() {
-  const { data } = await publicApi.get('/user/passkeys')
-  return data
-}
-export async function createUserPasskeyOptions(payload) {
-  const { data } = await publicApi.post('/user/passkeys/register/options', payload)
-  return data
-}
-export async function createUserPasskeyVerify(payload) {
-  const { data } = await publicApi.post('/user/passkeys/register/verify', payload)
-  return data
-}
-export async function deleteUserPasskey(passkeyId) {
-  const { data } = await publicApi.delete(`/user/passkeys/${passkeyId}`)
-  return data
-}
-export async function reportUserPasskeyClientError(payload) {
-  const { data } = await publicApi.post('/user/passkeys/client-error', payload || {})
-  return data
-}


@@ -3,28 +3,20 @@ import { computed, onBeforeUnmount, onMounted, reactive, ref } from 'vue'
 import { useRoute, useRouter } from 'vue-router'
 import { ElMessage, ElMessageBox } from 'element-plus'
 import { Calendar, Camera, User } from '@element-plus/icons-vue'
-import 'element-plus/es/components/message/style/css'
-import 'element-plus/es/components/message-box/style/css'
 import { fetchActiveAnnouncement, dismissAnnouncement } from '../api/announcements'
 import { fetchMyFeedbacks, submitFeedback } from '../api/feedback'
 import {
   bindEmail,
   changePassword,
-  createUserPasskeyOptions,
-  createUserPasskeyVerify,
-  deleteUserPasskey,
   fetchEmailNotify,
-  fetchUserPasskeys,
   fetchUserEmail,
   fetchKdocsSettings,
-  reportUserPasskeyClientError,
   unbindEmail,
   updateKdocsSettings,
   updateEmailNotify,
 } from '../api/settings'
 import { useUserStore } from '../stores/user'
-import { createPasskey, getPasskeyClientErrorMessage, isPasskeyAvailable } from '../utils/passkey'
 import { validateStrongPassword } from '../utils/password'
 const route = useRoute()
@@ -124,13 +116,6 @@ const passwordForm = reactive({
 const kdocsLoading = ref(false)
 const kdocsSaving = ref(false)
 const kdocsUnitValue = ref('')
-const passkeyLoading = ref(false)
-const passkeyAddLoading = ref(false)
-const passkeyDeviceName = ref('')
-const passkeyItems = ref([])
-const passkeyRegisterOptions = ref(null)
-const passkeyRegisterOptionsAt = ref(0)
-const PASSKEY_OPTIONS_PREFETCH_MAX_AGE_MS = 240000
 function syncIsMobile() {
   isMobile.value = Boolean(mediaQuery?.matches)
@@ -167,26 +152,16 @@ async function go(path) {
 }
 async function logout() {
-  let confirmed = false
   try {
     await ElMessageBox.confirm('确定退出登录吗?', '退出登录', {
       confirmButtonText: '退出',
       cancelButtonText: '取消',
       type: 'warning',
     })
-    confirmed = true
-  } catch (error) {
-    const reason = String(error || '').toLowerCase()
-    if (reason === 'cancel' || reason === 'close') return
-    try {
-      confirmed = window.confirm('确定退出登录吗?')
-    } catch {
-      confirmed = false
-    }
+  } catch {
+    return
   }
-  if (!confirmed) return
   await userStore.logout()
   window.location.href = '/login'
 }
@@ -262,7 +237,7 @@ async function openSettings() {
 }
 async function loadSettings() {
-  await Promise.all([loadEmailInfo(), loadEmailNotify(), loadKdocsSettings(), loadPasskeys()])
+  await Promise.all([loadEmailInfo(), loadEmailNotify(), loadKdocsSettings()])
 }
 async function loadEmailInfo() {
@@ -317,113 +292,6 @@ async function saveKdocsSettings() {
   }
 }
-async function loadPasskeys() {
-  passkeyLoading.value = true
-  try {
-    const data = await fetchUserPasskeys()
-    passkeyItems.value = Array.isArray(data?.items) ? data.items : []
-    if (passkeyItems.value.length < 3) {
-      await prefetchPasskeyRegisterOptions()
-    } else {
-      passkeyRegisterOptions.value = null
-      passkeyRegisterOptionsAt.value = 0
-    }
-  } catch {
-    passkeyItems.value = []
-    passkeyRegisterOptions.value = null
-    passkeyRegisterOptionsAt.value = 0
-  } finally {
-    passkeyLoading.value = false
-  }
-}
-function getCachedPasskeyRegisterOptions() {
-  if (!passkeyRegisterOptions.value) return null
-  if (Date.now() - Number(passkeyRegisterOptionsAt.value || 0) > PASSKEY_OPTIONS_PREFETCH_MAX_AGE_MS) return null
-  return passkeyRegisterOptions.value
-}
-async function prefetchPasskeyRegisterOptions() {
-  try {
-    const res = await createUserPasskeyOptions({})
-    passkeyRegisterOptions.value = res
-    passkeyRegisterOptionsAt.value = Date.now()
-  } catch {
-    passkeyRegisterOptions.value = null
-    passkeyRegisterOptionsAt.value = 0
-  }
-}
-async function onAddPasskey() {
-  if (!isPasskeyAvailable()) {
-    ElMessage.error('当前浏览器或环境不支持Passkey需 HTTPS')
-    return
-  }
-  if (passkeyItems.value.length >= 3) {
-    ElMessage.error('最多可绑定3台设备')
-    return
-  }
-  passkeyAddLoading.value = true
-  try {
-    let optionsRes = getCachedPasskeyRegisterOptions()
-    if (!optionsRes) {
-      optionsRes = await createUserPasskeyOptions({})
-    }
-    const credential = await createPasskey(optionsRes?.publicKey || {})
-    await createUserPasskeyVerify({ credential, device_name: passkeyDeviceName.value.trim() })
-    passkeyRegisterOptions.value = null
-    passkeyRegisterOptionsAt.value = 0
-    passkeyDeviceName.value = ''
-    ElMessage.success('Passkey设备添加成功')
-    await loadPasskeys()
-  } catch (e) {
-    try {
-      await reportUserPasskeyClientError({
-        stage: 'register',
-        source: 'user-settings',
-        name: e?.name || '',
-        message: e?.message || '',
-        code: e?.code || '',
-        user_agent: navigator.userAgent || '',
-      })
-    } catch {
-      // ignore report failure
-    }
-    passkeyRegisterOptions.value = null
-    passkeyRegisterOptionsAt.value = 0
-    await prefetchPasskeyRegisterOptions()
-    const data = e?.response?.data
-    const message =
-      data?.error ||
-      getPasskeyClientErrorMessage(e, 'Passkey注册')
-    ElMessage.error(message)
-  } finally {
-    passkeyAddLoading.value = false
-  }
-}
-async function onDeletePasskey(item) {
-  try {
-    await ElMessageBox.confirm(`确定删除设备「${item?.device_name || '未命名设备'}」吗?`, '删除Passkey设备', {
-      confirmButtonText: '删除',
-      cancelButtonText: '取消',
-      type: 'warning',
-    })
-  } catch {
-    return
-  }
-  try {
-    await deleteUserPasskey(item.id)
-    ElMessage.success('设备已删除')
-    await loadPasskeys()
-  } catch (e) {
-    const data = e?.response?.data
-    ElMessage.error(data?.error || '删除失败')
-  }
-}
 async function onBindEmail() {
   const email = bindEmailValue.value.trim().toLowerCase()
   if (!email) {
@@ -797,47 +665,6 @@ async function dismissAnnouncementPermanently() {
     </div>
   </el-tab-pane>
-  <el-tab-pane label="Passkey设备" name="passkeys">
-    <div class="settings-section" v-loading="passkeyLoading">
-      <el-alert
-        type="info"
-        :closable="false"
-        title="最多可绑定3台设备用于无密码登录。"
-        show-icon
-        class="settings-alert"
-      />
-      <el-form inline>
-        <el-form-item label="设备备注">
-          <el-input
-            v-model="passkeyDeviceName"
-            placeholder="例如我的iPhone / 办公Mac"
-            maxlength="40"
-            show-word-limit
-          />
-        </el-form-item>
-        <el-form-item>
-          <el-button type="primary" :loading="passkeyAddLoading" @click="onAddPasskey">
-            添加Passkey设备
-          </el-button>
-        </el-form-item>
-      </el-form>
-      <el-empty v-if="passkeyItems.length === 0" description="暂无Passkey设备" />
-      <el-table v-else :data="passkeyItems" size="small" style="width: 100%">
-        <el-table-column prop="device_name" label="设备备注" min-width="160" />
-        <el-table-column prop="credential_id_preview" label="凭据ID" min-width="180" />
-        <el-table-column prop="last_used_at" label="最近使用" min-width="140" />
-        <el-table-column prop="created_at" label="创建时间" min-width="140" />
-        <el-table-column label="操作" width="100" fixed="right">
-          <template #default="{ row }">
-            <el-button type="danger" text @click="onDeletePasskey(row)">删除</el-button>
-          </template>
-        </el-table-column>
-      </el-table>
-    </div>
-  </el-tab-pane>
   <el-tab-pane label="表格上传" name="kdocs">
     <div v-loading="kdocsLoading" class="settings-section">
       <el-form label-position="top">
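The removed settings code prefetches WebAuthn registration options and treats the cached copy as stale after `PASSKEY_OPTIONS_PREFETCH_MAX_AGE_MS` (240 s), since server-issued challenges expire. The freshness rule reduces to a small pure function; the `cache` shape and the injected `now` parameter are illustrative assumptions (the page itself reads two refs and calls `Date.now()` directly):

```javascript
// Freshness check mirroring the removed getCachedPasskeyRegisterOptions():
// a prefetched options payload is usable only if it exists and is younger
// than the max age; otherwise the caller must request fresh options.
const PASSKEY_OPTIONS_PREFETCH_MAX_AGE_MS = 240000

function getCachedOptions(cache, now = Date.now()) {
  if (!cache || !cache.options) return null
  if (now - Number(cache.at || 0) > PASSKEY_OPTIONS_PREFETCH_MAX_AGE_MS) return null
  return cache.options
}
```

Prefetching hides the round trip from the "add device" click, while the TTL guard keeps the client from submitting a credential built against an expired challenge.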


@@ -1,6 +0,0 @@
-import { createApp } from 'vue'
-import LoginPage from './pages/LoginPage.vue'
-import './style.css'
-createApp(LoginPage).mount('#app')


@@ -5,6 +5,11 @@ import router from './router'
 import { createPinia } from 'pinia'
+import ElementPlus from 'element-plus'
+import zhCn from 'element-plus/es/locale/lang/zh-cn'
+import 'element-plus/dist/index.css'
 import './style.css'
-createApp(App).use(createPinia()).use(router).mount('#app')
+createApp(App).use(createPinia()).use(router).use(ElementPlus, { locale: zhCn }).mount('#app')


@@ -15,7 +15,7 @@ import {
   updateAccount,
   updateAccountRemark,
 } from '../api/accounts'
-import { fetchKdocsSettings, fetchKdocsStatus, updateKdocsSettings } from '../api/settings'
+import { fetchKdocsSettings, updateKdocsSettings } from '../api/settings'
 import { fetchRunStats } from '../api/stats'
 import { useSocket } from '../composables/useSocket'
 import { useUserStore } from '../stores/user'
@@ -61,14 +61,6 @@ watch(batchEnableScreenshot, (value) => {
 const kdocsAutoUpload = ref(false)
 const kdocsSettingsLoading = ref(false)
-// KDocs 在线状态
-const kdocsStatus = reactive({
-  enabled: false,
-  online: false,
-  message: '',
-})
-const kdocsStatusLoading = ref(false)
 const addOpen = ref(false)
 const editOpen = ref(false)
 const upgradeOpen = ref(false)
@@ -147,12 +139,10 @@ function toPercent(acc) {
 function statusTagType(status = '') {
   const text = String(status)
-  if (text.includes('已完成') || text.includes('完成')) return 'success' // 绿色
-  if (text.includes('失败') || text.includes('错误') || text.includes('异常') || text.includes('登录失败')) return 'danger' // 红色
-  if (text.includes('上传截图')) return 'danger' // 上传中:红色
-  if (text.includes('等待上传')) return 'warning' // 等待上传:黄色
-  if (text.includes('排队') || text.includes('运行') || text.includes('截图')) return 'warning' // 黄色
-  return 'info' // 灰色
+  if (text.includes('已完成') || text.includes('完成')) return 'success'
+  if (text.includes('失败') || text.includes('错误') || text.includes('异常') || text.includes('登录失败')) return 'danger'
+  if (text.includes('排队') || text.includes('运行') || text.includes('截图')) return 'warning'
+  return 'info'
 }
 function showRuntimeProgress(acc) {
@@ -215,22 +205,6 @@ async function loadKdocsSettings() {
   }
 }
-async function loadKdocsStatus() {
-  kdocsStatusLoading.value = true
-  try {
-    const data = await fetchKdocsStatus()
-    kdocsStatus.enabled = Boolean(data?.enabled)
-    kdocsStatus.online = Boolean(data?.online)
-    kdocsStatus.message = data?.message || ''
-  } catch {
-    kdocsStatus.enabled = false
-    kdocsStatus.online = false
-    kdocsStatus.message = ''
-  } finally {
-    kdocsStatusLoading.value = false
-  }
-}
 async function onToggleKdocsAutoUpload(value) {
   kdocsSettingsLoading.value = true
   try {
@@ -532,17 +506,6 @@ function bindSocket() {
 let unbindSocket = null
 let statsTimer = null
-let kdocsStatusTimer = null
-const STATS_POLL_ACTIVE_MS = 10_000
-const STATS_POLL_HIDDEN_MS = 30_000
-const KDOCS_STATUS_POLL_ACTIVE_MS = 60_000
-const KDOCS_STATUS_POLL_HIDDEN_MS = 180_000
-function isPageHidden() {
-  if (typeof document === 'undefined') return false
-  return document.visibilityState === 'hidden'
-}
 const shouldPollStats = computed(() => {
   // 仅在“真正执行中”才轮询(排队中不轮询,避免空转导致页面闪烁)
@@ -554,27 +517,15 @@ const shouldPollStats = computed(() => {
   })
 })
-function currentStatsPollDelay() {
-  return isPageHidden() ? STATS_POLL_HIDDEN_MS : STATS_POLL_ACTIVE_MS
-}
 function stopStatsPolling() {
   if (!statsTimer) return
-  window.clearTimeout(statsTimer)
+  window.clearInterval(statsTimer)
   statsTimer = null
 }
-function scheduleStatsPolling() {
-  if (statsTimer || !shouldPollStats.value) return
-  statsTimer = window.setTimeout(async () => {
-    statsTimer = null
-    await refreshStats({ silent: true }).catch(() => {})
-    scheduleStatsPolling()
-  }, currentStatsPollDelay())
-}
 function startStatsPolling() {
-  scheduleStatsPolling()
+  if (statsTimer) return
+  statsTimer = window.setInterval(() => refreshStats({ silent: true }), 10_000)
 }
 function syncStatsPolling(prevRunning = null) {
@@ -587,38 +538,10 @@ function syncStatsPolling(prevRunning = null) {
   else stopStatsPolling()
 }
-watch(shouldPollStats, (_running, prevRunning) => {
+watch(shouldPollStats, (running, prevRunning) => {
   syncStatsPolling(prevRunning)
 })
-function currentKdocsStatusPollDelay() {
-  return isPageHidden() ? KDOCS_STATUS_POLL_HIDDEN_MS : KDOCS_STATUS_POLL_ACTIVE_MS
-}
-function stopKdocsStatusPolling() {
-  if (!kdocsStatusTimer) return
-  window.clearTimeout(kdocsStatusTimer)
-  kdocsStatusTimer = null
-}
-function scheduleKdocsStatusPolling() {
-  if (kdocsStatusTimer) return
-  kdocsStatusTimer = window.setTimeout(async () => {
-    kdocsStatusTimer = null
-    await loadKdocsStatus().catch(() => {})
-    scheduleKdocsStatusPolling()
-  }, currentKdocsStatusPollDelay())
-}
-function restartTimedPollingOnVisibilityChange() {
-  if (shouldPollStats.value) {
-    stopStatsPolling()
-    startStatsPolling()
-  }
-  stopKdocsStatusPolling()
-  scheduleKdocsStatusPolling()
-}
 onMounted(async () => {
   if (!userStore.vipInfo) {
     userStore.refreshVipInfo().catch(() => {
@@ -630,19 +553,13 @@ onMounted(async () => {
   await refreshAccounts()
   await loadKdocsSettings()
-  await loadKdocsStatus()
   await refreshStats()
   syncStatsPolling()
-  scheduleKdocsStatusPolling()
-  window.addEventListener('visibilitychange', restartTimedPollingOnVisibilityChange)
 })
 onBeforeUnmount(() => {
   if (unbindSocket) unbindSocket()
   stopStatsPolling()
-  stopKdocsStatusPolling()
-  window.removeEventListener('visibilitychange', restartTimedPollingOnVisibilityChange)
 })
 </script>
@@ -733,9 +650,6 @@ onBeforeUnmount(() => {
     @change="onToggleKdocsAutoUpload"
   />
   <span class="app-muted">表格(测试)</span>
-  <el-tag v-if="kdocsStatus.enabled" :type="kdocsStatus.online ? 'success' : 'warning'" size="small" effect="plain">
-    {{ kdocsStatus.online ? ' 就绪' : ' 离线' }}
-  </el-tag>
 </div>
 <div class="toolbar-right">
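The polling code removed from AccountsPage above replaces `setInterval` with a self-rechaining `setTimeout` loop so the delay is re-evaluated on every tick: 10 s while the tab is visible, 30 s while hidden. A minimal sketch of that loop, with the timer function injected purely so it can be driven synchronously in a test (the page code uses `window.setTimeout` and `document.visibilityState`):

```javascript
// Delay selection from the removed helpers: slower polling in hidden tabs.
const STATS_POLL_ACTIVE_MS = 10_000
const STATS_POLL_HIDDEN_MS = 30_000

function currentStatsPollDelay(hidden) {
  return hidden ? STATS_POLL_HIDDEN_MS : STATS_POLL_ACTIVE_MS
}

// Rechaining loop: each tick clears the handle, runs the work, and
// schedules the next tick with a freshly computed delay. setInterval
// cannot do this, because it locks in one period at start time.
function scheduleStatsPolling(state, tick, setTimer) {
  if (state.timer || !state.shouldPoll) return
  state.timer = setTimer(() => {
    state.timer = null
    tick()
    scheduleStatsPolling(state, tick, setTimer)
  }, currentStatsPollDelay(state.hidden))
}
```

Guarding on `state.timer` keeps the loop single-flight, and storing the handle lets a `stop` function cancel the pending tick with `clearTimeout`.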

File diff suppressed because it is too large


@@ -19,9 +19,6 @@ const userStore = useUserStore()
 const loading = ref(false)
 const schedules = ref([])
-const schedulePage = ref(1)
-const scheduleTotal = ref(0)
-const schedulePageSize = 12
 const accountsLoading = ref(false)
 const accountOptions = ref([])
@@ -68,7 +65,6 @@ const weekdayOptions = [
 ]
 const canUseSchedule = computed(() => userStore.isVip)
-const scheduleTotalPages = computed(() => Math.max(1, Math.ceil((scheduleTotal.value || 0) / schedulePageSize)))
 function normalizeTime(value) {
   const match = String(value || '').match(/^(\d{1,2}):(\d{2})$/)
@@ -98,37 +94,17 @@ async function loadAccounts() {
   }
 }
-async function reloadSchedulesAfterMutate() {
-  if (schedulePage.value > 1 && schedules.value.length <= 1) {
-    schedulePage.value -= 1
-  }
-  await loadSchedules()
-}
-async function onSchedulePageChange(page) {
-  schedulePage.value = page
-  await loadSchedules()
-}
 async function loadSchedules() {
   loading.value = true
   try {
-    const params = {
-      limit: schedulePageSize,
-      offset: (schedulePage.value - 1) * schedulePageSize,
-    }
-    const payload = await fetchSchedules(params)
-    const rawItems = Array.isArray(payload) ? payload : (Array.isArray(payload?.items) ? payload.items : [])
-    const rawTotal = Array.isArray(payload) ? rawItems.length : Number(payload?.total ?? rawItems.length)
-    schedules.value = rawItems.map((s) => ({
+    const list = await fetchSchedules()
+    schedules.value = (Array.isArray(list) ? list : []).map((s) => ({
       ...s,
       browse_type: normalizeBrowseType(s?.browse_type),
     }))
-    scheduleTotal.value = Number.isFinite(rawTotal) ? Math.max(0, rawTotal) : rawItems.length
   } catch (e) {
     if (e?.response?.status === 401) window.location.href = '/login'
     schedules.value = []
-    scheduleTotal.value = 0
   } finally {
     loading.value = false
   }
@@ -196,7 +172,6 @@ async function saveSchedule() {
   } else {
     await createSchedule(payload)
     ElMessage.success('创建成功')
-    schedulePage.value = 1
   }
   editorOpen.value = false
   await loadSchedules()
@@ -223,7 +198,7 @@ async function onDelete(schedule) {
   const res = await deleteSchedule(schedule.id)
   if (res?.success) {
     ElMessage.success('已删除')
-    await reloadSchedulesAfterMutate()
+    await loadSchedules()
   } else {
     ElMessage.error(res?.error || '删除失败')
   }
@@ -400,17 +375,6 @@ onMounted(async () => {
     </div>
   </el-card>
 </div>
-<div v-if="scheduleTotal > schedulePageSize" class="pagination">
-  <el-pagination
-    v-model:current-page="schedulePage"
-    :page-size="schedulePageSize"
-    :total="scheduleTotal"
-    layout="prev, pager, next, jumper, ->, total"
-    @current-change="onSchedulePageChange"
-  />
-  <div class="page-hint app-muted"> {{ schedulePage }} / {{ scheduleTotalPages }} </div>
-</div>
 </template>
 </el-card>
@@ -629,19 +593,6 @@ onMounted(async () => {
   flex-wrap: wrap;
 }
-.pagination {
-  margin-top: 12px;
-  display: flex;
-  align-items: center;
-  justify-content: space-between;
-  gap: 10px;
-  flex-wrap: wrap;
-}
-.page-hint {
-  font-size: 12px;
-}
 .logs {
   display: flex;
   flex-direction: column;


@@ -1,15 +1,11 @@
 <script setup>
-import { computed, onMounted, ref } from 'vue'
+import { onMounted, ref } from 'vue'
 import { ElMessage, ElMessageBox } from 'element-plus'
 import { clearScreenshots, deleteScreenshot, fetchScreenshots } from '../api/screenshots'
 const loading = ref(false)
 const screenshots = ref([])
-const currentPage = ref(1)
-const total = ref(0)
-const pageSize = 24
-const totalPages = computed(() => Math.max(1, Math.ceil((total.value || 0) / pageSize)))
 const previewOpen = ref(false)
 const previewUrl = ref('')
@@ -19,50 +15,32 @@ function buildUrl(filename) {
   return `/screenshots/${encodeURIComponent(filename)}`
 }
-function buildThumbUrl(filename) {
-  return `/screenshots/thumb/${encodeURIComponent(filename)}`
-}
 async function load() {
   loading.value = true
   try {
-    const params = {
-      limit: pageSize,
-      offset: (currentPage.value - 1) * pageSize,
-    }
-    const payload = await fetchScreenshots(params)
-    const items = Array.isArray(payload) ? payload : (Array.isArray(payload?.items) ? payload.items : [])
-    const payloadTotal = Array.isArray(payload) ? items.length : Number(payload?.total ?? items.length)
-    screenshots.value = items
-    total.value = Number.isFinite(payloadTotal) ? Math.max(0, payloadTotal) : items.length
+    const data = await fetchScreenshots()
+    screenshots.value = Array.isArray(data) ? data : []
   } catch (e) {
     if (e?.response?.status === 401) window.location.href = '/login'
     screenshots.value = []
-    total.value = 0
   } finally {
     loading.value = false
   }
 }
-async function onPageChange(page) {
-  currentPage.value = page
-  await load()
-}
 function openPreview(item) {
   previewTitle.value = item.display_name || item.filename || '截图预览'
   previewUrl.value = buildUrl(item.filename)
   previewOpen.value = true
 }
-function onThumbError(event, item) {
-  const imageEl = event?.target
-  if (!imageEl) return
-  if (imageEl.dataset.fullLoaded === '1') return
-  imageEl.dataset.fullLoaded = '1'
-  imageEl.src = buildUrl(item.filename)
-}
+function findRenderedShotImage(filename) {
+  try {
+    const escaped = typeof CSS !== 'undefined' && typeof CSS.escape === 'function' ? CSS.escape(String(filename)) : String(filename)
+    return document.querySelector(`img[data-shot-filename="${escaped}"]`)
+  } catch {
+    return null
+  }
+}
 function canvasToPngBlob(canvas) {
@@ -118,8 +96,17 @@ async function blobToPng(blob) {
   }
 }
-async function screenshotUrlToPngBlob(url) {
-  // 复制时始终拉取原图,避免复制到缩略图
+async function screenshotUrlToPngBlob(url, filename) {
+  // 优先使用页面上已渲染完成的 <img>(避免额外请求;也更容易满足剪贴板“用户手势”限制)
+  const imgEl = findRenderedShotImage(filename)
+  if (imgEl) {
+    try {
+      return await imageElementToPngBlob(imgEl)
+    } catch {
+      // fallback to fetch
+    }
+  }
   const resp = await fetch(url, { credentials: 'include', cache: 'no-store' })
   if (!resp.ok) throw new Error('fetch_failed')
   const blob = await resp.blob()
@@ -144,8 +131,6 @@ async function onClearAll() {
   if (res?.success) {
     ElMessage.success(`已清空(删除 ${res?.deleted || 0} 张)`)
     screenshots.value = []
-    total.value = 0
-    currentPage.value = 1
     previewOpen.value = false
     return
   }
@@ -170,9 +155,8 @@ async function onDelete(item) {
   try {
     const res = await deleteScreenshot(item.filename)
     if (res?.success) {
+      screenshots.value = screenshots.value.filter((s) => s.filename !== item.filename)
       if (previewUrl.value.includes(encodeURIComponent(item.filename))) previewOpen.value = false
-      if (currentPage.value > 1 && screenshots.value.length <= 1) currentPage.value -= 1
-      await load()
       ElMessage.success('已删除')
       return
     }
@@ -199,15 +183,15 @@ async function copyImage(item) {
   try {
     await navigator.clipboard.write([
       new ClipboardItem({
-        'image/png': screenshotUrlToPngBlob(url),
+        'image/png': screenshotUrlToPngBlob(url, item.filename),
       }),
     ])
   } catch {
-    const pngBlob = await screenshotUrlToPngBlob(url)
+    const pngBlob = await screenshotUrlToPngBlob(url, item.filename)
     await navigator.clipboard.write([new ClipboardItem({ 'image/png': pngBlob })])
   }
   ElMessage.success('图片已复制到剪贴板')
-} catch {
+} catch (e) {
   try {
     if (navigator.clipboard && typeof navigator.clipboard.writeText === 'function') {
       await navigator.clipboard.writeText(`${window.location.origin}${url}`)
@@ -239,22 +223,22 @@ onMounted(load)
 <div class="panel-title">截图管理</div>
 <div class="panel-actions">
   <el-button :loading="loading" @click="load">刷新</el-button>
-  <el-button type="danger" plain :disabled="total === 0" @click="onClearAll">清空全部</el-button>
+  <el-button type="danger" plain :disabled="screenshots.length === 0" @click="onClearAll">清空全部</el-button>
 </div>
 </div>
 <el-skeleton v-if="loading" :rows="6" animated />
 <template v-else>
-  <el-empty v-if="total === 0" description="暂无截图" />
+  <el-empty v-if="screenshots.length === 0" description="暂无截图" />
   <div v-else class="grid">
     <el-card v-for="item in screenshots" :key="item.filename" shadow="never" class="shot-card" :body-style="{ padding: '0' }">
       <img
         class="shot-img"
-        :src="buildThumbUrl(item.filename)"
+        :src="buildUrl(item.filename)"
         :alt="item.display_name || item.filename"
+        :data-shot-filename="item.filename"
         loading="lazy"
-        @error="onThumbError($event, item)"
         @click="openPreview(item)"
       />
       <div class="shot-body">
@@ -268,17 +252,6 @@ onMounted(load)
       </div>
     </el-card>
   </div>
-  <div v-if="total > pageSize" class="pagination">
-    <el-pagination
-      v-model:current-page="currentPage"
-      :page-size="pageSize"
-      :total="total"
-      layout="prev, pager, next, jumper, ->, total"
-      @current-change="onPageChange"
-    />
-    <div class="page-hint app-muted"> {{ currentPage }} / {{ totalPages }} </div>
-  </div>
 </template>
 <el-dialog v-model="previewOpen" :title="previewTitle" width="min(920px, 94vw)">
@@ -326,19 +299,6 @@ onMounted(load)
   align-items: start;
 }
-.pagination {
-  margin-top: 12px;
-  display: flex;
-  align-items: center;
-  justify-content: space-between;
-  gap: 10px;
-  flex-wrap: wrap;
-}
-.page-hint {
-  font-size: 12px;
-}
 .shot-card {
   border-radius: 14px;
   border: 1px solid var(--app-border);


@@ -1,10 +1,11 @@
 import { createRouter, createWebHistory } from 'vue-router'
-import AppLayout from '../layouts/AppLayout.vue'
 const LoginPage = () => import('../pages/LoginPage.vue')
 const RegisterPage = () => import('../pages/RegisterPage.vue')
 const ResetPasswordPage = () => import('../pages/ResetPasswordPage.vue')
 const VerifyResultPage = () => import('../pages/VerifyResultPage.vue')
+const AppLayout = () => import('../layouts/AppLayout.vue')
 const AccountsPage = () => import('../pages/AccountsPage.vue')
 const SchedulesPage = () => import('../pages/SchedulesPage.vue')


@@ -1,153 +0,0 @@
function ensurePublicKeyOptions(options) {
  if (!options || typeof options !== 'object') {
    throw new Error('Passkey参数无效')
  }
  return options.publicKey && typeof options.publicKey === 'object' ? options.publicKey : options
}

function base64UrlToUint8Array(base64url) {
  const value = String(base64url || '')
  const padding = '='.repeat((4 - (value.length % 4)) % 4)
  const base64 = (value + padding).replace(/-/g, '+').replace(/_/g, '/')
  const raw = window.atob(base64)
  const bytes = new Uint8Array(raw.length)
  for (let i = 0; i < raw.length; i += 1) {
    bytes[i] = raw.charCodeAt(i)
  }
  return bytes
}

function uint8ArrayToBase64Url(input) {
  const bytes = input instanceof ArrayBuffer ? new Uint8Array(input) : new Uint8Array(input || [])
  let binary = ''
  for (let i = 0; i < bytes.length; i += 1) {
    binary += String.fromCharCode(bytes[i])
  }
  return window
    .btoa(binary)
    .replace(/\+/g, '-')
    .replace(/\//g, '_')
    .replace(/=+$/g, '')
}

function toCreationOptions(rawOptions) {
  const options = ensurePublicKeyOptions(rawOptions)
  const normalized = {
    ...options,
    challenge: base64UrlToUint8Array(options.challenge),
    user: {
      ...options.user,
      id: base64UrlToUint8Array(options.user?.id),
    },
  }
  if (Array.isArray(options.excludeCredentials)) {
    normalized.excludeCredentials = options.excludeCredentials.map((item) => ({
      ...item,
      id: base64UrlToUint8Array(item.id),
    }))
  }
  return normalized
}

function toRequestOptions(rawOptions) {
  const options = ensurePublicKeyOptions(rawOptions)
  const normalized = {
    ...options,
    challenge: base64UrlToUint8Array(options.challenge),
  }
  if (Array.isArray(options.allowCredentials)) {
    normalized.allowCredentials = options.allowCredentials.map((item) => ({
      ...item,
      id: base64UrlToUint8Array(item.id),
    }))
  }
  return normalized
}

function serializeCredential(credential) {
  if (!credential) return null
  const response = credential.response || {}
  const output = {
    id: credential.id,
    rawId: uint8ArrayToBase64Url(credential.rawId),
    type: credential.type,
    authenticatorAttachment: credential.authenticatorAttachment || undefined,
    response: {},
  }
  if (response.clientDataJSON) {
    output.response.clientDataJSON = uint8ArrayToBase64Url(response.clientDataJSON)
  }
  if (response.attestationObject) {
    output.response.attestationObject = uint8ArrayToBase64Url(response.attestationObject)
  }
  if (response.authenticatorData) {
    output.response.authenticatorData = uint8ArrayToBase64Url(response.authenticatorData)
  }
  if (response.signature) {
    output.response.signature = uint8ArrayToBase64Url(response.signature)
  }
  if (response.userHandle) {
    output.response.userHandle = uint8ArrayToBase64Url(response.userHandle)
  } else {
    output.response.userHandle = null
  }
  if (typeof response.getTransports === 'function') {
    output.response.transports = response.getTransports() || []
  }
  return output
}

export function isPasskeyAvailable() {
  return typeof window !== 'undefined' && window.isSecureContext && !!window.PublicKeyCredential && !!navigator.credentials
}

function isMiuiBrowser() {
  const ua = String(window?.navigator?.userAgent || '')
  return /MiuiBrowser|XiaoMi\/MiuiBrowser/i.test(ua)
}

export function getPasskeyClientErrorMessage(error, actionLabel = 'Passkey操作') {
  const name = String(error?.name || '').trim()
  const message = String(error?.message || '').trim()
  if (name === 'NotAllowedError') {
    return `${actionLabel}未完成(可能已取消、超时或设备未响应)`
  }
  if (name === 'NotReadableError') {
    if (/credential manager/i.test(message) && isMiuiBrowser()) {
      return '当前小米浏览器与系统凭据管理器兼容性较差,请改用系统 Chrome 或 Edge 后重试。'
    }
    if (/credential manager/i.test(message)) {
      return '系统凭据管理器返回异常,请确认已设置系统锁屏并改用系统 Chrome/Edge 后重试。'
    }
    return message || `${actionLabel}失败(设备读取异常)`
  }
  if (name === 'SecurityError') {
    return '当前环境安全策略不满足 Passkey 要求,请确认使用 HTTPS 且证书有效。'
  }
  return message || `${actionLabel}失败`
}

export async function createPasskey(rawOptions) {
  const publicKey = toCreationOptions(rawOptions)
  const credential = await navigator.credentials.create({ publicKey })
  return serializeCredential(credential)
}

export async function authenticateWithPasskey(rawOptions) {
  const publicKey = toRequestOptions(rawOptions)
  const credential = await navigator.credentials.get({ publicKey })
  return serializeCredential(credential)
}
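Reviewer note on the deleted helpers above: `base64UrlToUint8Array` and `uint8ArrayToBase64Url` hand-roll base64url padding and alphabet translation for WebAuthn payloads. The same round-trip can be sketched with Python's stdlib (function names here are illustrative, not project code):

```python
import base64


def b64url_decode(value: str) -> bytes:
    """Decode unpadded base64url (as WebAuthn JSON carries it) to raw bytes."""
    padding = "=" * ((4 - len(value) % 4) % 4)
    return base64.urlsafe_b64decode(value + padding)


def b64url_encode(raw: bytes) -> str:
    """Encode raw bytes to unpadded base64url."""
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode("ascii")
```

The unpadded base64url alphabet is why the JS code strips `=` and swaps `+/` for `-_` before calling `atob`/`btoa`.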


@@ -1,62 +1,13 @@
 import { defineConfig } from 'vite'
-import { fileURLToPath } from 'node:url'
 import vue from '@vitejs/plugin-vue'
-import AutoImport from 'unplugin-auto-import/vite'
-import Components from 'unplugin-vue-components/vite'
-import { ElementPlusResolver } from 'unplugin-vue-components/resolvers'
 export default defineConfig({
-  plugins: [
-    vue(),
-    AutoImport({
-      resolvers: [ElementPlusResolver({ importStyle: 'css' })],
-      dts: false,
-    }),
-    Components({
-      resolvers: [ElementPlusResolver({ importStyle: 'css' })],
-      dts: false,
-    }),
-  ],
+  plugins: [vue()],
   base: './',
   build: {
     outDir: '../static/app',
     emptyOutDir: true,
     manifest: true,
-    cssCodeSplit: true,
-    chunkSizeWarningLimit: 800,
-    rollupOptions: {
-      input: {
-        app: fileURLToPath(new URL('./index.html', import.meta.url)),
-        login: fileURLToPath(new URL('./login.html', import.meta.url)),
-      },
-      output: {
-        manualChunks(id) {
-          if (!id.includes('node_modules')) return undefined
-          if (
-            id.includes('/node_modules/vue/') ||
-            id.includes('/node_modules/@vue/') ||
-            id.includes('/node_modules/vue-router/') ||
-            id.includes('/node_modules/pinia/')
-          ) {
-            return 'vendor-vue'
-          }
-          if (id.includes('/node_modules/axios/')) {
-            return 'vendor-axios'
-          }
-          if (
-            id.includes('/node_modules/socket.io-client/') ||
-            id.includes('/node_modules/engine.io-client/') ||
-            id.includes('/node_modules/socket.io-parser/')
-          ) {
-            return 'vendor-realtime'
-          }
-          return undefined
-        },
-      },
-    },
   },
 })

app.py

@@ -14,13 +14,11 @@ from __future__ import annotations
 import atexit
 import os
-import re
 import signal
 import sys
 import threading
-import time
-from flask import Flask, g, jsonify, redirect, request, send_from_directory, session, url_for
+from flask import Flask, jsonify, redirect, request, send_from_directory, session, url_for
 from flask_login import LoginManager, current_user
 from flask_socketio import SocketIO
@@ -36,8 +34,7 @@ from realtime.status_push import status_push_worker
 from routes import register_blueprints
 from security import init_security_middleware
 from services.checkpoints import init_checkpoint_manager
-from services.maintenance import start_cleanup_scheduler, start_database_maintenance_scheduler, start_kdocs_monitor
-from services.request_metrics import record_request_metric
+from services.maintenance import start_cleanup_scheduler, start_kdocs_monitor
 from services.models import User
 from services.runtime import init_runtime
 from services.scheduler import scheduled_task_worker
@@ -49,13 +46,12 @@ from services.tasks import get_task_scheduler
 # 设置时区为中国标准时间(CST, UTC+8)
 os.environ["TZ"] = "Asia/Shanghai"
-_TZSET_ERROR = None
 try:
     import time as _time
     _time.tzset()
-except Exception as e:
-    _TZSET_ERROR = e
+except Exception:
+    pass
 def _sigchld_handler(signum, frame):
@@ -88,86 +84,20 @@ if not app.config.get("SECRET_KEY"):
 cors_origins = os.environ.get("CORS_ALLOWED_ORIGINS", "").strip()
 cors_allowed = [o.strip() for o in cors_origins.split(",") if o.strip()] if cors_origins else []
-_socketio_preferred_mode = (os.environ.get("SOCKETIO_ASYNC_MODE", "eventlet") or "").strip().lower()
-if _socketio_preferred_mode in {"", "auto"}:
-    _socketio_preferred_mode = None
-_socketio_fallback_reason = None
-try:
-    socketio = SocketIO(
-        app,
-        cors_allowed_origins=cors_allowed if cors_allowed else None,
-        async_mode=_socketio_preferred_mode,
-        ping_timeout=60,
-        ping_interval=25,
-        logger=False,
-        engineio_logger=False,
-    )
-except Exception as socketio_error:
-    _socketio_fallback_reason = str(socketio_error)
-    socketio = SocketIO(
-        app,
-        cors_allowed_origins=cors_allowed if cors_allowed else None,
-        async_mode="threading",
-        ping_timeout=60,
-        ping_interval=25,
-        logger=False,
-        engineio_logger=False,
-    )
+socketio = SocketIO(
+    app,
+    cors_allowed_origins=cors_allowed if cors_allowed else None,
+    async_mode="threading",
+    ping_timeout=60,
+    ping_interval=25,
+    logger=False,
+    engineio_logger=False,
+)
 init_logging(log_level=config.LOG_LEVEL, log_file=config.LOG_FILE)
 logger = get_logger("app")
-if _TZSET_ERROR is not None:
-    logger.warning(f"设置时区失败,将继续使用系统默认时区: {_TZSET_ERROR}")
-if _socketio_fallback_reason:
-    logger.warning(f"[SocketIO] 初始化失败,已回退 threading 模式: {_socketio_fallback_reason}")
-logger.info(f"[SocketIO] 当前 async_mode: {socketio.async_mode}")
 init_runtime(socketio=socketio, logger=logger)
-_API_DIAGNOSTIC_LOG = str(os.environ.get("API_DIAGNOSTIC_LOG", "0")).strip().lower() in {
-    "1",
-    "true",
-    "yes",
-    "on",
-}
-_API_DIAGNOSTIC_SLOW_MS = max(0.0, float(os.environ.get("API_DIAGNOSTIC_SLOW_MS", "0") or 0.0))
-def _is_api_or_health_path(path: str) -> bool:
-    raw = str(path or "")
-    return raw.startswith("/api/") or raw.startswith("/yuyx/api/") or raw == "/health"
-def _request_uses_https() -> bool:
-    try:
-        if bool(request.is_secure):
-            return True
-    except Exception as e:
-        logger.debug(f"检查 request.is_secure 失败: {e}")
-    try:
-        forwarded_proto = str(request.headers.get("X-Forwarded-Proto", "") or "").split(",", 1)[0].strip().lower()
-        if forwarded_proto == "https":
-            return True
-    except Exception as e:
-        logger.debug(f"检查 X-Forwarded-Proto 失败: {e}")
-    return False
-_SECURITY_RESPONSE_HEADERS = {
-    "X-Content-Type-Options": "nosniff",
-    "X-Frame-Options": "SAMEORIGIN",
-    "Referrer-Policy": "strict-origin-when-cross-origin",
-    "Permissions-Policy": "camera=(), microphone=(), geolocation=(), payment=()",
-}
-_SECURITY_CSP_HEADER = str(os.environ.get("SECURITY_CONTENT_SECURITY_POLICY", "") or "").strip()
-_HASHED_STATIC_ASSET_RE = re.compile(r".*-[a-z0-9_-]{8,}\.(?:js|css|woff2?|ttf|svg|png|jpe?g|webp)$", re.IGNORECASE)
 # 初始化安全中间件(需在其他中间件/Blueprint 之前注册)
 init_security_middleware(app)
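The removed `_request_uses_https` helper trusts `request.is_secure` first and otherwise inspects only the first entry of a possibly comma-separated `X-Forwarded-Proto` header. A framework-free sketch of that decision (parameter names are assumptions, not the project's API):

```python
def request_uses_https(is_secure: bool, forwarded_proto_header: str = "") -> bool:
    """Decide whether the original client connection was HTTPS.

    Prefers the server's own TLS knowledge (is_secure); falls back to the
    first proxy hop recorded in X-Forwarded-Proto, case-insensitively.
    """
    if is_secure:
        return True
    # Proxies may append values: "https, http" means the client hop was HTTPS.
    first = (forwarded_proto_header or "").split(",", 1)[0].strip().lower()
    return first == "https"
```

Taking only the first list entry matters: the left-most value is the hop closest to the client, so a chain like `https, http` still counts as HTTPS.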
@@ -201,90 +131,33 @@ def unauthorized():
     return redirect(url_for("pages.login_page", next=request.url))
-@app.before_request
-def track_request_start_time():
-    g.request_start_perf = time.perf_counter()
 @app.before_request
 def enforce_csrf_protection():
     if request.method in {"GET", "HEAD", "OPTIONS"}:
         return
     if request.path.startswith("/static/"):
         return
-    # 登录挑战相关路由豁免 CSRF(会话尚未建立前需要可用)
-    csrf_exempt_paths = {
-        "/yuyx/api/login",
-        "/api/login",
-        "/api/auth/login",
-        "/api/generate_captcha",
-        "/yuyx/api/passkeys/login/options",
-        "/yuyx/api/passkeys/login/verify",
-        "/api/passkeys/login/options",
-        "/api/passkeys/login/verify",
-    }
-    if request.path in csrf_exempt_paths:
+    if not (current_user.is_authenticated or "admin_id" in session):
         return
     token = request.headers.get("X-CSRF-Token") or request.form.get("csrf_token")
     if not token or not validate_csrf_token(token):
         return jsonify({"error": "CSRF token missing or invalid"}), 403
-def _record_request_metric_after_response(response) -> None:
-    try:
-        started = float(getattr(g, "request_start_perf", 0.0) or 0.0)
-        if started <= 0:
-            return
-        duration_ms = max(0.0, (time.perf_counter() - started) * 1000.0)
-        path = request.path or "/"
-        method = request.method or "GET"
-        status_code = int(getattr(response, "status_code", 0) or 0)
-        is_api = _is_api_or_health_path(path)
-        record_request_metric(
-            path=path,
-            method=method,
-            status_code=status_code,
-            duration_ms=duration_ms,
-            is_api=is_api,
-        )
-        if _API_DIAGNOSTIC_LOG and is_api:
-            is_slow = _API_DIAGNOSTIC_SLOW_MS > 0 and duration_ms >= _API_DIAGNOSTIC_SLOW_MS
-            is_server_error = status_code >= 500
-            if is_slow or is_server_error:
-                logger.warning(
-                    f"[API-DIAG] {method} {path} -> {status_code} ({duration_ms:.1f}ms)"
-                )
-    except Exception as e:
-        logger.debug(f"记录请求指标失败: {e}")
 @app.after_request
 def ensure_csrf_cookie(response):
-    if not request.path.startswith("/static/"):
-        token = session.get("csrf_token")
-        if not token:
-            token = generate_csrf_token()
-        response.set_cookie(
-            "csrf_token",
-            token,
-            httponly=False,
-            secure=bool(config.SESSION_COOKIE_SECURE),
-            samesite=config.SESSION_COOKIE_SAMESITE,
-        )
-    for header_name, header_value in _SECURITY_RESPONSE_HEADERS.items():
-        response.headers.setdefault(header_name, header_value)
-    if _request_uses_https():
-        response.headers.setdefault("Strict-Transport-Security", "max-age=31536000; includeSubDomains")
-    if _SECURITY_CSP_HEADER:
-        response.headers.setdefault("Content-Security-Policy", _SECURITY_CSP_HEADER)
-    _record_request_metric_after_response(response)
+    if request.path.startswith("/static/"):
+        return response
+    token = session.get("csrf_token")
+    if not token:
+        token = generate_csrf_token()
+    response.set_cookie(
+        "csrf_token",
+        token,
+        httponly=False,
+        secure=bool(config.SESSION_COOKIE_SECURE),
+        samesite=config.SESSION_COOKIE_SAMESITE,
+    )
     return response
@@ -296,38 +169,10 @@ def serve_static(filename):
     if not is_safe_path("static", filename):
         return jsonify({"error": "非法路径"}), 403
-    lowered = filename.lower()
-    is_asset_file = "/assets/" in lowered or lowered.endswith((".js", ".css", ".woff", ".woff2", ".ttf", ".svg"))
-    is_hashed_asset = bool(_HASHED_STATIC_ASSET_RE.match(lowered))
-    cache_ttl = 3600
-    if is_asset_file:
-        cache_ttl = 604800  # 7天
-    if is_hashed_asset:
-        cache_ttl = 31536000  # 365天
-    if request.args.get("v"):
-        cache_ttl = max(cache_ttl, 604800)
-    response = send_from_directory("static", filename, max_age=cache_ttl, conditional=True)
-    # 协商缓存:确保存在 ETag,并基于 If-None-Match/If-Modified-Since 返回 304
-    try:
-        response.add_etag(overwrite=False)
-    except Exception as e:
-        logger.debug(f"静态资源 ETag 设置失败((unknown)): {e}")
-    try:
-        response.make_conditional(request)
-    except Exception as e:
-        logger.debug(f"静态资源协商缓存处理失败((unknown)): {e}")
-    response.headers.setdefault("Vary", "Accept-Encoding")
-    if is_hashed_asset:
-        response.headers["Cache-Control"] = f"public, max-age={cache_ttl}, immutable"
-    elif is_asset_file:
-        response.headers["Cache-Control"] = f"public, max-age={cache_ttl}, stale-while-revalidate=60"
-    else:
-        response.headers["Cache-Control"] = f"public, max-age={cache_ttl}"
+    response = send_from_directory("static", filename)
+    response.headers["Cache-Control"] = "no-store, no-cache, must-revalidate, max-age=0"
+    response.headers["Pragma"] = "no-cache"
+    response.headers["Expires"] = "0"
     return response
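The removed caching ladder picks a TTL from the filename: 1 hour by default, 7 days for asset files, 365 days for content-hashed assets. The selection logic extracted into a standalone sketch, reusing the same `_HASHED_STATIC_ASSET_RE` pattern from the diff (constant names here are illustrative):

```python
import re

# Same pattern as the removed _HASHED_STATIC_ASSET_RE in app.py:
# a "-<hash>" suffix of 8+ [a-z0-9_-] chars before a static-asset extension.
HASHED_RE = re.compile(
    r".*-[a-z0-9_-]{8,}\.(?:js|css|woff2?|ttf|svg|png|jpe?g|webp)$",
    re.IGNORECASE,
)


def cache_ttl_for(filename: str) -> int:
    """TTL ladder: 3600s default, 604800s for assets, 31536000s for hashed assets."""
    lowered = filename.lower()
    is_asset = "/assets/" in lowered or lowered.endswith(
        (".js", ".css", ".woff", ".woff2", ".ttf", ".svg")
    )
    ttl = 3600
    if is_asset:
        ttl = 604800
    if HASHED_RE.match(lowered):
        ttl = 31536000  # hash in the name changes on every rebuild, so cache "forever"
    return ttl
```

Hashed assets are safe to mark `immutable` precisely because any content change produces a new filename, so stale caches can never serve wrong bytes.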
@@ -343,35 +188,35 @@ def cleanup_on_exit():
         for acc in accounts.values():
             if getattr(acc, "is_running", False):
                 acc.should_stop = True
-    except Exception as e:
-        logger.warning(f"停止运行中任务失败: {e}")
+    except Exception:
+        pass
     logger.info("- 停止任务调度器...")
     try:
         scheduler = get_task_scheduler()
         scheduler.shutdown(timeout=5)
-    except Exception as e:
-        logger.warning(f"停止任务调度器失败: {e}")
+    except Exception:
+        pass
     logger.info("- 关闭截图线程池...")
     try:
         shutdown_browser_worker_pool()
-    except Exception as e:
-        logger.warning(f"关闭截图线程池失败: {e}")
+    except Exception:
+        pass
     logger.info("- 关闭邮件队列...")
     try:
         email_service.shutdown_email_queue()
-    except Exception as e:
-        logger.warning(f"关闭邮件队列失败: {e}")
+    except Exception:
+        pass
     logger.info("- 关闭数据库连接池...")
     try:
         db_pool._pool.close_all() if db_pool._pool else None
-    except Exception as e:
-        logger.warning(f"关闭数据库连接池失败: {e}")
+    except Exception:
+        pass
-    logger.info("[OK] 资源清理完成")
+    logger.info("✓ 资源清理完成")
 # ==================== 启动入口(保持 python app.py 可用) ====================
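In `cleanup_on_exit` above, the left side replaces silent `except: pass` with logged warnings while still letting every shutdown step run. That best-effort pattern in isolation (the helper and step names are illustrative, not project code):

```python
import logging

logger = logging.getLogger("cleanup-demo")


def run_cleanup_steps(steps):
    """Run each (name, fn) shutdown step; log failures but never abort the sequence."""
    failures = []
    for name, fn in steps:
        try:
            fn()
        except Exception as e:  # shutdown must not stop at the first broken resource
            failures.append(name)
            logger.warning("%s failed: %s", name, e)
    return failures
```

Logging instead of swallowing costs nothing at exit and makes "why did the pool never close" debuggable from the logs.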
@@ -383,93 +228,6 @@ def _signal_handler(sig, frame):
     sys.exit(0)
-def _cleanup_stale_task_state() -> None:
-    logger.info("清理遗留任务状态...")
-    try:
-        from services.state import safe_get_active_task_ids, safe_remove_task, safe_remove_task_status
-        for _, accounts in safe_iter_user_accounts_items():
-            for acc in accounts.values():
-                if not getattr(acc, "is_running", False):
-                    continue
-                acc.is_running = False
-                acc.should_stop = False
-                acc.status = "未开始"
-        for account_id in list(safe_get_active_task_ids()):
-            safe_remove_task(account_id)
-            safe_remove_task_status(account_id)
-        logger.info("[OK] 遗留任务状态已清理")
-    except Exception as e:
-        logger.warning(f"清理遗留任务状态失败: {e}")
-def _init_optional_email_service() -> None:
-    try:
-        email_service.init_email_service()
-        logger.info("[OK] 邮件服务已初始化")
-    except Exception as e:
-        logger.warning(f"警告: 邮件服务初始化失败: {e}")
-def _load_and_apply_scheduler_limits() -> None:
-    try:
-        system_config = database.get_system_config() or {}
-        max_concurrent_global = int(system_config.get("max_concurrent_global", config.MAX_CONCURRENT_GLOBAL))
-        max_concurrent_per_account = int(system_config.get("max_concurrent_per_account", config.MAX_CONCURRENT_PER_ACCOUNT))
-        get_task_scheduler().update_limits(max_global=max_concurrent_global, max_per_user=max_concurrent_per_account)
-        logger.info(f"[OK] 已加载并发配置: 全局={max_concurrent_global}, 单账号={max_concurrent_per_account}")
-    except Exception as e:
-        logger.warning(f"警告: 加载并发配置失败,使用默认值: {e}")
-def _start_background_workers() -> None:
-    logger.info("启动定时任务调度器...")
-    threading.Thread(target=scheduled_task_worker, daemon=True, name="scheduled-task-worker").start()
-    logger.info("[OK] 定时任务调度器已启动")
-    logger.info("[OK] 状态推送线程已启动(默认2秒/次)")
-    threading.Thread(target=status_push_worker, daemon=True, name="status-push-worker").start()
-def _init_screenshot_worker_pool() -> None:
-    try:
-        pool_size = int((database.get_system_config() or {}).get("max_screenshot_concurrent", 3))
-    except Exception:
-        pool_size = 3
-    try:
-        logger.info(f"初始化截图线程池({pool_size}个worker,按需启动执行环境,空闲5分钟后自动释放)...")
-        init_browser_worker_pool(pool_size=pool_size)
-        logger.info("[OK] 截图线程池初始化完成")
-    except Exception as e:
-        logger.warning(f"警告: 截图线程池初始化失败: {e}")
-def _warmup_api_connection() -> None:
-    logger.info("预热 API 连接...")
-    try:
-        from api_browser import warmup_api_connection
-        threading.Thread(
-            target=warmup_api_connection,
-            kwargs={"log_callback": lambda msg: logger.info(msg)},
-            daemon=True,
-            name="api-warmup",
-        ).start()
-    except Exception as e:
-        logger.warning(f"API 预热失败: {e}")
-def _log_startup_urls() -> None:
-    logger.info("服务器启动中...")
-    logger.info(f"用户访问地址: http://{config.SERVER_HOST}:{config.SERVER_PORT}")
-    logger.info(f"后台管理地址: http://{config.SERVER_HOST}:{config.SERVER_PORT}/yuyx")
-    logger.info("默认管理员: admin (首次运行密码写入 data/default_admin_credentials.txt)")
-    logger.info("=" * 60)
 if __name__ == "__main__":
     atexit.register(cleanup_on_exit)
     signal.signal(signal.SIGINT, _signal_handler)
@@ -481,27 +239,88 @@ if __name__ == "__main__":
     database.init_database()
     init_checkpoint_manager()
-    logger.info("[OK] 任务断点管理器已初始化")
-    _cleanup_stale_task_state()
-    _init_optional_email_service()
+    logger.info("✓ 任务断点管理器已初始化")
+    # 【新增】容器重启时清理遗留的任务状态
+    logger.info("清理遗留任务状态...")
+    try:
+        from services.state import safe_remove_task, safe_get_active_task_ids, safe_remove_task_status
+        # 重置所有账号的运行状态
+        for _, accounts in safe_iter_user_accounts_items():
+            for acc in accounts.values():
+                if getattr(acc, "is_running", False):
+                    acc.is_running = False
+                    acc.should_stop = False
+                    acc.status = "未开始"
+        # 清理活跃任务句柄
+        for account_id in list(safe_get_active_task_ids()):
+            safe_remove_task(account_id)
+            safe_remove_task_status(account_id)
+        logger.info("✓ 遗留任务状态已清理")
+    except Exception as e:
+        logger.warning(f"清理遗留任务状态失败: {e}")
+    try:
+        email_service.init_email_service()
+        logger.info("✓ 邮件服务已初始化")
+    except Exception as e:
+        logger.warning(f"警告: 邮件服务初始化失败: {e}")
     start_cleanup_scheduler()
-    start_database_maintenance_scheduler()
     start_kdocs_monitor()
-    _load_and_apply_scheduler_limits()
-    _start_background_workers()
-    _log_startup_urls()
-    _init_screenshot_worker_pool()
-    _warmup_api_connection()
-    run_kwargs = {
-        "host": config.SERVER_HOST,
-        "port": config.SERVER_PORT,
-        "debug": config.DEBUG,
-    }
-    if str(socketio.async_mode) == "threading":
-        run_kwargs["allow_unsafe_werkzeug"] = True
-    socketio.run(app, **run_kwargs)
+    try:
+        system_config = database.get_system_config() or {}
+        max_concurrent_global = int(system_config.get("max_concurrent_global", config.MAX_CONCURRENT_GLOBAL))
+        max_concurrent_per_account = int(system_config.get("max_concurrent_per_account", config.MAX_CONCURRENT_PER_ACCOUNT))
+        get_task_scheduler().update_limits(max_global=max_concurrent_global, max_per_user=max_concurrent_per_account)
+        logger.info(f"✓ 已加载并发配置: 全局={max_concurrent_global}, 单账号={max_concurrent_per_account}")
+    except Exception as e:
+        logger.warning(f"警告: 加载并发配置失败,使用默认值: {e}")
+    logger.info("启动定时任务调度器...")
+    threading.Thread(target=scheduled_task_worker, daemon=True, name="scheduled-task-worker").start()
+    logger.info("✓ 定时任务调度器已启动")
+    logger.info("✓ 状态推送线程已启动(默认2秒/次)")
+    threading.Thread(target=status_push_worker, daemon=True, name="status-push-worker").start()
+    logger.info("服务器启动中...")
+    logger.info(f"用户访问地址: http://{config.SERVER_HOST}:{config.SERVER_PORT}")
+    logger.info(f"后台管理地址: http://{config.SERVER_HOST}:{config.SERVER_PORT}/yuyx")
+    logger.info("默认管理员: admin (首次运行随机密码见日志)")
+    logger.info("=" * 60)
+    try:
+        pool_size = int((database.get_system_config() or {}).get("max_screenshot_concurrent", 3))
+    except Exception:
+        pool_size = 3
+    try:
+        logger.info(f"初始化截图线程池({pool_size}个worker,按需启动执行环境,空闲5分钟后自动释放)...")
+        init_browser_worker_pool(pool_size=pool_size)
+        logger.info("✓ 截图线程池初始化完成")
+    except Exception as e:
+        logger.warning(f"警告: 截图线程池初始化失败: {e}")
+    # 预热 API 连接(后台进行,不阻塞启动)
+    logger.info("预热 API 连接...")
+    try:
+        from api_browser import warmup_api_connection
+        import threading
+        threading.Thread(
+            target=warmup_api_connection,
+            kwargs={"log_callback": lambda msg: logger.info(msg)},
+            daemon=True,
+            name="api-warmup",
+        ).start()
+    except Exception as e:
+        logger.warning(f"API 预热失败: {e}")
+    socketio.run(
+        app,
+        host=config.SERVER_HOST,
+        port=config.SERVER_PORT,
+        debug=config.DEBUG,
+        allow_unsafe_werkzeug=True,
+    )


@@ -14,62 +14,38 @@ from urllib.parse import urlsplit, urlunsplit
 # Bug fix: 添加警告日志,避免静默失败
 try:
     from dotenv import load_dotenv
-    env_path = Path(__file__).parent / ".env"
+    env_path = Path(__file__).parent / '.env'
     if env_path.exists():
         load_dotenv(dotenv_path=env_path)
-        print(f"[OK] 已加载环境变量文件: {env_path}")
+        print(f"✓ 已加载环境变量文件: {env_path}")
 except ImportError:
     # python-dotenv未安装,记录警告
     import sys
-    print(
-        "⚠ 警告: python-dotenv未安装,将不会加载.env文件。如需使用.env文件,请运行: pip install python-dotenv",
-        file=sys.stderr,
-    )
+    print("⚠ 警告: python-dotenv未安装,将不会加载.env文件。如需使用.env文件,请运行: pip install python-dotenv", file=sys.stderr)
 # 常量定义
-SECRET_KEY_FILE = "data/secret_key.txt"
+SECRET_KEY_FILE = 'data/secret_key.txt'
-def _ensure_private_dir(path: str) -> None:
-    if not path:
-        return
-    os.makedirs(path, mode=0o700, exist_ok=True)
-    try:
-        os.chmod(path, 0o700)
-    except Exception:
-        pass
-def _ensure_private_file(path: str) -> None:
-    try:
-        os.chmod(path, 0o600)
-    except Exception:
-        pass
 def get_secret_key():
     """获取SECRET_KEY,优先环境变量"""
     # 优先从环境变量读取
-    secret_key = os.environ.get("SECRET_KEY")
+    secret_key = os.environ.get('SECRET_KEY')
     if secret_key:
         return secret_key
     # 从文件读取
     if os.path.exists(SECRET_KEY_FILE):
-        _ensure_private_file(SECRET_KEY_FILE)
-        with open(SECRET_KEY_FILE, "r") as f:
+        with open(SECRET_KEY_FILE, 'r') as f:
             return f.read().strip()
     # 生成新的
     new_key = os.urandom(24).hex()
-    _ensure_private_dir("data")
-    with open(SECRET_KEY_FILE, "w") as f:
+    os.makedirs('data', exist_ok=True)
+    with open(SECRET_KEY_FILE, 'w') as f:
         f.write(new_key)
-    _ensure_private_file(SECRET_KEY_FILE)
-    print(f"[OK] 已生成新的SECRET_KEY并保存到 {SECRET_KEY_FILE}")
+    print(f"✓ 已生成新的SECRET_KEY并保存到 {SECRET_KEY_FILE}")
     return new_key
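The left side hardens `get_secret_key` by creating `data/` with mode 0700 and chmod-ing the key file to owner-only. A condensed sketch of the env-var → file → generate fallback with that hardening (function name and path handling are illustrative, not project code):

```python
import os


def get_or_create_key(path: str) -> str:
    """Return a secret key: env var wins, then the stored file, else generate one."""
    if os.environ.get("SECRET_KEY"):
        return os.environ["SECRET_KEY"]
    if os.path.exists(path):
        with open(path) as f:
            return f.read().strip()
    key = os.urandom(24).hex()  # 24 random bytes -> 48 hex chars
    os.makedirs(os.path.dirname(path) or ".", mode=0o700, exist_ok=True)
    with open(path, "w") as f:
        f.write(key)
    os.chmod(path, 0o600)  # left-side hardening: owner-only read/write
    return key
```

Persisting the key keeps sessions valid across restarts; the permission tightening matters because anyone who can read this file can forge session cookies.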
@@ -109,30 +85,27 @@ class Config:
     # ==================== 会话安全配置 ====================
     # 安全修复: 根据环境自动选择安全配置
     # 生产环境(FLASK_ENV=production)时自动启用更严格的安全设置
-    _is_production = os.environ.get("FLASK_ENV", "production") == "production"
-    _force_secure = os.environ.get("SESSION_COOKIE_SECURE", "").lower() == "true"
-    SESSION_COOKIE_SECURE = _force_secure or (
-        _is_production and os.environ.get("HTTPS_ENABLED", "false").lower() == "true"
-    )
+    _is_production = os.environ.get('FLASK_ENV', 'production') == 'production'
+    _force_secure = os.environ.get('SESSION_COOKIE_SECURE', '').lower() == 'true'
+    SESSION_COOKIE_SECURE = _force_secure or (_is_production and os.environ.get('HTTPS_ENABLED', 'false').lower() == 'true')
     SESSION_COOKIE_HTTPONLY = True  # 防止XSS攻击
     # SameSite配置:HTTPS环境使用None,HTTP环境使用Lax
-    SESSION_COOKIE_SAMESITE = "None" if SESSION_COOKIE_SECURE else "Lax"
+    SESSION_COOKIE_SAMESITE = 'None' if SESSION_COOKIE_SECURE else 'Lax'
     # 自定义cookie名称,避免与其他应用冲突
-    SESSION_COOKIE_NAME = os.environ.get("SESSION_COOKIE_NAME", "zsglpt_session")
+    SESSION_COOKIE_NAME = os.environ.get('SESSION_COOKIE_NAME', 'zsglpt_session')
     # Cookie路径,确保整个应用都能访问
-    SESSION_COOKIE_PATH = "/"
-    PERMANENT_SESSION_LIFETIME = timedelta(hours=int(os.environ.get("SESSION_LIFETIME_HOURS", "24")))
+    SESSION_COOKIE_PATH = '/'
+    PERMANENT_SESSION_LIFETIME = timedelta(hours=int(os.environ.get('SESSION_LIFETIME_HOURS', '24')))
     # 安全警告检查
     @classmethod
     def check_security_warnings(cls):
         """检查安全配置,输出警告"""
         import sys
         warnings = []
-        env = os.environ.get("FLASK_ENV", "production")
-        if env == "production":
+        env = os.environ.get('FLASK_ENV', 'production')
+        if env == 'production':
             if not cls.SESSION_COOKIE_SECURE:
                 warnings.append("SESSION_COOKIE_SECURE=False: 生产环境建议启用HTTPS并设置SESSION_COOKIE_SECURE=true")
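The cookie logic above derives `SESSION_COOKIE_SECURE` from an explicit override or from production plus `HTTPS_ENABLED`, and couples `SameSite=None` to the Secure flag (browsers reject `SameSite=None` without `Secure`). The same derivation as a pure function over an env mapping, for illustration only:

```python
def cookie_flags(env: dict) -> tuple[bool, str]:
    """Return (secure, samesite) exactly as Config derives them from the environment."""
    is_production = env.get("FLASK_ENV", "production") == "production"
    force_secure = env.get("SESSION_COOKIE_SECURE", "").lower() == "true"
    secure = force_secure or (
        is_production and env.get("HTTPS_ENABLED", "false").lower() == "true"
    )
    # SameSite=None is only legal alongside Secure; fall back to Lax over HTTP.
    samesite = "None" if secure else "Lax"
    return secure, samesite
```

Note the asymmetry: `SESSION_COOKIE_SECURE=true` forces Secure in any environment, but `HTTPS_ENABLED=true` only does so in production.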
@@ -143,125 +116,106 @@ class Config:
print("", file=sys.stderr) print("", file=sys.stderr)
# ==================== 数据库配置 ==================== # ==================== 数据库配置 ====================
DB_FILE = os.environ.get("DB_FILE", "data/app_data.db") DB_FILE = os.environ.get('DB_FILE', 'data/app_data.db')
DB_POOL_SIZE = int(os.environ.get("DB_POOL_SIZE", "5")) DB_POOL_SIZE = int(os.environ.get('DB_POOL_SIZE', '5'))
DB_CONNECT_TIMEOUT_SECONDS = int(os.environ.get("DB_CONNECT_TIMEOUT_SECONDS", "10"))
DB_BUSY_TIMEOUT_MS = int(os.environ.get("DB_BUSY_TIMEOUT_MS", "10000"))
DB_CACHE_SIZE_KB = int(os.environ.get("DB_CACHE_SIZE_KB", "8192"))
DB_WAL_AUTOCHECKPOINT_PAGES = int(os.environ.get("DB_WAL_AUTOCHECKPOINT_PAGES", "1000"))
DB_MMAP_SIZE_MB = int(os.environ.get("DB_MMAP_SIZE_MB", "256"))
DB_LOCK_RETRY_COUNT = int(os.environ.get("DB_LOCK_RETRY_COUNT", "3"))
DB_LOCK_RETRY_BASE_MS = int(os.environ.get("DB_LOCK_RETRY_BASE_MS", "50"))
DB_SLOW_QUERY_MS = int(os.environ.get("DB_SLOW_QUERY_MS", "120"))
DB_SLOW_QUERY_SQL_MAX_LEN = int(os.environ.get("DB_SLOW_QUERY_SQL_MAX_LEN", "240"))
DB_SLOW_SQL_WINDOW_SECONDS = int(os.environ.get("DB_SLOW_SQL_WINDOW_SECONDS", "86400"))
DB_SLOW_SQL_TOP_LIMIT = int(os.environ.get("DB_SLOW_SQL_TOP_LIMIT", "12"))
DB_SLOW_SQL_RECENT_LIMIT = int(os.environ.get("DB_SLOW_SQL_RECENT_LIMIT", "50"))
DB_SLOW_SQL_MAX_EVENTS = int(os.environ.get("DB_SLOW_SQL_MAX_EVENTS", "20000"))
DB_PRAGMA_OPTIMIZE_INTERVAL_SECONDS = int(os.environ.get("DB_PRAGMA_OPTIMIZE_INTERVAL_SECONDS", "21600"))
DB_ANALYZE_INTERVAL_SECONDS = int(os.environ.get("DB_ANALYZE_INTERVAL_SECONDS", "86400"))
DB_WAL_CHECKPOINT_INTERVAL_SECONDS = int(os.environ.get("DB_WAL_CHECKPOINT_INTERVAL_SECONDS", "43200"))
DB_WAL_CHECKPOINT_MODE = os.environ.get("DB_WAL_CHECKPOINT_MODE", "PASSIVE")
# ==================== Browser configuration ====================
SCREENSHOTS_DIR = os.environ.get("SCREENSHOTS_DIR", "截图")
COOKIES_DIR = os.environ.get("COOKIES_DIR", "data/cookies")
KDOCS_LOGIN_STATE_FILE = os.environ.get("KDOCS_LOGIN_STATE_FILE", "data/kdocs_login_state.json")
# ==================== Announcement image upload configuration ====================
ANNOUNCEMENT_IMAGE_DIR = os.environ.get("ANNOUNCEMENT_IMAGE_DIR", "static/announcements")
ALLOWED_ANNOUNCEMENT_IMAGE_EXTENSIONS = {".png", ".jpg", ".jpeg", ".gif", ".webp"}
MAX_ANNOUNCEMENT_IMAGE_SIZE = int(os.environ.get("MAX_ANNOUNCEMENT_IMAGE_SIZE", "5242880"))  # 5MB
# ==================== Concurrency control configuration ====================
MAX_CONCURRENT_GLOBAL = int(os.environ.get("MAX_CONCURRENT_GLOBAL", "2"))
MAX_CONCURRENT_PER_ACCOUNT = int(os.environ.get("MAX_CONCURRENT_PER_ACCOUNT", "1"))
# ==================== Log cache configuration ====================
MAX_LOGS_PER_USER = int(os.environ.get("MAX_LOGS_PER_USER", "100"))
MAX_TOTAL_LOGS = int(os.environ.get("MAX_TOTAL_LOGS", "1000"))
# ==================== Memory/cache cleanup configuration ====================
USER_ACCOUNTS_EXPIRE_SECONDS = int(os.environ.get("USER_ACCOUNTS_EXPIRE_SECONDS", "3600"))
BATCH_TASK_EXPIRE_SECONDS = int(os.environ.get("BATCH_TASK_EXPIRE_SECONDS", "21600"))  # default 6 hours
PENDING_RANDOM_EXPIRE_SECONDS = int(os.environ.get("PENDING_RANDOM_EXPIRE_SECONDS", "7200"))  # default 2 hours
# ==================== Captcha configuration ====================
MAX_CAPTCHA_ATTEMPTS = int(os.environ.get("MAX_CAPTCHA_ATTEMPTS", "5"))
CAPTCHA_EXPIRE_SECONDS = int(os.environ.get("CAPTCHA_EXPIRE_SECONDS", "300"))
# ==================== IP rate-limit configuration ====================
MAX_IP_ATTEMPTS_PER_HOUR = int(os.environ.get("MAX_IP_ATTEMPTS_PER_HOUR", "10"))
IP_LOCK_DURATION = int(os.environ.get("IP_LOCK_DURATION", "3600"))  # seconds
IP_RATE_LIMIT_LOGIN_MAX = int(os.environ.get("IP_RATE_LIMIT_LOGIN_MAX", "20"))
IP_RATE_LIMIT_LOGIN_WINDOW_SECONDS = int(os.environ.get("IP_RATE_LIMIT_LOGIN_WINDOW_SECONDS", "60"))
IP_RATE_LIMIT_REGISTER_MAX = int(os.environ.get("IP_RATE_LIMIT_REGISTER_MAX", "10"))
IP_RATE_LIMIT_REGISTER_WINDOW_SECONDS = int(os.environ.get("IP_RATE_LIMIT_REGISTER_WINDOW_SECONDS", "3600"))
IP_RATE_LIMIT_EMAIL_MAX = int(os.environ.get("IP_RATE_LIMIT_EMAIL_MAX", "20"))
IP_RATE_LIMIT_EMAIL_WINDOW_SECONDS = int(os.environ.get("IP_RATE_LIMIT_EMAIL_WINDOW_SECONDS", "3600"))
# ==================== Timeout configuration ====================
PAGE_LOAD_TIMEOUT = int(os.environ.get("PAGE_LOAD_TIMEOUT", "60000"))  # milliseconds
DEFAULT_TIMEOUT = int(os.environ.get("DEFAULT_TIMEOUT", "60000"))  # milliseconds
# ==================== Knowledge management platform configuration ====================
ZSGL_LOGIN_URL = os.environ.get("ZSGL_LOGIN_URL", "https://postoa.aidunsoft.com/admin/login.aspx")
ZSGL_INDEX_URL_PATTERN = os.environ.get("ZSGL_INDEX_URL_PATTERN", "index.aspx")
ZSGL_BASE_URL = os.environ.get("ZSGL_BASE_URL") or _derive_base_url_from_full_url(
    ZSGL_LOGIN_URL, "https://postoa.aidunsoft.com"
)
ZSGL_INDEX_URL = os.environ.get("ZSGL_INDEX_URL") or _derive_sibling_url(
    ZSGL_LOGIN_URL,
    ZSGL_INDEX_URL_PATTERN,
    f"{ZSGL_BASE_URL}/admin/{ZSGL_INDEX_URL_PATTERN}",
)
MAX_CONCURRENT_CONTEXTS = int(os.environ.get("MAX_CONCURRENT_CONTEXTS", "100"))
# ==================== Server configuration ====================
SERVER_HOST = os.environ.get("SERVER_HOST", "0.0.0.0")
SERVER_PORT = int(os.environ.get("SERVER_PORT", "51233"))
# ==================== SocketIO configuration ====================
SOCKETIO_CORS_ALLOWED_ORIGINS = os.environ.get("SOCKETIO_CORS_ALLOWED_ORIGINS", "")
# ==================== Site base URL configuration ====================
# Used to build verification links in emails, etc.
BASE_URL = os.environ.get("BASE_URL", "http://localhost:51233")
# ==================== Logging configuration ====================
# Security fix: default to INFO in production to avoid leaking sensitive debug details
LOG_LEVEL = os.environ.get("LOG_LEVEL", "INFO")
LOG_FILE = os.environ.get("LOG_FILE", "logs/app.log")
LOG_MAX_BYTES = int(os.environ.get("LOG_MAX_BYTES", "10485760"))  # 10MB
LOG_BACKUP_COUNT = int(os.environ.get("LOG_BACKUP_COUNT", "5"))
# ==================== Security configuration ====================
DEBUG = os.environ.get("FLASK_DEBUG", "False").lower() == "true"
ALLOWED_SCREENSHOT_EXTENSIONS = {".png", ".jpg", ".jpeg"}
MAX_SCREENSHOT_SIZE = int(os.environ.get("MAX_SCREENSHOT_SIZE", "10485760"))  # 10MB
LOGIN_CAPTCHA_AFTER_FAILURES = int(os.environ.get("LOGIN_CAPTCHA_AFTER_FAILURES", "3"))
LOGIN_CAPTCHA_WINDOW_SECONDS = int(os.environ.get("LOGIN_CAPTCHA_WINDOW_SECONDS", "900"))
LOGIN_RATE_LIMIT_WINDOW_SECONDS = int(os.environ.get("LOGIN_RATE_LIMIT_WINDOW_SECONDS", "900"))
LOGIN_IP_MAX_ATTEMPTS = int(os.environ.get("LOGIN_IP_MAX_ATTEMPTS", "60"))
LOGIN_USERNAME_MAX_ATTEMPTS = int(os.environ.get("LOGIN_USERNAME_MAX_ATTEMPTS", "30"))
LOGIN_IP_USERNAME_MAX_ATTEMPTS = int(os.environ.get("LOGIN_IP_USERNAME_MAX_ATTEMPTS", "12"))
LOGIN_FAIL_DELAY_BASE_MS = int(os.environ.get("LOGIN_FAIL_DELAY_BASE_MS", "200"))
LOGIN_FAIL_DELAY_MAX_MS = int(os.environ.get("LOGIN_FAIL_DELAY_MAX_MS", "1200"))
LOGIN_ACCOUNT_LOCK_FAILURES = int(os.environ.get("LOGIN_ACCOUNT_LOCK_FAILURES", "6"))
LOGIN_ACCOUNT_LOCK_WINDOW_SECONDS = int(os.environ.get("LOGIN_ACCOUNT_LOCK_WINDOW_SECONDS", "900"))
LOGIN_ACCOUNT_LOCK_SECONDS = int(os.environ.get("LOGIN_ACCOUNT_LOCK_SECONDS", "600"))
LOGIN_SCAN_UNIQUE_USERNAME_THRESHOLD = int(os.environ.get("LOGIN_SCAN_UNIQUE_USERNAME_THRESHOLD", "8"))
LOGIN_SCAN_WINDOW_SECONDS = int(os.environ.get("LOGIN_SCAN_WINDOW_SECONDS", "600"))
LOGIN_SCAN_COOLDOWN_SECONDS = int(os.environ.get("LOGIN_SCAN_COOLDOWN_SECONDS", "600"))
EMAIL_RATE_LIMIT_MAX = int(os.environ.get("EMAIL_RATE_LIMIT_MAX", "6"))
EMAIL_RATE_LIMIT_WINDOW_SECONDS = int(os.environ.get("EMAIL_RATE_LIMIT_WINDOW_SECONDS", "3600"))
LOGIN_ALERT_ENABLED = os.environ.get("LOGIN_ALERT_ENABLED", "true").lower() == "true"
LOGIN_ALERT_MIN_INTERVAL_SECONDS = int(os.environ.get("LOGIN_ALERT_MIN_INTERVAL_SECONDS", "3600"))
ADMIN_REAUTH_WINDOW_SECONDS = int(os.environ.get("ADMIN_REAUTH_WINDOW_SECONDS", "600"))
SECURITY_ENABLED = os.environ.get("SECURITY_ENABLED", "true").lower() == "true"
SECURITY_LOG_LEVEL = os.environ.get("SECURITY_LOG_LEVEL", "INFO")
HONEYPOT_ENABLED = os.environ.get("HONEYPOT_ENABLED", "true").lower() == "true"
AUTO_BAN_ENABLED = os.environ.get("AUTO_BAN_ENABLED", "true").lower() == "true"

@classmethod
def validate(cls):
@@ -285,38 +239,12 @@ class Config:
    if cls.DB_POOL_SIZE < 1:
        errors.append("DB_POOL_SIZE必须大于0")
    if cls.DB_CONNECT_TIMEOUT_SECONDS < 1:
        errors.append("DB_CONNECT_TIMEOUT_SECONDS必须大于0")
    if cls.DB_BUSY_TIMEOUT_MS < 100:
        errors.append("DB_BUSY_TIMEOUT_MS必须至少100毫秒")
    if cls.DB_CACHE_SIZE_KB < 1024:
        errors.append("DB_CACHE_SIZE_KB建议至少1024")
    if cls.DB_WAL_AUTOCHECKPOINT_PAGES < 100:
        errors.append("DB_WAL_AUTOCHECKPOINT_PAGES建议至少100")
    if cls.DB_MMAP_SIZE_MB < 0:
        errors.append("DB_MMAP_SIZE_MB不能为负数")
    if cls.DB_LOCK_RETRY_COUNT < 0:
        errors.append("DB_LOCK_RETRY_COUNT不能为负数")
    if cls.DB_LOCK_RETRY_BASE_MS < 10:
        errors.append("DB_LOCK_RETRY_BASE_MS建议至少10毫秒")
    if cls.DB_SLOW_QUERY_MS < 0:
        errors.append("DB_SLOW_QUERY_MS不能为负数")
    if cls.DB_SLOW_QUERY_SQL_MAX_LEN < 80:
        errors.append("DB_SLOW_QUERY_SQL_MAX_LEN建议至少80")
    if cls.DB_SLOW_SQL_WINDOW_SECONDS < 600:
        errors.append("DB_SLOW_SQL_WINDOW_SECONDS建议至少600")
    if cls.DB_SLOW_SQL_TOP_LIMIT < 5:
        errors.append("DB_SLOW_SQL_TOP_LIMIT建议至少5")
    if cls.DB_SLOW_SQL_RECENT_LIMIT < 10:
        errors.append("DB_SLOW_SQL_RECENT_LIMIT建议至少10")
    if cls.DB_SLOW_SQL_MAX_EVENTS < cls.DB_SLOW_SQL_RECENT_LIMIT:
        errors.append("DB_SLOW_SQL_MAX_EVENTS必须不小于DB_SLOW_SQL_RECENT_LIMIT")

    # Validate logging configuration
    if cls.LOG_LEVEL not in ["DEBUG", "INFO", "WARNING", "ERROR", "CRITICAL"]:
        errors.append(f"LOG_LEVEL无效: {cls.LOG_LEVEL}")
    if cls.SECURITY_LOG_LEVEL not in ["DEBUG", "INFO", "WARNING", "ERROR", "CRITICAL"]:
        errors.append(f"SECURITY_LOG_LEVEL无效: {cls.SECURITY_LOG_LEVEL}")

    return errors
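The accumulate-errors-then-return pattern in `validate()` — collect every configuration problem instead of raising on the first one — can be exercised on its own. The sketch below mirrors two of the checks above over a plain dict; `validate_config` and its keys are illustrative, not the project's API:

```python
def validate_config(env: dict) -> list[str]:
    """Collect every configuration problem instead of failing on the first one."""
    errors = []
    pool_size = int(env.get("DB_POOL_SIZE", "5"))
    if pool_size < 1:
        errors.append("DB_POOL_SIZE must be greater than 0")
    log_level = env.get("LOG_LEVEL", "INFO")
    if log_level not in {"DEBUG", "INFO", "WARNING", "ERROR", "CRITICAL"}:
        errors.append(f"LOG_LEVEL is invalid: {log_level}")
    return errors

# A valid environment produces no errors; a broken one lists each problem.
print(validate_config({"DB_POOL_SIZE": "5"}))                        # []
print(validate_config({"DB_POOL_SIZE": "0", "LOG_LEVEL": "LOUD"}))   # two entries
```

Returning a list (rather than raising) lets the startup code print all misconfigurations in one pass, which is what the `__main__` self-test further down does.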
@@ -342,14 +270,12 @@ class Config:
class DevelopmentConfig(Config):
    """Development environment configuration"""

    DEBUG = True
    # Do not override SESSION_COOKIE_SECURE; use the parent class's env-driven value


class ProductionConfig(Config):
    """Production environment configuration"""

    DEBUG = False
    # Do not override SESSION_COOKIE_SECURE; use the parent class's env-driven value
    # For HTTPS deployments, set SESSION_COOKIE_SECURE=true in the environment
@@ -357,27 +283,26 @@ class ProductionConfig(Config):
class TestingConfig(Config):
    """Testing environment configuration"""

    DEBUG = True
    TESTING = True
    DB_FILE = "data/test_app_data.db"


# Select the configuration class from the environment
config_map = {
    "development": DevelopmentConfig,
    "production": ProductionConfig,
    "testing": TestingConfig,
}


def get_config():
    """Return the configuration for the current environment"""
    env = os.environ.get("FLASK_ENV", "production")
    return config_map.get(env, ProductionConfig)


if __name__ == "__main__":
    # Configuration validation self-test
    config = get_config()
    errors = config.validate()
@@ -387,5 +312,5 @@ if __name__ == "__main__":
        for error in errors:
            print(f"{error}")
    else:
        print("[OK] 配置验证通过")

    config.print_config()


@@ -7,7 +7,6 @@
import logging
import os
import re
from logging.handlers import RotatingFileHandler
from datetime import datetime
import threading
@@ -46,31 +45,6 @@ class ColoredFormatter(logging.Formatter):
        return result
class SensitiveDataFilter(logging.Filter):
    """Apply uniform redaction to sensitive fields in log output."""

    _EMAIL_RE = re.compile(r"\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b")
    _PAIR_PATTERNS = (
        (re.compile(r"(?i)\b(password|passwd|pwd)\s*[:=]\s*([^,\s]+)"), r"\1=[REDACTED]"),
        (re.compile(r"(?i)\b(token|csrf_token|session|authorization)\s*[:=]\s*([^,\s]+)"), r"\1=[REDACTED]"),
        (re.compile(r"(?i)\b(user_id|admin_id|token_id)\s*=\s*\d+\b"), r"\1=[MASKED]"),
    )

    def filter(self, record: logging.LogRecord) -> bool:
        try:
            message = record.getMessage()
            sanitized = self._EMAIL_RE.sub("[REDACTED_EMAIL]", message)
            for pattern, replacement in self._PAIR_PATTERNS:
                sanitized = pattern.sub(replacement, sanitized)
            if sanitized != message:
                record.msg = sanitized
                record.args = ()
        except Exception:
            # A filter failure must never block normal log output
            pass
        return True
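A `logging.Filter` of this kind is easy to verify in isolation: attach it to a logger, emit a message with a credential in it, and check what reaches the handler. The sketch below uses a single password pattern (the project's filter covers more fields); `RedactFilter` is a reduced stand-in, not the project's class:

```python
import io
import logging
import re

class RedactFilter(logging.Filter):
    _PWD = re.compile(r"(?i)\b(password)\s*[:=]\s*([^,\s]+)")

    def filter(self, record: logging.LogRecord) -> bool:
        msg = record.getMessage()
        cleaned = self._PWD.sub(r"\1=[REDACTED]", msg)
        if cleaned != msg:
            record.msg = cleaned
            record.args = ()   # args were already folded into the message
        return True            # never drop the record, only rewrite it

stream = io.StringIO()
handler = logging.StreamHandler(stream)
logger = logging.getLogger("redact-demo")
logger.addHandler(handler)
logger.addFilter(RedactFilter())
logger.warning("login failed password=hunter2 for user")
print(stream.getvalue().strip())  # login failed password=[REDACTED] for user
```

Note the filter always returns `True`: the goal is to rewrite records, not suppress them, which matches the "must never block normal log output" comment above.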
def setup_logger(name='app', level=None, log_file=None, max_bytes=10*1024*1024, backup_count=5):
    """
    Set up a logger
@@ -100,17 +74,6 @@ def setup_logger(name='app', level=None, log_file=None, max_bytes=10*1024*1024,
    # Clear existing handlers (avoid duplicates)
    logger.handlers.clear()
    logger.filters.clear()

    # Global redaction of sensitive log data (on by default; disable with LOG_REDACT_SENSITIVE=0)
    redact_enabled = str(os.environ.get("LOG_REDACT_SENSITIVE", "1")).strip().lower() in {
        "1",
        "true",
        "yes",
        "on",
    }
    if redact_enabled:
        logger.addFilter(SensitiveDataFilter())
    # Log formats
    detailed_formatter = logging.Formatter(
@@ -318,9 +281,9 @@ def init_logging(log_level='INFO', log_file='logs/app.log'):
    # Create the audit logger (already created inside AuditLogger)
    try:
        get_logger('app').info("[OK] 日志系统初始化完成")
    except Exception:
        print("[OK] 日志系统初始化完成")


if __name__ == '__main__':


@@ -9,7 +9,6 @@ import os
import re
import time
import hashlib
import hmac
import secrets
import ipaddress
import socket
@@ -79,13 +78,7 @@ def sanitize_filename(filename):
class IPRateLimiter:
    """IP request rate limiter"""
    def __init__(
        self,
        max_attempts=10,
        window_seconds=3600,
        lock_duration=3600,
        max_tracked_ips=20000,
    ):
        """
        Initialize the rate limiter
@@ -97,7 +90,6 @@ class IPRateLimiter:
        self.max_attempts = max_attempts
        self.window_seconds = window_seconds
        self.lock_duration = lock_duration
        self.max_tracked_ips = max(1000, int(max_tracked_ips or 0))
        # IP attempt records: {ip: [(timestamp, success), ...]}
        self._attempts = defaultdict(list)
@@ -105,47 +97,6 @@ class IPRateLimiter:
        self._locked = {}
        self._lock = threading.Lock()
    def _prune_if_oversized(self, now_ts: float) -> None:
        """Cap the internal maps so they cannot grow without bound under high-rate random-IP attacks."""
        tracked = len(self._attempts) + len(self._locked)
        if tracked <= self.max_tracked_ips:
            return

        cutoff_time = now_ts - self.window_seconds
        for ip in list(self._attempts.keys()):
            self._attempts[ip] = [
                (ts, succ) for ts, succ in self._attempts[ip]
                if ts > cutoff_time
            ]
            if not self._attempts[ip]:
                del self._attempts[ip]
        for ip in list(self._locked.keys()):
            if now_ts >= self._locked[ip]:
                del self._locked[ip]

        tracked = len(self._attempts) + len(self._locked)
        if tracked <= self.max_tracked_ips:
            return

        # Prefer evicting the IPs whose most recent attempt is oldest.
        overflow = tracked - self.max_tracked_ips
        oldest = []
        for ip, attempt_items in self._attempts.items():
            if attempt_items:
                oldest.append((attempt_items[-1][0], ip))
            else:
                oldest.append((0.0, ip))
        oldest.sort(key=lambda item: item[0])
        removed = 0
        for _, ip in oldest:
            self._attempts.pop(ip, None)
            self._locked.pop(ip, None)
            removed += 1
            if removed >= overflow:
                break
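The size cap above guards against unbounded memory growth when an attacker rotates through random source IPs. The core of the policy — evict the least-recently-seen IPs first once the map overflows — can be shown standalone. This sketch stores bare timestamps where the project stores `(timestamp, success)` pairs, and `prune_oldest` is an illustrative name:

```python
import time

def prune_oldest(attempts: dict, max_tracked: int) -> None:
    """Drop the least-recently-seen IPs until the map fits under max_tracked."""
    overflow = len(attempts) - max_tracked
    if overflow <= 0:
        return
    # Sort by the timestamp of each IP's most recent attempt, oldest first.
    oldest = sorted(attempts, key=lambda ip: attempts[ip][-1] if attempts[ip] else 0.0)
    for ip in oldest[:overflow]:
        del attempts[ip]

now = time.time()
attempts = {"10.0.0.1": [now - 300], "10.0.0.2": [now - 10], "10.0.0.3": [now - 600]}
prune_oldest(attempts, max_tracked=2)
print(sorted(attempts))  # ['10.0.0.1', '10.0.0.2'] — the stalest IP was evicted
```

Evicting by last-seen time (rather than at random) keeps the IPs most likely to matter for an in-progress rate-limit decision.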
    def is_locked(self, ip_address):
        """
        Check whether the IP is currently locked
@@ -178,7 +129,6 @@ class IPRateLimiter:
""" """
with self._lock: with self._lock:
now = time.time() now = time.time()
self._prune_if_oversized(now)
# 清理过期记录 # 清理过期记录
cutoff_time = now - self.window_seconds cutoff_time = now - self.window_seconds
@@ -247,9 +197,6 @@ class IPRateLimiter:
# Global IP rate limiter instance
ip_rate_limiter = IPRateLimiter()
_TRUTHY_VALUES = {"1", "true", "yes", "on"}
_TRUST_PROXY_HEADERS = str(os.environ.get("TRUST_PROXY_HEADERS", "false") or "").strip().lower() in _TRUTHY_VALUES
def require_ip_not_locked(f):
    """Decorator: reject requests from locked IPs"""
@@ -407,19 +354,7 @@ def generate_csrf_token():
def validate_csrf_token(token):
    """Validate the CSRF token"""
    expected = session.get("csrf_token")
    if (token is None) or (expected is None):
        return False
    provided_text = str(token or "")
    expected_text = str(expected or "")
    if (not provided_text) or (not expected_text):
        return False
    return hmac.compare_digest(
        provided_text.encode("utf-8"),
        expected_text.encode("utf-8"),
    )
# ==================== Content security ====================
@@ -508,7 +443,7 @@ def get_client_ip(trust_proxy=False):
""" """
# 安全说明X-Forwarded-For 可被伪造 # 安全说明X-Forwarded-For 可被伪造
# 仅在确认请求来自可信代理时才使用代理头 # 仅在确认请求来自可信代理时才使用代理头
if trust_proxy and _TRUST_PROXY_HEADERS: if trust_proxy:
if request.headers.get('X-Forwarded-For'): if request.headers.get('X-Forwarded-For'):
return request.headers.get('X-Forwarded-For').split(',')[0].strip() return request.headers.get('X-Forwarded-For').split(',')[0].strip()
elif request.headers.get('X-Real-IP'): elif request.headers.get('X-Real-IP'):
@@ -518,90 +453,30 @@ def get_client_ip(trust_proxy=False):
    return request.remote_addr
def _load_trusted_proxy_networks():
    """Load the trusted proxy CIDR list."""
    default_cidrs = "127.0.0.1/32,::1/128"
    raw = str(os.environ.get("TRUSTED_PROXY_CIDRS", default_cidrs) or "").strip()
    if not raw:
        return []
    networks = []
    for segment in raw.split(","):
        cidr_text = str(segment or "").strip()
        if not cidr_text:
            continue
        try:
            networks.append(ipaddress.ip_network(cidr_text, strict=False))
        except ValueError:
            continue
    return networks


_TRUSTED_PROXY_NETWORKS = _load_trusted_proxy_networks()


def _parse_ip_address(candidate: str):
    try:
        return ipaddress.ip_address(str(candidate or "").strip())
    except ValueError:
        return None


def _is_trusted_proxy_ip(ip_obj) -> bool:
    if ip_obj is None:
        return False
    for network in _TRUSTED_PROXY_NETWORKS:
        try:
            if ip_obj.version != network.version:
                continue
            if ip_obj in network:
                return True
        except Exception:
            continue
    return False


def _extract_real_ip_from_forwarded_chain() -> str | None:
    """Walk X-Forwarded-For from the right to find the nearest non-proxy source IP."""
    forwarded = str(request.headers.get("X-Forwarded-For", "") or "")
    candidates = []
    for segment in forwarded.split(","):
        ip_text = str(segment or "").strip()
        ip_obj = _parse_ip_address(ip_text)
        if ip_obj is None:
            continue
        candidates.append((str(ip_obj), ip_obj))

    # If X-Forwarded-For is present, strip trusted proxies right-to-left.
    if candidates:
        for ip_text, ip_obj in reversed(candidates):
            if _is_trusted_proxy_ip(ip_obj):
                continue
            return ip_text
        return candidates[0][0]

    real_ip_text = str(request.headers.get("X-Real-IP", "") or "").strip()
    real_ip_obj = _parse_ip_address(real_ip_text)
    if real_ip_obj is None:
        return None
    return str(real_ip_obj)
def get_rate_limit_ip() -> str:
    """Resolve the real client IP behind a trusted proxy, for rate limiting / risk control."""
    remote_addr = request.remote_addr or ""
    if not _TRUST_PROXY_HEADERS:
        return remote_addr

    remote_ip = _parse_ip_address(remote_addr)
    if remote_ip is None:
        return remote_addr

    # Only trust forwarded headers when the request actually comes from a trusted proxy.
    if _is_trusted_proxy_ip(remote_ip):
        forwarded_real_ip = _extract_real_ip_from_forwarded_chain()
        if forwarded_real_ip:
            return forwarded_real_ip
    return remote_addr

browser_installer.py (new executable file, +214 lines)

@@ -0,0 +1,214 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Automatic browser download/installation module.
Checks whether a Playwright browser exists locally and installs one if missing.
"""
import os
import sys
import shutil
import subprocess
from pathlib import Path

# Resolve the browser install path (supports Docker and local environments)
# Docker: the PLAYWRIGHT_BROWSERS_PATH env var is already set to /ms-playwright
# Local: use Playwright's default path
if 'PLAYWRIGHT_BROWSERS_PATH' in os.environ:
    BROWSERS_PATH = os.environ['PLAYWRIGHT_BROWSERS_PATH']
else:
    # Windows: %USERPROFILE%\AppData\Local\ms-playwright
    # Linux: ~/.cache/ms-playwright
    if sys.platform == 'win32':
        BROWSERS_PATH = str(Path.home() / "AppData" / "Local" / "ms-playwright")
    else:
        BROWSERS_PATH = str(Path.home() / ".cache" / "ms-playwright")
    os.environ["PLAYWRIGHT_BROWSERS_PATH"] = BROWSERS_PATH


class BrowserInstaller:
    """Browser installer"""

    def __init__(self, log_callback=None):
        """
        Initialize the installer

        Args:
            log_callback: logging callback function
        """
        self.log_callback = log_callback

    def log(self, message):
        """Emit a log message"""
        if self.log_callback:
            self.log_callback(message)
        else:
            try:
                print(message)
            except UnicodeEncodeError:
                # If printing Unicode fails, substitute the special characters
                safe_message = message.replace('✓', '[OK]').replace('✗', '[X]')
                print(safe_message)

    def check_playwright_installed(self):
        """Check whether Playwright is installed"""
        try:
            import playwright
            self.log("✓ Playwright已安装")
            return True
        except ImportError:
            self.log("✗ Playwright未安装")
            return False

    def check_chromium_installed(self):
        """Check whether the Chromium browser is installed"""
        try:
            from playwright.sync_api import sync_playwright
            # Try launching the browser to verify it is usable
            with sync_playwright() as p:
                try:
                    # Quick check with a short timeout
                    browser = p.chromium.launch(headless=True, timeout=5000)
                    browser.close()
                    self.log("✓ Chromium浏览器已安装且可用")
                    return True
                except Exception as e:
                    error_msg = str(e)
                    self.log(f"✗ Chromium浏览器不可用: {error_msg}")
                    # Check whether the failure is a missing executable
                    if "Executable doesn't exist" in error_msg:
                        self.log("检测到浏览器文件缺失,需要重新安装")
                    return False
        except Exception as e:
            self.log(f"✗ 检查浏览器时出错: {str(e)}")
            return False

    def install_chromium(self):
        """Install the Chromium browser"""
        try:
            self.log("正在安装 Chromium 浏览器...")

            # Locate the playwright executable
            playwright_cli = None
            possible_paths = [
                os.path.join(os.path.dirname(sys.executable), "Scripts", "playwright.exe"),
                os.path.join(os.path.dirname(sys.executable), "playwright.exe"),
                os.path.join(os.path.dirname(sys.executable), "Scripts", "playwright"),
                os.path.join(os.path.dirname(sys.executable), "playwright"),
                "playwright",  # on the system PATH
            ]
            for path in possible_paths:
                if os.path.exists(path) or shutil.which(path):
                    playwright_cli = path
                    break

            # If the playwright CLI was found, invoke it directly
            if playwright_cli:
                self.log(f"使用 Playwright CLI: {playwright_cli}")
                result = subprocess.run(
                    [playwright_cli, "install", "chromium"],
                    capture_output=True,
                    text=True,
                    timeout=300
                )
            else:
                # Detect whether this is a Nuitka-compiled binary
                is_nuitka = hasattr(sys, 'frozen') or '__compiled__' in globals()
                if is_nuitka:
                    self.log("检测到 Nuitka 编译环境")
                    self.log("✗ 无法找到 playwright CLI 工具")
                    self.log("请手动运行: playwright install chromium")
                    return False
                else:
                    # Fall back to python -m
                    result = subprocess.run(
                        [sys.executable, "-m", "playwright", "install", "chromium"],
                        capture_output=True,
                        text=True,
                        timeout=300
                    )

            if result.returncode == 0:
                self.log("✓ Chromium浏览器安装成功")
                return True
            else:
                self.log(f"✗ 浏览器安装失败: {result.stderr}")
                return False
        except subprocess.TimeoutExpired:
            self.log("✗ 浏览器安装超时")
            return False
        except Exception as e:
            self.log(f"✗ 浏览器安装出错: {str(e)}")
            return False

    def auto_install(self):
        """
        Detect and install the required environment automatically

        Returns:
            whether everything is installed (or was already installed)
        """
        self.log("=" * 60)
        self.log("检查浏览器环境...")
        self.log("=" * 60)

        # 1. Check that Playwright itself is installed
        if not self.check_playwright_installed():
            self.log("✗ Playwright未安装无法继续")
            self.log("请确保程序包含 Playwright 库")
            return False

        # 2. Check that the Chromium browser is installed
        if not self.check_chromium_installed():
            self.log("\n未检测到Chromium浏览器开始自动安装...")
            # Install the browser
            if not self.install_chromium():
                self.log("✗ 浏览器安装失败")
                self.log("\n您可以尝试以下方法:")
                self.log("1. 手动执行: playwright install chromium")
                self.log("2. 检查网络连接后重试")
                self.log("3. 检查防火墙设置")
                return False

        self.log("\n" + "=" * 60)
        self.log("✓ 浏览器环境检查完成,一切就绪!")
        self.log("=" * 60 + "\n")
        return True


def check_and_install_browser(log_callback=None):
    """
    Convenience helper: check for and install the browser

    Args:
        log_callback: logging callback function

    Returns:
        whether the check/install succeeded
    """
    installer = BrowserInstaller(log_callback)
    return installer.auto_install()


# Self-test
if __name__ == "__main__":
    print("浏览器自动安装工具")
    print("=" * 60)
    installer = BrowserInstaller()
    success = installer.auto_install()
    if success:
        print("\n✓ 安装成功!您现在可以运行主程序了。")
    else:
        print("\n✗ 安装失败,请查看上方错误信息。")
    print("=" * 60)


@@ -9,88 +9,10 @@ import time
from typing import Callable, Optional, Dict, Any

# Security fix: extract magic numbers into configurable constants
BROWSER_IDLE_TIMEOUT = int(os.environ.get("BROWSER_IDLE_TIMEOUT", "300"))  # idle timeout (seconds), default 5 minutes
TASK_QUEUE_TIMEOUT = int(os.environ.get("TASK_QUEUE_TIMEOUT", "10"))  # queue-get timeout (seconds)
TASK_QUEUE_MAXSIZE = int(os.environ.get("BROWSER_TASK_QUEUE_MAXSIZE", "200"))  # max queue length (0 = unlimited)
BROWSER_MAX_USE_COUNT = int(os.environ.get("BROWSER_MAX_USE_COUNT", "0"))  # max reuses per execution environment (0 = unlimited)
# New: adaptive resource configuration
ADAPTIVE_CONFIG = os.environ.get("BROWSER_ADAPTIVE_CONFIG", "1").strip().lower() in ("1", "true", "yes", "on")
LOAD_HISTORY_SIZE = 50  # load-history buffer size


class AdaptiveResourceManager:
    """Adaptive resource manager"""

    def __init__(self):
        self._load_history = []
        self._current_load = 0
        self._last_adjustment = 0
        self._adjustment_cooldown = 30  # adjustment cooldown: 30 seconds

    def record_task_interval(self, interval: float):
        """Record a task interval and update the load history"""
        if len(self._load_history) >= LOAD_HISTORY_SIZE:
            self._load_history.pop(0)
        self._load_history.append(interval)

        # Compute the current load
        if len(self._load_history) >= 2:
            recent_intervals = self._load_history[-10:]  # the last 10 tasks
            avg_interval = sum(recent_intervals) / len(recent_intervals)
            # The higher the load, the shorter the intervals
            self._current_load = 1.0 / max(avg_interval, 0.1)

    def should_adjust_timeout(self) -> bool:
        """Decide whether the timeout configuration should be adjusted"""
        if not ADAPTIVE_CONFIG:
            return False
        current_time = time.time()
        if current_time - self._last_adjustment < self._adjustment_cooldown:
            return False
        return len(self._load_history) >= 10  # require at least 10 data points

    def calculate_optimal_idle_timeout(self) -> int:
        """Compute the optimal idle timeout from the historical load"""
        if not self._load_history:
            return BROWSER_IDLE_TIMEOUT

        # Average over the most recent task intervals
        recent_intervals = self._load_history[-20:]  # the last 20 tasks
        if len(recent_intervals) < 2:
            return BROWSER_IDLE_TIMEOUT

        avg_interval = sum(recent_intervals) / len(recent_intervals)

        # Scale the timeout with the load:
        # shrink it under high load, stretch it under low load
        if self._current_load > 2.0:  # high load
            optimal_timeout = min(avg_interval * 1.5, 600)  # at most 10 minutes
        elif self._current_load < 0.5:  # low load
            optimal_timeout = min(avg_interval * 3.0, 1800)  # at most 30 minutes
        else:  # normal load
            optimal_timeout = min(avg_interval * 2.0, 900)  # at most 15 minutes

        return max(int(optimal_timeout), 60)  # at least 1 minute

    def get_optimal_queue_timeout(self) -> int:
        """Get the optimal queue timeout"""
        if not self._load_history:
            return TASK_QUEUE_TIMEOUT

        # Adjust the queue timeout by task frequency
        if self._current_load > 2.0:  # high load: wait less
            return max(TASK_QUEUE_TIMEOUT // 2, 3)
        elif self._current_load < 0.5:  # low load: waiting longer is fine
            return min(TASK_QUEUE_TIMEOUT * 2, 30)
        else:
            return TASK_QUEUE_TIMEOUT

    def record_adjustment(self):
        """Record that an adjustment was made"""
        self._last_adjustment = time.time()
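The timeout arithmetic in AdaptiveResourceManager boils down to: estimate load as the inverse of the mean recent interval, then scale the idle timeout against that mean. The same calculation as a freestanding function (the thresholds and caps mirror the class above; the function name is illustrative):

```python
def optimal_idle_timeout(intervals: list[float], default: int = 300) -> int:
    """Shrink the idle timeout under heavy load, stretch it when traffic is sparse."""
    if len(intervals) < 2:
        return default
    avg = sum(intervals) / len(intervals)
    load = 1.0 / max(avg, 0.1)  # tasks per second, floored to avoid divide-by-zero
    if load > 2.0:        # busy: release idle workers quickly
        timeout = min(avg * 1.5, 600)
    elif load < 0.5:      # quiet: keep workers warm longer
        timeout = min(avg * 3.0, 1800)
    else:                 # normal load
        timeout = min(avg * 2.0, 900)
    return max(int(timeout), 60)  # never drop below one minute

print(optimal_idle_timeout([0.2] * 10))   # busy stream -> 60 (the floor)
print(optimal_idle_timeout([30.0] * 10))  # sparse stream -> 90
```

The one-minute floor and the per-branch caps keep a burst of unusual intervals from driving the timeout to a pathological value in either direction.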
class BrowserWorker(threading.Thread):
@@ -114,13 +36,6 @@ class BrowserWorker(threading.Thread):
        self.failed_tasks = 0
        self.pre_warm = pre_warm
        self.last_activity_ts = 0.0
        self.task_start_time = 0.0

        # Initialize the adaptive resource manager
        if ADAPTIVE_CONFIG:
            self._adaptive_mgr = AdaptiveResourceManager()
        else:
            self._adaptive_mgr = None

    def log(self, message: str):
        """Log output"""
@@ -133,9 +48,9 @@ class BrowserWorker(threading.Thread):
"""创建截图执行环境(逻辑占位,无需真实浏览器)""" """创建截图执行环境(逻辑占位,无需真实浏览器)"""
created_at = time.time() created_at = time.time()
self.browser_instance = { self.browser_instance = {
"created_at": created_at, 'created_at': created_at,
"use_count": 0, 'use_count': 0,
"worker_id": self.worker_id, 'worker_id': self.worker_id,
} }
self.last_activity_ts = created_at self.last_activity_ts = created_at
self.log("截图执行环境就绪") self.log("截图执行环境就绪")
@@ -179,28 +94,14 @@ class BrowserWorker(threading.Thread):
            # Fetch a task from the queue (with a timeout, so stop signals and idle checks stay responsive)
            self.idle = True

            # Use the adaptive queue timeout
            queue_timeout = (
                self._adaptive_mgr.get_optimal_queue_timeout() if self._adaptive_mgr else TASK_QUEUE_TIMEOUT
            )
            try:
                task = self.task_queue.get(timeout=queue_timeout)
            except queue.Empty:
                # Check whether an idle execution environment should be released
                if self.browser_instance and self.last_activity_ts > 0:
                    idle_time = time.time() - self.last_activity_ts

                    # Use the adaptive idle timeout
                    optimal_timeout = (
                        self._adaptive_mgr.calculate_optimal_idle_timeout()
                        if self._adaptive_mgr
                        else BROWSER_IDLE_TIMEOUT
                    )
                    if idle_time > optimal_timeout:
                        self.log(f"空闲{int(idle_time)}秒(优化超时:{optimal_timeout}秒),释放执行环境")
                        self._close_browser()
                continue
@@ -245,40 +146,21 @@ class BrowserWorker(threading.Thread):
continue continue
# 执行任务 # 执行任务
task_func = task.get("func") task_func = task.get('func')
task_args = task.get("args", ()) task_args = task.get('args', ())
task_kwargs = task.get("kwargs", {}) task_kwargs = task.get('kwargs', {})
callback = task.get("callback") callback = task.get('callback')
self.total_tasks += 1 self.total_tasks += 1
self.browser_instance['use_count'] += 1
# 确保browser_instance存在后再访问
if self.browser_instance is None:
self.log("执行环境不可用,任务失败")
if callable(callback):
callback(None, "执行环境不可用")
self.failed_tasks += 1
continue
self.browser_instance["use_count"] += 1
self.log(f"开始执行任务(第{self.browser_instance['use_count']}次执行)") self.log(f"开始执行任务(第{self.browser_instance['use_count']}次执行)")
# 记录任务开始时间
task_start_time = time.time()
try: try:
# 将执行环境实例传递给任务函数 # 将执行环境实例传递给任务函数
result = task_func(self.browser_instance, *task_args, **task_kwargs) result = task_func(self.browser_instance, *task_args, **task_kwargs)
callback(result, None) callback(result, None)
self.log(f"任务执行成功") self.log(f"任务执行成功")
# 记录任务完成并更新负载历史
task_end_time = time.time()
task_interval = task_end_time - task_start_time
if self._adaptive_mgr:
self._adaptive_mgr.record_task_interval(task_interval)
self.last_activity_ts = time.time() self.last_activity_ts = time.time()
except Exception as e: except Exception as e:
@@ -294,7 +176,7 @@ class BrowserWorker(threading.Thread):
# 定期重启执行环境,释放可能累积的资源 # 定期重启执行环境,释放可能累积的资源
if self.browser_instance and BROWSER_MAX_USE_COUNT > 0: if self.browser_instance and BROWSER_MAX_USE_COUNT > 0:
if self.browser_instance.get("use_count", 0) >= BROWSER_MAX_USE_COUNT: if self.browser_instance.get('use_count', 0) >= BROWSER_MAX_USE_COUNT:
self.log(f"执行环境已复用{self.browser_instance['use_count']}次,重启释放资源") self.log(f"执行环境已复用{self.browser_instance['use_count']}次,重启释放资源")
self._close_browser() self._close_browser()
@@ -349,7 +231,7 @@ class BrowserWorkerPool:
self.workers.append(worker) self.workers.append(worker)
self.initialized = True self.initialized = True
self.log(f"[OK] 截图线程池初始化完成({self.pool_size}个worker就绪执行环境将在有任务时按需启动") self.log(f" 截图线程池初始化完成({self.pool_size}个worker就绪执行环境将在有任务时按需启动")
# 初始化完成后默认预热1个执行环境降低容器重启后前几批任务的冷启动开销 # 初始化完成后默认预热1个执行环境降低容器重启后前几批任务的冷启动开销
self.warmup(1) self.warmup(1)
@@ -381,7 +263,7 @@ class BrowserWorkerPool:
time.sleep(0.1) time.sleep(0.1)
warmed = sum(1 for w in target_workers if w.browser_instance) warmed = sum(1 for w in target_workers if w.browser_instance)
self.log(f"[OK] 截图线程池预热完成({warmed}个执行环境就绪)") self.log(f" 截图线程池预热完成({warmed}个执行环境就绪)")
return warmed return warmed
def submit_task(self, task_func: Callable, callback: Callable, *args, **kwargs) -> bool: def submit_task(self, task_func: Callable, callback: Callable, *args, **kwargs) -> bool:
@@ -401,11 +283,11 @@ class BrowserWorkerPool:
return False return False
task = { task = {
"func": task_func, 'func': task_func,
"args": args, 'args': args,
"kwargs": kwargs, 'kwargs': kwargs,
"callback": callback, 'callback': callback,
"retry_count": 0, 'retry_count': 0,
} }
try: try:
@@ -446,15 +328,15 @@ class BrowserWorkerPool:
) )
return { return {
"pool_size": self.pool_size, 'pool_size': self.pool_size,
"idle_workers": idle_count, 'idle_workers': idle_count,
"busy_workers": max(0, len(workers) - idle_count), 'busy_workers': max(0, len(workers) - idle_count),
"queue_size": self.task_queue.qsize(), 'queue_size': self.task_queue.qsize(),
"total_tasks": total_tasks, 'total_tasks': total_tasks,
"failed_tasks": failed_tasks, 'failed_tasks': failed_tasks,
"success_rate": f"{(total_tasks - failed_tasks) / total_tasks * 100:.1f}%" if total_tasks > 0 else "N/A", 'success_rate': f"{(total_tasks - failed_tasks) / total_tasks * 100:.1f}%" if total_tasks > 0 else "N/A",
"workers": worker_details, 'workers': worker_details,
"timestamp": time.time(), 'timestamp': time.time(),
} }
def wait_for_completion(self, timeout: Optional[float] = None): def wait_for_completion(self, timeout: Optional[float] = None):
@@ -484,7 +366,7 @@ class BrowserWorkerPool:
self.workers.clear() self.workers.clear()
self.initialized = False self.initialized = False
self.log("[OK] 工作线程池已关闭") self.log(" 工作线程池已关闭")
# 全局实例 # 全局实例
@@ -553,7 +435,7 @@ def shutdown_browser_worker_pool():
_global_pool = None _global_pool = None
if __name__ == "__main__": if __name__ == '__main__':
# 测试代码 # 测试代码
print("测试截图工作线程池...") print("测试截图工作线程池...")
@@ -561,7 +443,7 @@ if __name__ == "__main__":
"""测试任务访问URL""" """测试任务访问URL"""
print(f"[Task-{task_id}] 开始访问: {url}") print(f"[Task-{task_id}] 开始访问: {url}")
time.sleep(2) # 模拟截图耗时 time.sleep(2) # 模拟截图耗时
return {"task_id": task_id, "url": url, "status": "success"} return {'task_id': task_id, 'url': url, 'status': 'success'}
def test_callback(result, error): def test_callback(result, error):
"""测试回调""" """测试回调"""

View File

@@ -4,17 +4,10 @@
加密工具模块 加密工具模块
用于加密存储敏感信息(如第三方账号密码) 用于加密存储敏感信息(如第三方账号密码)
使用Fernet对称加密 使用Fernet对称加密
安全增强版本 - 2026-01-21
- 支持 ENCRYPTION_KEY_RAW 直接使用 Fernet 密钥
- 增加密钥丢失保护机制
- 增加启动时密钥验证
""" """
import os import os
import sys
import base64 import base64
import threading
from pathlib import Path from pathlib import Path
from cryptography.fernet import Fernet from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes from cryptography.hazmat.primitives import hashes
@@ -28,37 +21,18 @@ ENCRYPTION_KEY_FILE = os.environ.get('ENCRYPTION_KEY_FILE', 'data/encryption_key
ENCRYPTION_SALT_FILE = os.environ.get('ENCRYPTION_SALT_FILE', 'data/encryption_salt.bin') ENCRYPTION_SALT_FILE = os.environ.get('ENCRYPTION_SALT_FILE', 'data/encryption_salt.bin')
def _ensure_private_dir(path: Path) -> None:
if not path:
return
os.makedirs(path, mode=0o700, exist_ok=True)
try:
os.chmod(path, 0o700)
except Exception:
pass
def _ensure_private_file(path: Path) -> None:
try:
os.chmod(path, 0o600)
except Exception:
pass
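`_ensure_private_dir` / `_ensure_private_file` above tighten key-material paths to owner-only permissions; the same idea as a single helper (a sketch — `chmod` is deliberately best-effort, since non-POSIX filesystems may reject it):

```python
import os
from pathlib import Path

def write_private_file(path: Path, data: bytes) -> None:
    """Write secret material with a 0o700 parent dir and 0o600 file mode."""
    os.makedirs(path.parent, mode=0o700, exist_ok=True)
    path.write_bytes(data)
    try:
        os.chmod(path, 0o600)   # owner read/write only
    except OSError:
        pass                    # e.g. FAT/NTFS mounts without POSIX modes
```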
def _get_or_create_salt(): def _get_or_create_salt():
"""获取或创建盐值""" """获取或创建盐值"""
salt_path = Path(ENCRYPTION_SALT_FILE) salt_path = Path(ENCRYPTION_SALT_FILE)
if salt_path.exists(): if salt_path.exists():
_ensure_private_file(salt_path)
with open(salt_path, 'rb') as f: with open(salt_path, 'rb') as f:
return f.read() return f.read()
# 生成新的盐值 # 生成新的盐值
salt = os.urandom(16) salt = os.urandom(16)
_ensure_private_dir(salt_path.parent) os.makedirs(salt_path.parent, exist_ok=True)
with open(salt_path, 'wb') as f: with open(salt_path, 'wb') as f:
f.write(salt) f.write(salt)
_ensure_private_file(salt_path)
return salt return salt
@@ -73,103 +47,40 @@ def _derive_key(password: bytes, salt: bytes) -> bytes:
return base64.urlsafe_b64encode(kdf.derive(password)) return base64.urlsafe_b64encode(kdf.derive(password))
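`_derive_key` stretches the `ENCRYPTION_KEY` passphrase through PBKDF2-HMAC-SHA256 into the 32-byte urlsafe-base64 value that `Fernet` accepts; the stdlib equivalent, for illustration (the iteration count here is an assumption — use the module's actual value):

```python
import base64
import hashlib

def derive_fernet_key(password: bytes, salt: bytes, iterations: int = 100_000) -> bytes:
    """PBKDF2-HMAC-SHA256 -> 32 raw bytes -> 44-char urlsafe-base64 Fernet key."""
    raw = hashlib.pbkdf2_hmac("sha256", password, salt, iterations, dklen=32)
    return base64.urlsafe_b64encode(raw)
```

The same password and salt always yield the same key, which is why the salt file must persist alongside the database: losing it is as fatal as losing the key itself.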
def _check_existing_encrypted_data() -> bool:
"""
检查是否存在已加密的数据
用于防止在有加密数据的情况下意外生成新密钥
"""
try:
import sqlite3
db_path = os.environ.get('DB_FILE', 'data/app_data.db')
if not Path(db_path).exists():
return False
conn = sqlite3.connect(db_path)
cursor = conn.cursor()
cursor.execute("SELECT COUNT(*) FROM accounts WHERE password LIKE 'gAAAAA%'")
count = cursor.fetchone()[0]
conn.close()
return count > 0
except Exception as e:
logger.warning(f"检查加密数据时出错: {e}")
return False
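The key-loss guard only needs to know whether any stored password already looks like a Fernet token; a self-contained sketch of that check against an in-memory database (table layout reduced to the relevant column):

```python
import sqlite3

def has_encrypted_passwords(conn: sqlite3.Connection) -> bool:
    """True if any accounts row already holds a Fernet token ('gAAAAA' prefix)."""
    row = conn.execute(
        "SELECT COUNT(*) FROM accounts WHERE password LIKE 'gAAAAA%'"
    ).fetchone()
    return row[0] > 0

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id TEXT, password TEXT)")
conn.execute("INSERT INTO accounts VALUES ('a1', 'plaintext')")
```

If this returns True while the key file is missing, refusing to start is the right call: generating a fresh key would silently orphan every encrypted row.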
def get_encryption_key(): def get_encryption_key():
""" """获取加密密钥(优先环境变量,否则从文件读取或生成)"""
获取加密密钥 # 优先从环境变量读取
优先级:
1. ENCRYPTION_KEY_RAW - 直接使用 Fernet 密钥(推荐用于 Docker 部署)
2. ENCRYPTION_KEY - 通过 PBKDF2 派生密钥
3. 从文件读取
4. 生成新密钥(仅在无现有加密数据时)
"""
# 优先级 1: 直接使用 Fernet 密钥(推荐)
raw_key = os.environ.get('ENCRYPTION_KEY_RAW')
if raw_key:
logger.info("使用环境变量 ENCRYPTION_KEY_RAW 作为加密密钥")
return raw_key.encode() if isinstance(raw_key, str) else raw_key
# 优先级 2: 从环境变量派生密钥
env_key = os.environ.get('ENCRYPTION_KEY') env_key = os.environ.get('ENCRYPTION_KEY')
if env_key: if env_key:
logger.info("使用环境变量 ENCRYPTION_KEY 派生加密密钥") # 使用环境变量中的密钥派生Fernet密钥
salt = _get_or_create_salt() salt = _get_or_create_salt()
return _derive_key(env_key.encode(), salt) return _derive_key(env_key.encode(), salt)
# 优先级 3: 从文件读取 # 从文件读取
key_path = Path(ENCRYPTION_KEY_FILE) key_path = Path(ENCRYPTION_KEY_FILE)
if key_path.exists(): if key_path.exists():
logger.info(f"从文件 {ENCRYPTION_KEY_FILE} 读取加密密钥")
_ensure_private_file(key_path)
with open(key_path, 'rb') as f: with open(key_path, 'rb') as f:
return f.read() return f.read()
# 优先级 4: 生成新密钥(带保护检查)
# 安全检查:如果已有加密数据,禁止生成新密钥
if _check_existing_encrypted_data():
error_msg = (
"\n" + "=" * 60 + "\n"
"[严重错误] 检测到数据库中存在已加密的密码数据,但加密密钥文件丢失!\n"
"\n"
"这将导致所有已加密的密码无法解密!\n"
"\n"
"解决方案:\n"
"1. 恢复 data/encryption_key.bin 文件(如有备份)\n"
"2. 或在 docker-compose.yml 中设置 ENCRYPTION_KEY_RAW 环境变量\n"
"3. 如果密钥确实丢失,需要重新录入所有账号密码\n"
"\n"
+ "=" * 60
)
logger.error(error_msg)
print(error_msg, file=sys.stderr)
raise RuntimeError("加密密钥丢失且存在已加密数据,请恢复密钥后再启动")
# 生成新的密钥 # 生成新的密钥
key = Fernet.generate_key() key = Fernet.generate_key()
_ensure_private_dir(key_path.parent) os.makedirs(key_path.parent, exist_ok=True)
with open(key_path, 'wb') as f: with open(key_path, 'wb') as f:
f.write(key) f.write(key)
_ensure_private_file(key_path)
logger.info(f"已生成新的加密密钥并保存到 {ENCRYPTION_KEY_FILE}") logger.info(f"已生成新的加密密钥并保存到 {ENCRYPTION_KEY_FILE}")
logger.warning("请立即备份此密钥文件,并建议设置 ENCRYPTION_KEY_RAW 环境变量!")
return key return key
# 全局Fernet实例 # 全局Fernet实例
_fernet = None _fernet = None
_fernet_lock = threading.Lock()
def _get_fernet(): def _get_fernet():
"""获取Fernet加密器懒加载""" """获取Fernet加密器懒加载"""
global _fernet global _fernet
if _fernet is None: if _fernet is None:
with _fernet_lock: key = get_encryption_key()
if _fernet is None: _fernet = Fernet(key)
key = get_encryption_key()
_fernet = Fernet(key)
return _fernet return _fernet
@@ -209,10 +120,7 @@ def decrypt_password(encrypted_password: str) -> str:
decrypted = fernet.decrypt(encrypted_password.encode('utf-8')) decrypted = fernet.decrypt(encrypted_password.encode('utf-8'))
return decrypted.decode('utf-8') return decrypted.decode('utf-8')
except Exception as e: except Exception as e:
# 解密失败,可能是旧的明文密码或密钥不匹配 # 解密失败,可能是旧的明文密码
if is_encrypted(encrypted_password):
logger.error(f"密码解密失败(密钥可能不匹配): {e}")
return ''
logger.warning(f"密码解密失败,可能是未加密的旧数据: {e}") logger.warning(f"密码解密失败,可能是未加密的旧数据: {e}")
return encrypted_password return encrypted_password
@@ -230,6 +138,7 @@ def is_encrypted(password: str) -> bool:
""" """
if not password: if not password:
return False return False
# Fernet加密的数据是base64编码以'gAAAAA'开头
return password.startswith('gAAAAA') return password.startswith('gAAAAA')
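The `'gAAAAA'` prefix test is not arbitrary: a Fernet token starts with version byte `0x80` followed by a 64-bit big-endian timestamp, and since current Unix timestamps fit in 4 bytes, the first five bytes are `80 00 00 00 00`, which urlsafe-base64-encode to exactly that literal:

```python
import base64
import struct
import time

# Token layout: 0x80 | 8-byte big-endian timestamp | IV | ciphertext | HMAC,
# all urlsafe-base64 encoded.  The leading 0x80 plus the timestamp's high
# zero bytes produce the constant "gAAAAA" prefix checked by is_encrypted().
header = struct.pack(">BQ", 0x80, int(time.time()))
prefix = base64.urlsafe_b64encode(header)[:6].decode()
```

This holds until timestamps outgrow 32 bits (year 2106), so it is a reliable heuristic for distinguishing tokens from legacy plaintext.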
@@ -248,39 +157,6 @@ def migrate_password(password: str) -> str:
return encrypt_password(password) return encrypt_password(password)
def verify_encryption_key() -> bool:
"""
验证当前密钥是否能解密现有数据
用于启动时检查
Returns:
bool: 密钥是否有效
"""
try:
import sqlite3
db_path = os.environ.get('DB_FILE', 'data/app_data.db')
if not Path(db_path).exists():
return True
conn = sqlite3.connect(db_path)
cursor = conn.cursor()
cursor.execute("SELECT password FROM accounts WHERE password LIKE 'gAAAAA%' LIMIT 1")
row = cursor.fetchone()
conn.close()
if not row:
return True
# 尝试解密
fernet = _get_fernet()
fernet.decrypt(row[0].encode('utf-8'))
logger.info("加密密钥验证成功")
return True
except Exception as e:
logger.error(f"加密密钥验证失败: {e}")
return False
if __name__ == '__main__': if __name__ == '__main__':
# 测试加密解密 # 测试加密解密
test_password = "test_password_123" test_password = "test_password_123"
@@ -293,6 +169,3 @@ if __name__ == '__main__':
print(f"加密解密成功: {test_password == decrypted}") print(f"加密解密成功: {test_password == decrypted}")
print(f"是否已加密: {is_encrypted(encrypted)}") print(f"是否已加密: {is_encrypted(encrypted)}")
print(f"明文是否加密: {is_encrypted(test_password)}") print(f"明文是否加密: {is_encrypted(test_password)}")
# 验证密钥
print(f"\n密钥验证: {verify_encryption_key()}")

View File

@@ -19,16 +19,13 @@ from typing import Optional
import db_pool import db_pool
from app_config import get_config from app_config import get_config
from app_logger import get_logger
from db.schema import ensure_schema from db.schema import ensure_schema
from db.migrations import migrate_database as _migrate_database from db.migrations import migrate_database as _migrate_database
from db.admin import ( from db.admin import (
admin_reset_user_password, admin_reset_user_password,
clean_old_operation_logs, clean_old_operation_logs,
get_admin_by_id,
ensure_default_admin, ensure_default_admin,
get_admin_by_username,
get_hourly_registration_count, get_hourly_registration_count,
get_system_config_raw as _get_system_config_raw, get_system_config_raw as _get_system_config_raw,
get_system_stats, get_system_stats,
@@ -74,15 +71,6 @@ from db.feedbacks import (
get_user_feedbacks, get_user_feedbacks,
reply_feedback, reply_feedback,
) )
from db.passkeys import (
count_passkeys,
create_passkey,
delete_passkey,
get_passkey_by_credential_id,
get_passkey_by_id,
list_passkeys,
update_passkey_usage,
)
from db.schedules import ( from db.schedules import (
clean_old_schedule_logs, clean_old_schedule_logs,
create_schedule_execution_log, create_schedule_execution_log,
@@ -109,7 +97,6 @@ from db.users import (
delete_user, delete_user,
extend_user_vip, extend_user_vip,
get_all_users, get_all_users,
get_users_count,
get_pending_users, get_pending_users,
get_user_by_id, get_user_by_id,
get_user_by_username, get_user_by_username,
@@ -128,13 +115,12 @@ from db.users import (
from db.security import record_login_context from db.security import record_login_context
config = get_config() config = get_config()
logger = get_logger(__name__)
# 数据库文件路径 # 数据库文件路径
DB_FILE = config.DB_FILE DB_FILE = config.DB_FILE
# 数据库版本 (用于迁移管理) # 数据库版本 (用于迁移管理)
DB_VERSION = 21 DB_VERSION = 17
# ==================== 系统配置缓存P1 / O-03 ==================== # ==================== 系统配置缓存P1 / O-03 ====================
@@ -143,9 +129,9 @@ _system_config_cache_lock = threading.Lock()
_system_config_cache_value: Optional[dict] = None _system_config_cache_value: Optional[dict] = None
_system_config_cache_loaded_at = 0.0 _system_config_cache_loaded_at = 0.0
try: try:
_SYSTEM_CONFIG_CACHE_TTL_SECONDS = float(os.environ.get("SYSTEM_CONFIG_CACHE_TTL_SECONDS", "30")) _SYSTEM_CONFIG_CACHE_TTL_SECONDS = float(os.environ.get("SYSTEM_CONFIG_CACHE_TTL_SECONDS", "3"))
except Exception: except Exception:
_SYSTEM_CONFIG_CACHE_TTL_SECONDS = 30.0 _SYSTEM_CONFIG_CACHE_TTL_SECONDS = 3.0
_SYSTEM_CONFIG_CACHE_TTL_SECONDS = max(0.0, _SYSTEM_CONFIG_CACHE_TTL_SECONDS) _SYSTEM_CONFIG_CACHE_TTL_SECONDS = max(0.0, _SYSTEM_CONFIG_CACHE_TTL_SECONDS)
@@ -156,37 +142,6 @@ def invalidate_system_config_cache() -> None:
_system_config_cache_loaded_at = 0.0 _system_config_cache_loaded_at = 0.0
def _normalize_system_config_value(value) -> dict:
try:
return dict(value or {})
except Exception:
return {}
def _is_system_config_cache_valid(now_ts: float) -> bool:
if _system_config_cache_value is None:
return False
if _SYSTEM_CONFIG_CACHE_TTL_SECONDS <= 0:
return True
return (now_ts - _system_config_cache_loaded_at) < _SYSTEM_CONFIG_CACHE_TTL_SECONDS
def _read_system_config_cache(now_ts: float, *, ignore_ttl: bool = False) -> Optional[dict]:
with _system_config_cache_lock:
if _system_config_cache_value is None:
return None
if (not ignore_ttl) and (not _is_system_config_cache_valid(now_ts)):
return None
return dict(_system_config_cache_value)
def _write_system_config_cache(value: dict, now_ts: float) -> None:
global _system_config_cache_value, _system_config_cache_loaded_at
with _system_config_cache_lock:
_system_config_cache_value = dict(value)
_system_config_cache_loaded_at = now_ts
def init_database(): def init_database():
"""初始化数据库表结构 + 迁移(入口统一)。""" """初始化数据库表结构 + 迁移(入口统一)。"""
db_pool.init_pool(DB_FILE, pool_size=config.DB_POOL_SIZE) db_pool.init_pool(DB_FILE, pool_size=config.DB_POOL_SIZE)
@@ -197,12 +152,6 @@ def init_database():
ensure_default_admin() ensure_default_admin()
try:
config_value = get_system_config()
db_pool.configure_slow_query_runtime(threshold_ms=config_value.get("db_slow_query_ms"))
except Exception as e:
logger.warning(f"初始化慢查询阈值失败,使用默认值: {e}")
def migrate_database(): def migrate_database():
"""数据库迁移(对外保留接口)。""" """数据库迁移(对外保留接口)。"""
@@ -216,21 +165,19 @@ def migrate_database():
def get_system_config(): def get_system_config():
"""获取系统配置(带进程内缓存)。""" """获取系统配置(带进程内缓存)。"""
global _system_config_cache_value, _system_config_cache_loaded_at
now_ts = time.time() now_ts = time.time()
with _system_config_cache_lock:
if _system_config_cache_value is not None:
if _SYSTEM_CONFIG_CACHE_TTL_SECONDS <= 0 or (now_ts - _system_config_cache_loaded_at) < _SYSTEM_CONFIG_CACHE_TTL_SECONDS:
return dict(_system_config_cache_value)
cached_value = _read_system_config_cache(now_ts) value = _get_system_config_raw()
if cached_value is not None:
return cached_value
try: with _system_config_cache_lock:
value = _normalize_system_config_value(_get_system_config_raw()) _system_config_cache_value = dict(value)
except Exception: _system_config_cache_loaded_at = now_ts
fallback_value = _read_system_config_cache(now_ts, ignore_ttl=True)
if fallback_value is not None:
return fallback_value
raise
_write_system_config_cache(value, now_ts)
return dict(value) return dict(value)
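The cache refactor above separates lock-protected reads/writes from the (potentially slow) database load; the pattern in isolation (class name is mine, and `ttl_seconds <= 0` means "never expire", mirroring the original):

```python
import threading
import time

class TTLCache:
    """Process-local single-value cache with a soft TTL (sketch of the pattern)."""

    def __init__(self, loader, ttl_seconds: float = 30.0):
        self._loader = loader
        self._ttl = ttl_seconds
        self._lock = threading.Lock()
        self._value = None
        self._loaded_at = 0.0

    def get(self) -> dict:
        now = time.time()
        with self._lock:
            if self._value is not None and (
                self._ttl <= 0 or (now - self._loaded_at) < self._ttl
            ):
                return dict(self._value)   # hand out a copy, never the cached dict
        value = dict(self._loader())       # slow load happens outside the lock
        with self._lock:
            self._value, self._loaded_at = dict(value), now
        return dict(value)
```

Loading outside the lock trades a possible duplicate load under contention for never blocking readers on a slow query, which is the same trade the module makes.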
@@ -260,7 +207,6 @@ def update_system_config(
kdocs_admin_notify_email=None, kdocs_admin_notify_email=None,
kdocs_row_start=None, kdocs_row_start=None,
kdocs_row_end=None, kdocs_row_end=None,
db_slow_query_ms=None,
): ):
"""更新系统配置(写入后立即失效缓存)。""" """更新系统配置(写入后立即失效缓存)。"""
ok = _update_system_config( ok = _update_system_config(
@@ -289,13 +235,7 @@ def update_system_config(
kdocs_admin_notify_email=kdocs_admin_notify_email, kdocs_admin_notify_email=kdocs_admin_notify_email,
kdocs_row_start=kdocs_row_start, kdocs_row_start=kdocs_row_start,
kdocs_row_end=kdocs_row_end, kdocs_row_end=kdocs_row_end,
db_slow_query_ms=db_slow_query_ms,
) )
if ok: if ok:
invalidate_system_config_cache() invalidate_system_config_cache()
try:
latest_config = get_system_config()
db_pool.configure_slow_query_runtime(threshold_ms=latest_config.get("db_slow_query_ms"))
except Exception as e:
logger.warning(f"更新慢查询阈值失败,保留当前配置: {e}")
return ok return ok

View File

@@ -6,51 +6,19 @@ import db_pool
from crypto_utils import decrypt_password, encrypt_password from crypto_utils import decrypt_password, encrypt_password
from db.utils import get_cst_now_str from db.utils import get_cst_now_str
_ACCOUNT_STATUS_QUERY_SQL = """
SELECT status, login_fail_count, last_login_error
FROM accounts
WHERE id = ?
"""
def _decode_account_password(account_dict: dict) -> dict:
account_dict["password"] = decrypt_password(account_dict.get("password", ""))
return account_dict
def _normalize_account_ids(account_ids) -> list[str]:
normalized = []
seen = set()
for account_id in account_ids or []:
if not account_id:
continue
account_key = str(account_id)
if account_key in seen:
continue
seen.add(account_key)
normalized.append(account_key)
return normalized
def create_account(user_id, account_id, username, password, remember=True, remark=""): def create_account(user_id, account_id, username, password, remember=True, remark=""):
"""创建账号(密码加密存储)""" """创建账号(密码加密存储)"""
with db_pool.get_db() as conn: with db_pool.get_db() as conn:
cursor = conn.cursor() cursor = conn.cursor()
cst_time = get_cst_now_str()
encrypted_password = encrypt_password(password) encrypted_password = encrypt_password(password)
cursor.execute( cursor.execute(
""" """
INSERT INTO accounts (id, user_id, username, password, remember, remark, created_at) INSERT INTO accounts (id, user_id, username, password, remember, remark, created_at)
VALUES (?, ?, ?, ?, ?, ?, ?) VALUES (?, ?, ?, ?, ?, ?, ?)
""", """,
( (account_id, user_id, username, encrypted_password, 1 if remember else 0, remark, cst_time),
account_id,
user_id,
username,
encrypted_password,
1 if remember else 0,
remark,
get_cst_now_str(),
),
) )
conn.commit() conn.commit()
return cursor.lastrowid return cursor.lastrowid
@@ -61,7 +29,12 @@ def get_user_accounts(user_id):
with db_pool.get_db() as conn: with db_pool.get_db() as conn:
cursor = conn.cursor() cursor = conn.cursor()
cursor.execute("SELECT * FROM accounts WHERE user_id = ? ORDER BY created_at DESC", (user_id,)) cursor.execute("SELECT * FROM accounts WHERE user_id = ? ORDER BY created_at DESC", (user_id,))
return [_decode_account_password(dict(row)) for row in cursor.fetchall()] accounts = []
for row in cursor.fetchall():
account = dict(row)
account["password"] = decrypt_password(account.get("password", ""))
accounts.append(account)
return accounts
def get_account(account_id): def get_account(account_id):
@@ -70,9 +43,11 @@ def get_account(account_id):
cursor = conn.cursor() cursor = conn.cursor()
cursor.execute("SELECT * FROM accounts WHERE id = ?", (account_id,)) cursor.execute("SELECT * FROM accounts WHERE id = ?", (account_id,))
row = cursor.fetchone() row = cursor.fetchone()
if not row: if row:
return None account = dict(row)
return _decode_account_password(dict(row)) account["password"] = decrypt_password(account.get("password", ""))
return account
return None
def update_account_remark(account_id, remark): def update_account_remark(account_id, remark):
@@ -103,21 +78,33 @@ def increment_account_login_fail(account_id, error_message):
if not row: if not row:
return False return False
fail_count = int(row["login_fail_count"] or 0) + 1 fail_count = (row["login_fail_count"] or 0) + 1
is_suspended = fail_count >= 3
if fail_count >= 3:
cursor.execute(
"""
UPDATE accounts
SET login_fail_count = ?,
last_login_error = ?,
status = 'suspended'
WHERE id = ?
""",
(fail_count, error_message, account_id),
)
conn.commit()
return True
cursor.execute( cursor.execute(
""" """
UPDATE accounts UPDATE accounts
SET login_fail_count = ?, SET login_fail_count = ?,
last_login_error = ?, last_login_error = ?
status = CASE WHEN ? = 1 THEN 'suspended' ELSE status END
WHERE id = ? WHERE id = ?
""", """,
(fail_count, error_message, 1 if is_suspended else 0, account_id), (fail_count, error_message, account_id),
) )
conn.commit() conn.commit()
return is_suspended return False
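The refactor above collapses the two-branch update into one statement with a `CASE` on a computed suspension flag; a runnable sketch of the same logic (threshold of 3 taken from the code above):

```python
import sqlite3

SUSPEND_THRESHOLD = 3

def record_login_failure(conn: sqlite3.Connection, account_id, error_message) -> bool:
    """Increment the fail counter; suspend at the threshold. Returns True if suspended."""
    row = conn.execute(
        "SELECT login_fail_count FROM accounts WHERE id = ?", (account_id,)
    ).fetchone()
    if row is None:
        return False
    fail_count = int(row[0] or 0) + 1
    suspended = fail_count >= SUSPEND_THRESHOLD
    conn.execute(
        """
        UPDATE accounts
        SET login_fail_count = ?,
            last_login_error = ?,
            status = CASE WHEN ? = 1 THEN 'suspended' ELSE status END
        WHERE id = ?
        """,
        (fail_count, error_message, 1 if suspended else 0, account_id),
    )
    conn.commit()
    return suspended
```

A single UPDATE also avoids the duplicated SQL of the if/else version, at the cost of one extra bound parameter.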
def reset_account_login_status(account_id): def reset_account_login_status(account_id):
@@ -142,22 +129,29 @@ def get_account_status(account_id):
"""获取账号状态信息""" """获取账号状态信息"""
with db_pool.get_db() as conn: with db_pool.get_db() as conn:
cursor = conn.cursor() cursor = conn.cursor()
cursor.execute(_ACCOUNT_STATUS_QUERY_SQL, (account_id,)) cursor.execute(
"""
SELECT status, login_fail_count, last_login_error
FROM accounts
WHERE id = ?
""",
(account_id,),
)
return cursor.fetchone() return cursor.fetchone()
def get_account_status_batch(account_ids): def get_account_status_batch(account_ids):
"""批量获取账号状态信息""" """批量获取账号状态信息"""
normalized_ids = _normalize_account_ids(account_ids) account_ids = [str(account_id) for account_id in (account_ids or []) if account_id]
if not normalized_ids: if not account_ids:
return {} return {}
results = {} results = {}
chunk_size = 900 # 避免触发 SQLite 绑定参数上限 chunk_size = 900 # 避免触发 SQLite 绑定参数上限
with db_pool.get_db() as conn: with db_pool.get_db() as conn:
cursor = conn.cursor() cursor = conn.cursor()
for idx in range(0, len(normalized_ids), chunk_size): for idx in range(0, len(account_ids), chunk_size):
chunk = normalized_ids[idx : idx + chunk_size] chunk = account_ids[idx : idx + chunk_size]
placeholders = ",".join("?" for _ in chunk) placeholders = ",".join("?" for _ in chunk)
cursor.execute( cursor.execute(
f""" f"""

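The 900-row chunks exist because SQLite rejects statements with more than 999 bound parameters by default (`SQLITE_MAX_VARIABLE_NUMBER`); the batch IN-query pattern in isolation, with a hypothetical two-column schema:

```python
import sqlite3

def fetch_status_batch(conn: sqlite3.Connection, account_ids, chunk_size: int = 900) -> dict:
    """IN-list query split into chunks to stay under SQLite's parameter limit."""
    ids = [str(i) for i in (account_ids or []) if i]
    results = {}
    for idx in range(0, len(ids), chunk_size):
        chunk = ids[idx : idx + chunk_size]
        placeholders = ",".join("?" for _ in chunk)
        rows = conn.execute(
            f"SELECT id, status FROM accounts WHERE id IN ({placeholders})",
            chunk,
        ).fetchall()
        results.update(dict(rows))  # rows are (id, status) pairs
    return results
```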
View File

@@ -2,9 +2,10 @@
# -*- coding: utf-8 -*- # -*- coding: utf-8 -*-
from __future__ import annotations from __future__ import annotations
import os
import sqlite3 import sqlite3
from pathlib import Path from datetime import datetime, timedelta
import pytz
import db_pool import db_pool
from db.utils import get_cst_now_str from db.utils import get_cst_now_str
@@ -15,123 +16,6 @@ from password_utils import (
verify_password_sha256, verify_password_sha256,
) )
_DEFAULT_SYSTEM_CONFIG = {
"max_concurrent_global": 2,
"max_concurrent_per_account": 1,
"max_screenshot_concurrent": 3,
"db_slow_query_ms": 120,
"schedule_enabled": 0,
"schedule_time": "02:00",
"schedule_browse_type": "应读",
"schedule_weekdays": "1,2,3,4,5,6,7",
"proxy_enabled": 0,
"proxy_api_url": "",
"proxy_expire_minutes": 3,
"enable_screenshot": 1,
"auto_approve_enabled": 0,
"auto_approve_hourly_limit": 10,
"auto_approve_vip_days": 7,
"kdocs_enabled": 0,
"kdocs_doc_url": "",
"kdocs_default_unit": "",
"kdocs_sheet_name": "",
"kdocs_sheet_index": 0,
"kdocs_unit_column": "A",
"kdocs_image_column": "D",
"kdocs_admin_notify_enabled": 0,
"kdocs_admin_notify_email": "",
"kdocs_row_start": 0,
"kdocs_row_end": 0,
}
_SYSTEM_CONFIG_UPDATERS = (
("max_concurrent_global", "max_concurrent"),
("schedule_enabled", "schedule_enabled"),
("schedule_time", "schedule_time"),
("schedule_browse_type", "schedule_browse_type"),
("schedule_weekdays", "schedule_weekdays"),
("max_concurrent_per_account", "max_concurrent_per_account"),
("max_screenshot_concurrent", "max_screenshot_concurrent"),
("db_slow_query_ms", "db_slow_query_ms"),
("enable_screenshot", "enable_screenshot"),
("proxy_enabled", "proxy_enabled"),
("proxy_api_url", "proxy_api_url"),
("proxy_expire_minutes", "proxy_expire_minutes"),
("auto_approve_enabled", "auto_approve_enabled"),
("auto_approve_hourly_limit", "auto_approve_hourly_limit"),
("auto_approve_vip_days", "auto_approve_vip_days"),
("kdocs_enabled", "kdocs_enabled"),
("kdocs_doc_url", "kdocs_doc_url"),
("kdocs_default_unit", "kdocs_default_unit"),
("kdocs_sheet_name", "kdocs_sheet_name"),
("kdocs_sheet_index", "kdocs_sheet_index"),
("kdocs_unit_column", "kdocs_unit_column"),
("kdocs_image_column", "kdocs_image_column"),
("kdocs_admin_notify_enabled", "kdocs_admin_notify_enabled"),
("kdocs_admin_notify_email", "kdocs_admin_notify_email"),
("kdocs_row_start", "kdocs_row_start"),
("kdocs_row_end", "kdocs_row_end"),
)
def _count_scalar(cursor, sql: str, params=()) -> int:
cursor.execute(sql, params)
row = cursor.fetchone()
if not row:
return 0
try:
if "count" in row.keys():
return int(row["count"] or 0)
except Exception:
pass
try:
return int(row[0] or 0)
except Exception:
return 0
def _table_exists(cursor, table_name: str) -> bool:
cursor.execute(
"""
SELECT name FROM sqlite_master
WHERE type='table' AND name=?
""",
(table_name,),
)
return bool(cursor.fetchone())
def _normalize_days(days, default: int = 30) -> int:
try:
value = int(days)
except Exception:
value = default
if value < 0:
return 0
return value
def _store_default_admin_credentials(username: str, password: str) -> str | None:
"""将首次管理员账号密码写入受限权限文件,避免打印到日志。"""
raw_path = str(
os.environ.get("DEFAULT_ADMIN_CREDENTIALS_FILE", "data/default_admin_credentials.txt") or ""
).strip()
if not raw_path:
return None
cred_path = Path(raw_path)
try:
cred_path.parent.mkdir(parents=True, exist_ok=True)
with open(cred_path, "w", encoding="utf-8") as f:
f.write("安全提醒:首次管理员账号已创建\n")
f.write(f"用户名: {username}\n")
f.write(f"密码: {password}\n")
f.write("请登录后立即修改密码,并删除该文件。\n")
os.chmod(cred_path, 0o600)
return str(cred_path)
except Exception:
return None
def ensure_default_admin() -> bool: def ensure_default_admin() -> bool:
"""确保存在默认管理员账号(行为保持不变)。""" """确保存在默认管理员账号(行为保持不变)。"""
@@ -140,12 +24,12 @@ def ensure_default_admin() -> bool:
with db_pool.get_db() as conn: with db_pool.get_db() as conn:
cursor = conn.cursor() cursor = conn.cursor()
count = _count_scalar(cursor, "SELECT COUNT(*) as count FROM admins") cursor.execute("SELECT COUNT(*) as count FROM admins")
result = cursor.fetchone()
if count == 0: if result["count"] == 0:
alphabet = string.ascii_letters + string.digits alphabet = string.ascii_letters + string.digits
bootstrap_password = str(os.environ.get("DEFAULT_ADMIN_PASSWORD", "") or "").strip() random_password = "".join(secrets.choice(alphabet) for _ in range(12))
random_password = bootstrap_password or "".join(secrets.choice(alphabet) for _ in range(12))
default_password_hash = hash_password_bcrypt(random_password) default_password_hash = hash_password_bcrypt(random_password)
cursor.execute( cursor.execute(
@@ -153,16 +37,11 @@ def ensure_default_admin() -> bool:
("admin", default_password_hash, get_cst_now_str()), ("admin", default_password_hash, get_cst_now_str()),
) )
conn.commit() conn.commit()
credential_file = _store_default_admin_credentials("admin", random_password)
print("=" * 60) print("=" * 60)
print("安全提醒:已创建默认管理员账号") print("安全提醒:已创建默认管理员账号")
print("用户名: admin") print("用户名: admin")
if credential_file: print(f"密码: {random_password}")
print(f"初始密码已写入: {credential_file}权限600") print("请立即登录后修改密码!")
print("请立即登录后修改密码,并删除该文件。")
else:
print("未能写入初始密码文件。")
print("建议设置 DEFAULT_ADMIN_PASSWORD 后重建管理员账号。")
print("=" * 60) print("=" * 60)
return True return True
return False return False
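`ensure_default_admin` draws the bootstrap password from the `secrets` module rather than `random`, i.e. from a CSPRNG; the generation step on its own:

```python
import secrets
import string

def generate_password(length: int = 12) -> str:
    """Cryptographically secure alphanumeric password, as in ensure_default_admin."""
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))
```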
@@ -195,24 +74,6 @@ def verify_admin(username: str, password: str):
return None return None
def get_admin_by_username(username: str):
"""根据用户名获取管理员记录"""
with db_pool.get_db() as conn:
cursor = conn.cursor()
cursor.execute("SELECT * FROM admins WHERE username = ?", (username,))
row = cursor.fetchone()
return dict(row) if row else None
def get_admin_by_id(admin_id: int):
"""根据ID获取管理员记录"""
with db_pool.get_db() as conn:
cursor = conn.cursor()
cursor.execute("SELECT * FROM admins WHERE id = ?", (int(admin_id),))
row = cursor.fetchone()
return dict(row) if row else None
def update_admin_password(username: str, new_password: str) -> bool: def update_admin_password(username: str, new_password: str) -> bool:
"""更新管理员密码""" """更新管理员密码"""
with db_pool.get_db() as conn: with db_pool.get_db() as conn:
@@ -240,40 +101,49 @@ def get_system_stats() -> dict:
    with db_pool.get_db() as conn:
        cursor = conn.cursor()
-       cursor.execute(
-           """
-           SELECT
-               COUNT(*) AS total_users,
-               SUM(CASE WHEN status = 'approved' THEN 1 ELSE 0 END) AS approved_users,
-               SUM(CASE WHEN date(created_at) = date('now', 'localtime') THEN 1 ELSE 0 END) AS new_users_today,
-               SUM(CASE WHEN datetime(created_at) >= datetime('now', 'localtime', '-7 days') THEN 1 ELSE 0 END) AS new_users_7d,
-               SUM(
-                   CASE
-                       WHEN vip_expire_time IS NOT NULL
-                            AND datetime(vip_expire_time) > datetime('now', 'localtime')
-                       THEN 1 ELSE 0
-                   END
-               ) AS vip_users
-           FROM users
-           """
-       )
-       user_stats = cursor.fetchone() or {}
-
-       def _to_int(key: str) -> int:
-           try:
-               return int(user_stats[key] or 0)
-           except Exception:
-               return 0
-
-       total_accounts = _count_scalar(cursor, "SELECT COUNT(*) as count FROM accounts")
+       cursor.execute("SELECT COUNT(*) as count FROM users")
+       total_users = cursor.fetchone()["count"]
+       cursor.execute("SELECT COUNT(*) as count FROM users WHERE status = 'approved'")
+       approved_users = cursor.fetchone()["count"]
+       cursor.execute(
+           """
+           SELECT COUNT(*) as count
+           FROM users
+           WHERE date(created_at) = date('now', 'localtime')
+           """
+       )
+       new_users_today = cursor.fetchone()["count"]
+       cursor.execute(
+           """
+           SELECT COUNT(*) as count
+           FROM users
+           WHERE datetime(created_at) >= datetime('now', 'localtime', '-7 days')
+           """
+       )
+       new_users_7d = cursor.fetchone()["count"]
+       cursor.execute("SELECT COUNT(*) as count FROM accounts")
+       total_accounts = cursor.fetchone()["count"]
+       cursor.execute(
+           """
+           SELECT COUNT(*) as count FROM users
+           WHERE vip_expire_time IS NOT NULL
+             AND datetime(vip_expire_time) > datetime('now', 'localtime')
+           """
+       )
+       vip_users = cursor.fetchone()["count"]

        return {
-           "total_users": _to_int("total_users"),
-           "approved_users": _to_int("approved_users"),
-           "new_users_today": _to_int("new_users_today"),
-           "new_users_7d": _to_int("new_users_7d"),
+           "total_users": total_users,
+           "approved_users": approved_users,
+           "new_users_today": new_users_today,
+           "new_users_7d": new_users_7d,
            "total_accounts": total_accounts,
-           "vip_users": _to_int("vip_users"),
+           "vip_users": vip_users,
        }
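The hunk above trades a single-pass `SUM(CASE ...)` aggregate for several separate `COUNT(*)` queries. The one-query style can be sketched against an in-memory SQLite database; the table and column names mirror the diff, but the schema here is an assumption for illustration only:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.row_factory = sqlite3.Row
conn.execute(
    "CREATE TABLE users (id INTEGER PRIMARY KEY, status TEXT, created_at TEXT, vip_expire_time TEXT)"
)
conn.executemany(
    "INSERT INTO users (status, created_at, vip_expire_time) VALUES (?, ?, ?)",
    [
        ("approved", "2026-01-08 10:00:00", "2099-01-01 00:00:00"),  # approved, VIP still valid
        ("pending", "2026-01-08 11:00:00", None),                    # pending, no VIP
        ("approved", "2020-01-01 00:00:00", "2020-02-01 00:00:00"),  # approved, VIP expired
    ],
)

# One pass over the table computes every counter via conditional sums.
row = conn.execute(
    """
    SELECT
        COUNT(*) AS total_users,
        SUM(CASE WHEN status = 'approved' THEN 1 ELSE 0 END) AS approved_users,
        SUM(CASE WHEN vip_expire_time IS NOT NULL
                  AND datetime(vip_expire_time) > datetime('now', 'localtime')
             THEN 1 ELSE 0 END) AS vip_users
    FROM users
    """
).fetchone()

# SUM over an empty table yields NULL, hence the "or 0" guard.
stats = {key: int(row[key] or 0) for key in row.keys()}
```

The trade-off: one table scan instead of six, at the cost of a bulkier SQL statement.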
@@ -283,9 +153,37 @@ def get_system_config_raw() -> dict:
        cursor = conn.cursor()
        cursor.execute("SELECT * FROM system_config WHERE id = 1")
        row = cursor.fetchone()

        if row:
            return dict(row)
-       return dict(_DEFAULT_SYSTEM_CONFIG)
+
+       return {
+           "max_concurrent_global": 2,
+           "max_concurrent_per_account": 1,
+           "max_screenshot_concurrent": 3,
+           "schedule_enabled": 0,
+           "schedule_time": "02:00",
+           "schedule_browse_type": "应读",
+           "schedule_weekdays": "1,2,3,4,5,6,7",
+           "proxy_enabled": 0,
+           "proxy_api_url": "",
+           "proxy_expire_minutes": 3,
+           "enable_screenshot": 1,
+           "auto_approve_enabled": 0,
+           "auto_approve_hourly_limit": 10,
+           "auto_approve_vip_days": 7,
+           "kdocs_enabled": 0,
+           "kdocs_doc_url": "",
+           "kdocs_default_unit": "",
+           "kdocs_sheet_name": "",
+           "kdocs_sheet_index": 0,
+           "kdocs_unit_column": "A",
+           "kdocs_image_column": "D",
+           "kdocs_admin_notify_enabled": 0,
+           "kdocs_admin_notify_email": "",
+           "kdocs_row_start": 0,
+           "kdocs_row_end": 0,
+       }
def update_system_config(
@@ -315,55 +213,129 @@ def update_system_config(
    kdocs_admin_notify_email=None,
    kdocs_row_start=None,
    kdocs_row_end=None,
-   db_slow_query_ms=None,
) -> bool:
    """Update system config (writes the DB only; no cache handling)"""
-   arg_values = {
-       "max_concurrent": max_concurrent,
-       "schedule_enabled": schedule_enabled,
-       "schedule_time": schedule_time,
-       "schedule_browse_type": schedule_browse_type,
-       "schedule_weekdays": schedule_weekdays,
-       "max_concurrent_per_account": max_concurrent_per_account,
-       "max_screenshot_concurrent": max_screenshot_concurrent,
-       "enable_screenshot": enable_screenshot,
-       "proxy_enabled": proxy_enabled,
-       "proxy_api_url": proxy_api_url,
-       "proxy_expire_minutes": proxy_expire_minutes,
-       "auto_approve_enabled": auto_approve_enabled,
-       "auto_approve_hourly_limit": auto_approve_hourly_limit,
-       "auto_approve_vip_days": auto_approve_vip_days,
-       "kdocs_enabled": kdocs_enabled,
-       "kdocs_doc_url": kdocs_doc_url,
-       "kdocs_default_unit": kdocs_default_unit,
-       "kdocs_sheet_name": kdocs_sheet_name,
-       "kdocs_sheet_index": kdocs_sheet_index,
-       "kdocs_unit_column": kdocs_unit_column,
-       "kdocs_image_column": kdocs_image_column,
-       "kdocs_admin_notify_enabled": kdocs_admin_notify_enabled,
-       "kdocs_admin_notify_email": kdocs_admin_notify_email,
-       "kdocs_row_start": kdocs_row_start,
-       "kdocs_row_end": kdocs_row_end,
-       "db_slow_query_ms": db_slow_query_ms,
-   }
-   updates = []
-   params = []
-   for db_field, arg_name in _SYSTEM_CONFIG_UPDATERS:
-       value = arg_values.get(arg_name)
-       if value is None:
-           continue
-       updates.append(f"{db_field} = ?")
-       params.append(value)
-   if not updates:
-       return False
-   updates.append("updated_at = ?")
-   params.append(get_cst_now_str())
+   allowed_fields = {
+       "max_concurrent_global",
+       "schedule_enabled",
+       "schedule_time",
+       "schedule_browse_type",
+       "schedule_weekdays",
+       "max_concurrent_per_account",
+       "max_screenshot_concurrent",
+       "enable_screenshot",
+       "proxy_enabled",
+       "proxy_api_url",
+       "proxy_expire_minutes",
+       "auto_approve_enabled",
+       "auto_approve_hourly_limit",
+       "auto_approve_vip_days",
+       "kdocs_enabled",
+       "kdocs_doc_url",
+       "kdocs_default_unit",
+       "kdocs_sheet_name",
+       "kdocs_sheet_index",
+       "kdocs_unit_column",
+       "kdocs_image_column",
+       "kdocs_admin_notify_enabled",
+       "kdocs_admin_notify_email",
+       "kdocs_row_start",
+       "kdocs_row_end",
+       "updated_at",
+   }

    with db_pool.get_db() as conn:
        cursor = conn.cursor()
+       updates = []
+       params = []
+       if max_concurrent is not None:
+           updates.append("max_concurrent_global = ?")
+           params.append(max_concurrent)
+       if schedule_enabled is not None:
+           updates.append("schedule_enabled = ?")
+           params.append(schedule_enabled)
+       if schedule_time is not None:
+           updates.append("schedule_time = ?")
+           params.append(schedule_time)
+       if schedule_browse_type is not None:
+           updates.append("schedule_browse_type = ?")
+           params.append(schedule_browse_type)
+       if max_concurrent_per_account is not None:
+           updates.append("max_concurrent_per_account = ?")
+           params.append(max_concurrent_per_account)
+       if max_screenshot_concurrent is not None:
+           updates.append("max_screenshot_concurrent = ?")
+           params.append(max_screenshot_concurrent)
+       if enable_screenshot is not None:
+           updates.append("enable_screenshot = ?")
+           params.append(enable_screenshot)
+       if schedule_weekdays is not None:
+           updates.append("schedule_weekdays = ?")
+           params.append(schedule_weekdays)
+       if proxy_enabled is not None:
+           updates.append("proxy_enabled = ?")
+           params.append(proxy_enabled)
+       if proxy_api_url is not None:
+           updates.append("proxy_api_url = ?")
+           params.append(proxy_api_url)
+       if proxy_expire_minutes is not None:
+           updates.append("proxy_expire_minutes = ?")
+           params.append(proxy_expire_minutes)
+       if auto_approve_enabled is not None:
+           updates.append("auto_approve_enabled = ?")
+           params.append(auto_approve_enabled)
+       if auto_approve_hourly_limit is not None:
+           updates.append("auto_approve_hourly_limit = ?")
+           params.append(auto_approve_hourly_limit)
+       if auto_approve_vip_days is not None:
+           updates.append("auto_approve_vip_days = ?")
+           params.append(auto_approve_vip_days)
+       if kdocs_enabled is not None:
+           updates.append("kdocs_enabled = ?")
+           params.append(kdocs_enabled)
+       if kdocs_doc_url is not None:
+           updates.append("kdocs_doc_url = ?")
+           params.append(kdocs_doc_url)
+       if kdocs_default_unit is not None:
+           updates.append("kdocs_default_unit = ?")
+           params.append(kdocs_default_unit)
+       if kdocs_sheet_name is not None:
+           updates.append("kdocs_sheet_name = ?")
+           params.append(kdocs_sheet_name)
+       if kdocs_sheet_index is not None:
+           updates.append("kdocs_sheet_index = ?")
+           params.append(kdocs_sheet_index)
+       if kdocs_unit_column is not None:
+           updates.append("kdocs_unit_column = ?")
+           params.append(kdocs_unit_column)
+       if kdocs_image_column is not None:
+           updates.append("kdocs_image_column = ?")
+           params.append(kdocs_image_column)
+       if kdocs_admin_notify_enabled is not None:
+           updates.append("kdocs_admin_notify_enabled = ?")
+           params.append(kdocs_admin_notify_enabled)
+       if kdocs_admin_notify_email is not None:
+           updates.append("kdocs_admin_notify_email = ?")
+           params.append(kdocs_admin_notify_email)
+       if kdocs_row_start is not None:
+           updates.append("kdocs_row_start = ?")
+           params.append(kdocs_row_start)
+       if kdocs_row_end is not None:
+           updates.append("kdocs_row_end = ?")
+           params.append(kdocs_row_end)
+       if not updates:
+           return False
+       updates.append("updated_at = ?")
+       params.append(get_cst_now_str())
+
+       for update_clause in updates:
+           field_name = update_clause.split("=")[0].strip()
+           if field_name not in allowed_fields:
+               raise ValueError(f"Illegal field name: {field_name}")

        sql = f"UPDATE system_config SET {', '.join(updates)} WHERE id = 1"
        cursor.execute(sql, params)
        conn.commit()
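Both sides of this hunk build the `SET` clause dynamically from optional arguments, and the new side additionally validates each column name against a whitelist before it is interpolated into SQL. The core pattern can be sketched in isolation; `build_update`, the table layout, and the field set below are illustrative assumptions, not the project's actual API:

```python
import sqlite3

# Column names that may ever appear in a dynamically built SET clause.
ALLOWED_FIELDS = {"max_concurrent_global", "schedule_enabled", "schedule_time"}


def build_update(changes: dict) -> tuple[str, list]:
    """Build a parameterized UPDATE from a dict of optional fields.

    Column names are checked against a whitelist before being
    interpolated into SQL; values always travel as ? placeholders.
    """
    updates, params = [], []
    for field, value in changes.items():
        if field not in ALLOWED_FIELDS:
            raise ValueError(f"illegal field name: {field}")
        if value is None:
            continue  # None means "leave this column unchanged"
        updates.append(f"{field} = ?")
        params.append(value)
    if not updates:
        return "", []
    return f"UPDATE system_config SET {', '.join(updates)} WHERE id = 1", params


conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE system_config (id INTEGER PRIMARY KEY, "
    "max_concurrent_global INTEGER, schedule_enabled INTEGER, schedule_time TEXT)"
)
conn.execute("INSERT INTO system_config (id, max_concurrent_global) VALUES (1, 2)")

sql, params = build_update({"max_concurrent_global": 5, "schedule_time": None})
conn.execute(sql, params)
new_value = conn.execute(
    "SELECT max_concurrent_global FROM system_config WHERE id = 1"
).fetchone()[0]
```

The whitelist matters because column names cannot be bound as `?` parameters; it is the only thing standing between caller input and SQL injection here.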
@@ -374,13 +346,13 @@ def get_hourly_registration_count() -> int:
"""获取最近一小时内的注册用户数""" """获取最近一小时内的注册用户数"""
with db_pool.get_db() as conn: with db_pool.get_db() as conn:
cursor = conn.cursor() cursor = conn.cursor()
return _count_scalar( cursor.execute(
cursor,
""" """
SELECT COUNT(*) as count FROM users SELECT COUNT(*) FROM users
WHERE created_at >= datetime('now', 'localtime', '-1 hour') WHERE created_at >= datetime('now', 'localtime', '-1 hour')
""", """
) )
return cursor.fetchone()[0]
# ==================== Password reset (admin) ====================
@@ -402,12 +374,17 @@ def admin_reset_user_password(user_id: int, new_password: str) -> bool:
def clean_old_operation_logs(days: int = 30) -> int:
    """Clean operation logs older than the given number of days (if the operation_logs table exists)"""
-   safe_days = _normalize_days(days, default=30)
    with db_pool.get_db() as conn:
        cursor = conn.cursor()
-       if not _table_exists(cursor, "operation_logs"):
+       cursor.execute(
+           """
+           SELECT name FROM sqlite_master
+           WHERE type='table' AND name='operation_logs'
+           """
+       )
+       if not cursor.fetchone():
            return 0

        try:
@@ -416,11 +393,11 @@ def clean_old_operation_logs(days: int = 30) -> int:
                DELETE FROM operation_logs
                WHERE created_at < datetime('now', 'localtime', '-' || ? || ' days')
                """,
-               (safe_days,),
+               (days,),
            )
            deleted_count = cursor.rowcount
            conn.commit()
-           print(f"Cleaned {deleted_count} old operation logs (>{safe_days} days)")
+           print(f"Cleaned {deleted_count} old operation logs (>{days} days)")
            return deleted_count
        except Exception as e:
            print(f"Failed to clean old operation logs: {e}")


@@ -6,38 +6,12 @@ import db_pool
 from db.utils import get_cst_now_str

-def _normalize_limit(value, default: int, *, minimum: int = 1, maximum: int = 500) -> int:
-   try:
-       parsed = int(value)
-   except Exception:
-       parsed = default
-   parsed = max(minimum, parsed)
-   parsed = min(maximum, parsed)
-   return parsed
-
-
-def _normalize_offset(value, default: int = 0) -> int:
-   try:
-       parsed = int(value)
-   except Exception:
-       parsed = default
-   return max(0, parsed)
-
-
-def _normalize_announcement_payload(title, content, image_url):
-   normalized_title = str(title or "").strip()
-   normalized_content = str(content or "").strip()
-   normalized_image = str(image_url or "").strip() or None
-   return normalized_title, normalized_content, normalized_image
-
-
-def _deactivate_all_active_announcements(cursor, cst_time: str) -> None:
-   cursor.execute("UPDATE announcements SET is_active = 0, updated_at = ? WHERE is_active = 1", (cst_time,))
-
-
 def create_announcement(title, content, image_url=None, is_active=True):
    """Create an announcement (active by default; activating one deactivates all others)"""
-   title, content, image_url = _normalize_announcement_payload(title, content, image_url)
+   title = (title or "").strip()
+   content = (content or "").strip()
+   image_url = (image_url or "").strip()
+   image_url = image_url or None
    if not title or not content:
        return None
@@ -46,7 +20,7 @@ def create_announcement(title, content, image_url=None, is_active=True):
        cst_time = get_cst_now_str()

        if is_active:
-           _deactivate_all_active_announcements(cursor, cst_time)
+           cursor.execute("UPDATE announcements SET is_active = 0, updated_at = ? WHERE is_active = 1", (cst_time,))

        cursor.execute(
            """
@@ -70,9 +44,6 @@ def get_announcement_by_id(announcement_id):
 def get_announcements(limit=50, offset=0):
    """Get the announcement list (admin use)"""
-   safe_limit = _normalize_limit(limit, 50)
-   safe_offset = _normalize_offset(offset, 0)
    with db_pool.get_db() as conn:
        cursor = conn.cursor()
        cursor.execute(
@@ -81,7 +52,7 @@ def get_announcements(limit=50, offset=0):
            ORDER BY created_at DESC, id DESC
            LIMIT ? OFFSET ?
            """,
-           (safe_limit, safe_offset),
+           (limit, offset),
        )
        return [dict(row) for row in cursor.fetchall()]
@@ -93,7 +64,7 @@ def set_announcement_active(announcement_id, is_active):
        cst_time = get_cst_now_str()

        if is_active:
-           _deactivate_all_active_announcements(cursor, cst_time)
+           cursor.execute("UPDATE announcements SET is_active = 0, updated_at = ? WHERE is_active = 1", (cst_time,))

        cursor.execute(
            """
            UPDATE announcements
@@ -150,12 +121,13 @@ def dismiss_announcement_for_user(user_id, announcement_id):
"""用户永久关闭某条公告(幂等)""" """用户永久关闭某条公告(幂等)"""
with db_pool.get_db() as conn: with db_pool.get_db() as conn:
cursor = conn.cursor() cursor = conn.cursor()
cst_time = get_cst_now_str()
cursor.execute( cursor.execute(
""" """
INSERT OR IGNORE INTO announcement_dismissals (user_id, announcement_id, dismissed_at) INSERT OR IGNORE INTO announcement_dismissals (user_id, announcement_id, dismissed_at)
VALUES (?, ?, ?) VALUES (?, ?, ?)
""", """,
(user_id, announcement_id, get_cst_now_str()), (user_id, announcement_id, cst_time),
) )
conn.commit() conn.commit()
return cursor.rowcount >= 0 return cursor.rowcount >= 0
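The announcement hunks above enforce a "one active announcement at most" invariant by deactivating every row before activating the chosen one, inside a single transaction. A minimal sketch with an assumed two-column table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE announcements (id INTEGER PRIMARY KEY, title TEXT, is_active INTEGER DEFAULT 0)"
)


def activate(conn: sqlite3.Connection, announcement_id: int) -> None:
    # "with conn" wraps both statements in one transaction, so readers
    # never observe two active rows or zero active rows mid-switch.
    with conn:
        conn.execute("UPDATE announcements SET is_active = 0 WHERE is_active = 1")
        conn.execute(
            "UPDATE announcements SET is_active = 1 WHERE id = ?", (announcement_id,)
        )


conn.execute("INSERT INTO announcements (title, is_active) VALUES ('first', 1)")
conn.execute("INSERT INTO announcements (title, is_active) VALUES ('second', 0)")
activate(conn, 2)

active = [r[0] for r in conn.execute("SELECT id FROM announcements WHERE is_active = 1")]
```

Compared with a `UNIQUE` partial index on `is_active = 1`, this application-level approach silently replaces the previous active row instead of rejecting the write, which matches the behavior described in the docstring.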


@@ -5,27 +5,6 @@ from __future__ import annotations
 import db_pool

-
-def _to_bool_with_default(value, default: bool = True) -> bool:
-   if value is None:
-       return default
-   try:
-       return bool(int(value))
-   except Exception:
-       try:
-           return bool(value)
-       except Exception:
-           return default
-
-
-def _normalize_notify_enabled(enabled) -> int:
-   if isinstance(enabled, bool):
-       return 1 if enabled else 0
-   try:
-       return 1 if int(enabled) else 0
-   except Exception:
-       return 1
-
-
 def get_user_by_email(email):
    """Get a user by email address"""
    with db_pool.get_db() as conn:
@@ -46,7 +25,7 @@ def update_user_email(user_id, email, verified=False):
            SET email = ?, email_verified = ?
            WHERE id = ?
            """,
-           (email, 1 if verified else 0, user_id),
+           (email, int(verified), user_id),
        )
        conn.commit()
        return cursor.rowcount > 0
@@ -63,7 +42,7 @@ def update_user_email_notify(user_id, enabled):
            SET email_notify_enabled = ?
            WHERE id = ?
            """,
-           (_normalize_notify_enabled(enabled), user_id),
+           (int(enabled), user_id),
        )
        conn.commit()
        return cursor.rowcount > 0
@@ -78,6 +57,6 @@ def get_user_email_notify(user_id):
        row = cursor.fetchone()
        if row is None:
            return True
-       return _to_bool_with_default(row[0], default=True)
+       return bool(row[0]) if row[0] is not None else True
    except Exception:
        return True
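The notify-flag hunks revolve around mapping a SQLite `0/1/NULL` column onto a Python bool where `NULL` means "user never chose, default to on". The removed helper's behavior can be sketched on its own; the function name and the treatment of string values are assumptions drawn from the deleted code:

```python
def to_bool_with_default(value, default: bool = True) -> bool:
    """Map a SQLite 0/1/NULL (or stray string) column value to a bool.

    NULL means the user never made a choice, so it falls back to the
    default instead of silently becoming False.
    """
    if value is None:
        return default
    try:
        # int() handles both real integers and strings like "0"/"1".
        return bool(int(value))
    except (TypeError, ValueError):
        return bool(value)


checks = [
    to_bool_with_default(None),  # NULL -> default (True)
    to_bool_with_default(0),     # stored 0 -> False
    to_bool_with_default(1),     # stored 1 -> True
    to_bool_with_default("0"),   # string "0" -> False, not truthy-string True
]
```

The `"0"` case is the subtle one: a plain `bool("0")` would return `True`, which is why the helper converts through `int` first.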


@@ -2,73 +2,32 @@
 # -*- coding: utf-8 -*-
 from __future__ import annotations

+from datetime import datetime
+
+import pytz
+
 import db_pool
-from db.utils import escape_html, get_cst_now_str
+from db.utils import escape_html

-
-def _normalize_limit(value, default: int, *, minimum: int = 1, maximum: int = 500) -> int:
-   try:
-       parsed = int(value)
-   except Exception:
-       parsed = default
-   parsed = max(minimum, parsed)
-   parsed = min(maximum, parsed)
-   return parsed
-
-
-def _normalize_offset(value, default: int = 0) -> int:
-   try:
-       parsed = int(value)
-   except Exception:
-       parsed = default
-   return max(0, parsed)
-
-
-def _safe_text(value) -> str:
-   if value is None:
-       return ""
-   text = str(value)
-   return escape_html(text) if text else ""
-
-
-def _build_feedback_filter_sql(status_filter=None) -> tuple[str, list]:
-   where_clauses = ["1=1"]
-   params = []
-   if status_filter:
-       where_clauses.append("status = ?")
-       params.append(status_filter)
-   return " AND ".join(where_clauses), params
-
-
-def _normalize_feedback_stats_row(row) -> dict:
-   row_dict = dict(row) if row else {}
-   return {
-       "total": int(row_dict.get("total") or 0),
-       "pending": int(row_dict.get("pending") or 0),
-       "replied": int(row_dict.get("replied") or 0),
-       "closed": int(row_dict.get("closed") or 0),
-   }
-
-
 def create_bug_feedback(user_id, username, title, description, contact=""):
    """Create a bug feedback entry (with XSS protection)"""
    with db_pool.get_db() as conn:
        cursor = conn.cursor()
+       cst_tz = pytz.timezone("Asia/Shanghai")
+       cst_time = datetime.now(cst_tz).strftime("%Y-%m-%d %H:%M:%S")
+
+       safe_title = escape_html(title) if title else ""
+       safe_description = escape_html(description) if description else ""
+       safe_contact = escape_html(contact) if contact else ""
+       safe_username = escape_html(username) if username else ""

        cursor.execute(
            """
            INSERT INTO bug_feedbacks (user_id, username, title, description, contact, created_at)
            VALUES (?, ?, ?, ?, ?, ?)
            """,
-           (
-               user_id,
-               _safe_text(username),
-               _safe_text(title),
-               _safe_text(description),
-               _safe_text(contact),
-               get_cst_now_str(),
-           ),
+           (user_id, safe_username, safe_title, safe_description, safe_contact, cst_time),
        )

        conn.commit()
@@ -77,25 +36,25 @@ def create_bug_feedback(user_id, username, title, description, contact=""):
 def get_bug_feedbacks(limit=100, offset=0, status_filter=None):
    """Get the bug feedback list (admin use)"""
-   safe_limit = _normalize_limit(limit, 100, minimum=1, maximum=1000)
-   safe_offset = _normalize_offset(offset, 0)
    with db_pool.get_db() as conn:
        cursor = conn.cursor()
-       where_sql, params = _build_feedback_filter_sql(status_filter=status_filter)
-       sql = f"""
-           SELECT * FROM bug_feedbacks
-           WHERE {where_sql}
-           ORDER BY created_at DESC
-           LIMIT ? OFFSET ?
-       """
-       cursor.execute(sql, params + [safe_limit, safe_offset])
+       sql = "SELECT * FROM bug_feedbacks WHERE 1=1"
+       params = []
+
+       if status_filter:
+           sql += " AND status = ?"
+           params.append(status_filter)
+
+       sql += " ORDER BY created_at DESC LIMIT ? OFFSET ?"
+       params.extend([limit, offset])
+
+       cursor.execute(sql, params)
        return [dict(row) for row in cursor.fetchall()]


 def get_user_feedbacks(user_id, limit=50):
    """Get the current user's own feedback list"""
-   safe_limit = _normalize_limit(limit, 50, minimum=1, maximum=1000)
    with db_pool.get_db() as conn:
        cursor = conn.cursor()
        cursor.execute(
@@ -105,7 +64,7 @@ def get_user_feedbacks(user_id, limit=50):
            ORDER BY created_at DESC
            LIMIT ?
            """,
-           (user_id, safe_limit),
+           (user_id, limit),
        )
        return [dict(row) for row in cursor.fetchall()]
@@ -123,13 +82,18 @@ def reply_feedback(feedback_id, admin_reply):
"""管理员回复反馈带XSS防护""" """管理员回复反馈带XSS防护"""
with db_pool.get_db() as conn: with db_pool.get_db() as conn:
cursor = conn.cursor() cursor = conn.cursor()
cst_tz = pytz.timezone("Asia/Shanghai")
cst_time = datetime.now(cst_tz).strftime("%Y-%m-%d %H:%M:%S")
safe_reply = escape_html(admin_reply) if admin_reply else ""
cursor.execute( cursor.execute(
""" """
UPDATE bug_feedbacks UPDATE bug_feedbacks
SET admin_reply = ?, status = 'replied', replied_at = ? SET admin_reply = ?, status = 'replied', replied_at = ?
WHERE id = ? WHERE id = ?
""", """,
(_safe_text(admin_reply), get_cst_now_str(), feedback_id), (safe_reply, cst_time, feedback_id),
) )
conn.commit() conn.commit()
@@ -175,4 +139,6 @@ def get_feedback_stats():
        FROM bug_feedbacks
        """
    )
-   return _normalize_feedback_stats_row(cursor.fetchone())
+   row = cursor.fetchone()
+   return dict(row) if row else {"total": 0, "pending": 0, "replied": 0, "closed": 0}
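The removed `_normalize_limit` helper in this file clamps caller-supplied pagination values into a safe range before they reach `LIMIT ? OFFSET ?`. The pattern stands on its own; the function below reimplements the deleted helper's observable behavior as a sketch:

```python
def normalize_limit(value, default: int, minimum: int = 1, maximum: int = 500) -> int:
    """Clamp a caller-supplied page size into [minimum, maximum].

    Unparseable input falls back to the default before clamping, so a
    bad query string can never produce an unbounded LIMIT.
    """
    try:
        parsed = int(value)
    except (TypeError, ValueError):
        parsed = default
    return max(minimum, min(maximum, parsed))


results = [
    normalize_limit(None, 50),    # unparseable -> default
    normalize_limit("25", 50),    # numeric string accepted
    normalize_limit(10_000, 50),  # clamped to maximum
    normalize_limit(-3, 50),      # clamped to minimum
]
```

Even though `LIMIT` values are bound as parameters (so injection is not the issue), clamping protects the server from a single request forcing a full-table scan and serialization.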


@@ -28,143 +28,105 @@ def set_current_version(conn, version: int) -> None:
    conn.commit()


-def _table_exists(cursor, table_name: str) -> bool:
-   cursor.execute("SELECT name FROM sqlite_master WHERE type='table' AND name=?", (str(table_name),))
-   return cursor.fetchone() is not None
-
-
-def _get_table_columns(cursor, table_name: str) -> set[str]:
-   cursor.execute(f"PRAGMA table_info({table_name})")
-   return {col[1] for col in cursor.fetchall()}
-
-
-def _add_column_if_missing(cursor, table_name: str, columns: set[str], column_name: str, column_ddl: str, *, ok_message: str) -> bool:
-   if column_name in columns:
-       return False
-   cursor.execute(f"ALTER TABLE {table_name} ADD COLUMN {column_name} {column_ddl}")
-   columns.add(column_name)
-   print(ok_message)
-   return True
-
-
-def _read_row_value(row, key: str, index: int):
-   if isinstance(row, sqlite3.Row):
-       return row[key]
-   return row[index]
-
-
-def _get_migration_steps():
-   return [
-       (1, _migrate_to_v1),
-       (2, _migrate_to_v2),
-       (3, _migrate_to_v3),
-       (4, _migrate_to_v4),
-       (5, _migrate_to_v5),
-       (6, _migrate_to_v6),
-       (7, _migrate_to_v7),
-       (8, _migrate_to_v8),
-       (9, _migrate_to_v9),
-       (10, _migrate_to_v10),
-       (11, _migrate_to_v11),
-       (12, _migrate_to_v12),
-       (13, _migrate_to_v13),
-       (14, _migrate_to_v14),
-       (15, _migrate_to_v15),
-       (16, _migrate_to_v16),
-       (17, _migrate_to_v17),
-       (18, _migrate_to_v18),
-       (19, _migrate_to_v19),
-       (20, _migrate_to_v20),
-       (21, _migrate_to_v21),
-   ]
-
-
 def migrate_database(conn, target_version: int) -> None:
    """Database migration: incremental upgrades by version (forward compatible)."""
    cursor = conn.cursor()
    cursor.execute("INSERT OR IGNORE INTO db_version (id, version, updated_at) VALUES (1, 0, ?)", (get_cst_now_str(),))
    conn.commit()

-   target_version = int(target_version)
    current_version = get_current_version(conn)
-   for version, migrate_fn in _get_migration_steps():
-       if version > target_version or current_version >= version:
-           continue
-       migrate_fn(conn)
-       current_version = version
+   if current_version < 1:
+       _migrate_to_v1(conn)
+       current_version = 1
+   if current_version < 2:
+       _migrate_to_v2(conn)
+       current_version = 2
+   if current_version < 3:
+       _migrate_to_v3(conn)
+       current_version = 3
+   if current_version < 4:
+       _migrate_to_v4(conn)
+       current_version = 4
+   if current_version < 5:
+       _migrate_to_v5(conn)
+       current_version = 5
+   if current_version < 6:
+       _migrate_to_v6(conn)
+       current_version = 6
+   if current_version < 7:
+       _migrate_to_v7(conn)
+       current_version = 7
+   if current_version < 8:
+       _migrate_to_v8(conn)
+       current_version = 8
+   if current_version < 9:
+       _migrate_to_v9(conn)
+       current_version = 9
+   if current_version < 10:
+       _migrate_to_v10(conn)
+       current_version = 10
+   if current_version < 11:
+       _migrate_to_v11(conn)
+       current_version = 11
+   if current_version < 12:
+       _migrate_to_v12(conn)
+       current_version = 12
+   if current_version < 13:
+       _migrate_to_v13(conn)
+       current_version = 13
+   if current_version < 14:
+       _migrate_to_v14(conn)
+       current_version = 14
+   if current_version < 15:
+       _migrate_to_v15(conn)
+       current_version = 15
+   if current_version < 16:
+       _migrate_to_v16(conn)
+       current_version = 16
+   if current_version < 17:
+       _migrate_to_v17(conn)
+       current_version = 17
+   if current_version < 18:
+       _migrate_to_v18(conn)
+       current_version = 18

-   stored_version = get_current_version(conn)
-   if stored_version != current_version:
-       set_current_version(conn, current_version)
-   if current_version != target_version:
-       print(f" [WARN] target version {target_version} not fully reachable; stopped at {current_version}")
+   if current_version != int(target_version):
+       set_current_version(conn, int(target_version))
 def _migrate_to_v1(conn):
    """Migrate to version 1 - add missing columns"""
    cursor = conn.cursor()

-   system_columns = _get_table_columns(cursor, "system_config")
-   _add_column_if_missing(
-       cursor,
-       "system_config",
-       system_columns,
-       "schedule_weekdays",
-       'TEXT DEFAULT "1,2,3,4,5,6,7"',
-       ok_message=" [OK] added schedule_weekdays column",
-   )
-   _add_column_if_missing(
-       cursor,
-       "system_config",
-       system_columns,
-       "max_screenshot_concurrent",
-       "INTEGER DEFAULT 3",
-       ok_message=" [OK] added max_screenshot_concurrent column",
-   )
-   _add_column_if_missing(
-       cursor,
-       "system_config",
-       system_columns,
-       "max_concurrent_per_account",
-       "INTEGER DEFAULT 1",
-       ok_message=" [OK] added max_concurrent_per_account column",
-   )
-   _add_column_if_missing(
-       cursor,
-       "system_config",
-       system_columns,
-       "auto_approve_enabled",
-       "INTEGER DEFAULT 0",
-       ok_message=" [OK] added auto_approve_enabled column",
-   )
-   _add_column_if_missing(
-       cursor,
-       "system_config",
-       system_columns,
-       "auto_approve_hourly_limit",
-       "INTEGER DEFAULT 10",
-       ok_message=" [OK] added auto_approve_hourly_limit column",
-   )
-   _add_column_if_missing(
-       cursor,
-       "system_config",
-       system_columns,
-       "auto_approve_vip_days",
-       "INTEGER DEFAULT 7",
-       ok_message=" [OK] added auto_approve_vip_days column",
-   )
-
-   task_log_columns = _get_table_columns(cursor, "task_logs")
-   _add_column_if_missing(
-       cursor,
-       "task_logs",
-       task_log_columns,
-       "duration",
-       "INTEGER",
-       ok_message=" [OK] added duration column to task_logs",
-   )
+   cursor.execute("PRAGMA table_info(system_config)")
+   columns = [col[1] for col in cursor.fetchall()]
+
+   if "schedule_weekdays" not in columns:
+       cursor.execute('ALTER TABLE system_config ADD COLUMN schedule_weekdays TEXT DEFAULT "1,2,3,4,5,6,7"')
+       print(" ✓ added schedule_weekdays column")
+
+   if "max_screenshot_concurrent" not in columns:
+       cursor.execute("ALTER TABLE system_config ADD COLUMN max_screenshot_concurrent INTEGER DEFAULT 3")
+       print(" ✓ added max_screenshot_concurrent column")
+
+   if "max_concurrent_per_account" not in columns:
+       cursor.execute("ALTER TABLE system_config ADD COLUMN max_concurrent_per_account INTEGER DEFAULT 1")
+       print(" ✓ added max_concurrent_per_account column")
+
+   if "auto_approve_enabled" not in columns:
+       cursor.execute("ALTER TABLE system_config ADD COLUMN auto_approve_enabled INTEGER DEFAULT 0")
+       print(" ✓ added auto_approve_enabled column")
+
+   if "auto_approve_hourly_limit" not in columns:
+       cursor.execute("ALTER TABLE system_config ADD COLUMN auto_approve_hourly_limit INTEGER DEFAULT 10")
+       print(" ✓ added auto_approve_hourly_limit column")
+
+   if "auto_approve_vip_days" not in columns:
+       cursor.execute("ALTER TABLE system_config ADD COLUMN auto_approve_vip_days INTEGER DEFAULT 7")
+       print(" ✓ added auto_approve_vip_days column")
+
+   cursor.execute("PRAGMA table_info(task_logs)")
+   columns = [col[1] for col in cursor.fetchall()]
+   if "duration" not in columns:
+       cursor.execute("ALTER TABLE task_logs ADD COLUMN duration INTEGER")
+       print(" ✓ added duration column to task_logs")

    conn.commit()
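The deleted side of `migrate_database` drives upgrades from an ordered list of `(version, migration)` pairs instead of a hand-written if-chain. The shape of that loop can be sketched independently; `PRAGMA user_version` stands in here for the project's `db_version` table, and the two toy migrations are invented for illustration:

```python
import sqlite3


def migrate_to_v1(conn):
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY)")


def migrate_to_v2(conn):
    conn.execute("ALTER TABLE users ADD COLUMN status TEXT DEFAULT 'active'")


# Ordered (version, migration) pairs replace a growing if-chain;
# adding a migration is a one-line table edit.
MIGRATION_STEPS = [(1, migrate_to_v1), (2, migrate_to_v2)]


def migrate(conn: sqlite3.Connection, target_version: int) -> int:
    current = conn.execute("PRAGMA user_version").fetchone()[0]
    for version, step in MIGRATION_STEPS:
        if version > target_version or current >= version:
            continue  # already applied, or beyond the requested target
        step(conn)
        current = version
    # PRAGMA cannot take a bound parameter; current is a trusted int.
    conn.execute(f"PRAGMA user_version = {current}")
    return current


conn = sqlite3.connect(":memory:")
reached = migrate(conn, target_version=2)
columns = [row[1] for row in conn.execute("PRAGMA table_info(users)")]
```

A useful property of the loop over the if-chain: the recorded version can never run ahead of the migrations that actually executed, since it is only advanced after each step succeeds.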
@@ -173,39 +135,24 @@ def _migrate_to_v2(conn):
"""迁移到版本2 - 添加代理配置字段""" """迁移到版本2 - 添加代理配置字段"""
cursor = conn.cursor() cursor = conn.cursor()
columns = _get_table_columns(cursor, "system_config") cursor.execute("PRAGMA table_info(system_config)")
_add_column_if_missing( columns = [col[1] for col in cursor.fetchall()]
cursor,
"system_config", if "proxy_enabled" not in columns:
columns, cursor.execute("ALTER TABLE system_config ADD COLUMN proxy_enabled INTEGER DEFAULT 0")
"proxy_enabled", print(" ✓ 添加 proxy_enabled 字段")
"INTEGER DEFAULT 0",
ok_message=" [OK] 添加 proxy_enabled 字段", if "proxy_api_url" not in columns:
) cursor.execute('ALTER TABLE system_config ADD COLUMN proxy_api_url TEXT DEFAULT ""')
_add_column_if_missing( print(" ✓ 添加 proxy_api_url 字段")
cursor,
"system_config", if "proxy_expire_minutes" not in columns:
columns, cursor.execute("ALTER TABLE system_config ADD COLUMN proxy_expire_minutes INTEGER DEFAULT 3")
"proxy_api_url", print(" ✓ 添加 proxy_expire_minutes 字段")
'TEXT DEFAULT ""',
ok_message=" [OK] 添加 proxy_api_url 字段", if "enable_screenshot" not in columns:
) cursor.execute("ALTER TABLE system_config ADD COLUMN enable_screenshot INTEGER DEFAULT 1")
_add_column_if_missing( print(" ✓ 添加 enable_screenshot 字段")
cursor,
"system_config",
columns,
"proxy_expire_minutes",
"INTEGER DEFAULT 3",
ok_message=" [OK] 添加 proxy_expire_minutes 字段",
)
_add_column_if_missing(
cursor,
"system_config",
columns,
"enable_screenshot",
"INTEGER DEFAULT 1",
ok_message=" [OK] 添加 enable_screenshot 字段",
)
conn.commit() conn.commit()
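Both sides of the v1/v2 migration hunks guard every `ALTER TABLE ... ADD COLUMN` behind a `PRAGMA table_info` lookup, because SQLite has no `ADD COLUMN IF NOT EXISTS`. A self-contained sketch of that idempotent-ALTER pattern (helper name mine; table and column taken from the diff):

```python
import sqlite3


def add_column_if_missing(conn: sqlite3.Connection, table: str, column: str, ddl: str) -> bool:
    """Add a column only when it is absent, so re-running is harmless.

    PRAGMA table_info returns one row per column; index 1 is the name.
    Table/column names here come from trusted migration code, not user
    input, which is why f-string interpolation is acceptable.
    """
    cur = conn.execute(f"PRAGMA table_info({table})")
    existing = {row[1] for row in cur.fetchall()}
    if column in existing:
        return False
    conn.execute(f"ALTER TABLE {table} ADD COLUMN {column} {ddl}")
    return True


conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE system_config (id INTEGER PRIMARY KEY)")

first = add_column_if_missing(conn, "system_config", "proxy_enabled", "INTEGER DEFAULT 0")
second = add_column_if_missing(conn, "system_config", "proxy_enabled", "INTEGER DEFAULT 0")
```

Without the guard, a second run of the same migration would raise `sqlite3.OperationalError: duplicate column name`.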
@@ -214,31 +161,20 @@ def _migrate_to_v3(conn):
"""迁移到版本3 - 添加账号状态和登录失败计数字段""" """迁移到版本3 - 添加账号状态和登录失败计数字段"""
cursor = conn.cursor() cursor = conn.cursor()
columns = _get_table_columns(cursor, "accounts") cursor.execute("PRAGMA table_info(accounts)")
_add_column_if_missing( columns = [col[1] for col in cursor.fetchall()]
cursor,
"accounts", if "status" not in columns:
columns, cursor.execute('ALTER TABLE accounts ADD COLUMN status TEXT DEFAULT "active"')
"status", print(" ✓ 添加 accounts.status 字段 (账号状态)")
'TEXT DEFAULT "active"',
ok_message=" [OK] 添加 accounts.status 字段 (账号状态)", if "login_fail_count" not in columns:
) cursor.execute("ALTER TABLE accounts ADD COLUMN login_fail_count INTEGER DEFAULT 0")
_add_column_if_missing( print(" ✓ 添加 accounts.login_fail_count 字段 (登录失败计数)")
cursor,
"accounts", if "last_login_error" not in columns:
columns, cursor.execute("ALTER TABLE accounts ADD COLUMN last_login_error TEXT")
"login_fail_count", print(" ✓ 添加 accounts.last_login_error 字段 (最后登录错误)")
"INTEGER DEFAULT 0",
ok_message=" [OK] 添加 accounts.login_fail_count 字段 (登录失败计数)",
)
_add_column_if_missing(
cursor,
"accounts",
columns,
"last_login_error",
"TEXT",
ok_message=" [OK] 添加 accounts.last_login_error 字段 (最后登录错误)",
)
conn.commit() conn.commit()
@@ -247,15 +183,12 @@ def _migrate_to_v4(conn):
"""迁移到版本4 - 添加任务来源字段""" """迁移到版本4 - 添加任务来源字段"""
cursor = conn.cursor() cursor = conn.cursor()
columns = _get_table_columns(cursor, "task_logs") cursor.execute("PRAGMA table_info(task_logs)")
_add_column_if_missing( columns = [col[1] for col in cursor.fetchall()]
cursor,
"task_logs", if "source" not in columns:
columns, cursor.execute('ALTER TABLE task_logs ADD COLUMN source TEXT DEFAULT "manual"')
"source", print(" ✓ 添加 task_logs.source 字段 (任务来源: manual/scheduled/immediate)")
'TEXT DEFAULT "manual"',
ok_message=" [OK] 添加 task_logs.source 字段 (任务来源: manual/scheduled/immediate)",
)
conn.commit() conn.commit()
@@ -286,7 +219,7 @@ def _migrate_to_v5(conn):
            )
            """
        )
-       print(" [OK] created user_schedules table (user scheduled tasks)")
+       print(" ✓ created user_schedules table (user scheduled tasks)")

    cursor.execute(
        """
@@ -310,12 +243,12 @@ def _migrate_to_v5(conn):
            )
            """
        )
-       print(" [OK] created schedule_execution_logs table (schedule execution logs)")
+       print(" ✓ created schedule_execution_logs table (schedule execution logs)")

    cursor.execute("CREATE INDEX IF NOT EXISTS idx_user_schedules_user_id ON user_schedules(user_id)")
    cursor.execute("CREATE INDEX IF NOT EXISTS idx_user_schedules_enabled ON user_schedules(enabled)")
    cursor.execute("CREATE INDEX IF NOT EXISTS idx_user_schedules_next_run ON user_schedules(next_run_at)")
-   print(" [OK] created user_schedules indexes")
+   print(" ✓ created user_schedules indexes")

    conn.commit()
@@ -338,10 +271,10 @@ def _migrate_to_v6(conn):
            )
            """
        )
-       print(" [OK] created announcements table (announcements)")
+       print(" ✓ created announcements table (announcements)")

    cursor.execute("CREATE INDEX IF NOT EXISTS idx_announcements_active ON announcements(is_active)")
    cursor.execute("CREATE INDEX IF NOT EXISTS idx_announcements_created_at ON announcements(created_at)")
-   print(" [OK] created announcements indexes")
+   print(" ✓ created announcements indexes")

    cursor.execute("SELECT name FROM sqlite_master WHERE type='table' AND name='announcement_dismissals'")
    if not cursor.fetchone():
@@ -357,9 +290,9 @@ def _migrate_to_v6(conn):
            )
            """
        )
-       print(" [OK] created announcement_dismissals table (announcement dismissal records)")
+       print(" ✓ created announcement_dismissals table (announcement dismissal records)")

    cursor.execute("CREATE INDEX IF NOT EXISTS idx_announcement_dismissals_user ON announcement_dismissals(user_id)")
-   print(" [OK] created announcement_dismissals index")
+   print(" ✓ created announcement_dismissals index")

    conn.commit()
@@ -367,17 +300,20 @@ def _migrate_to_v6(conn):
 def _migrate_to_v7(conn):
    """Migrate to version 7 - store Beijing time uniformly (shift historical UTC timestamp columns by +8 hours)"""
    cursor = conn.cursor()
-   columns_cache: dict[str, set[str]] = {}
+
+   def table_exists(table_name: str) -> bool:
+       cursor.execute("SELECT name FROM sqlite_master WHERE type='table' AND name=?", (table_name,))
+       return cursor.fetchone() is not None
+
+   def column_exists(table_name: str, column_name: str) -> bool:
+       cursor.execute(f"PRAGMA table_info({table_name})")
+       return any(row[1] == column_name for row in cursor.fetchall())

    def shift_utc_to_cst(table_name: str, column_name: str) -> None:
-       if not _table_exists(cursor, table_name):
+       if not table_exists(table_name):
            return
-       if table_name not in columns_cache:
-           columns_cache[table_name] = _get_table_columns(cursor, table_name)
-       if column_name not in columns_cache[table_name]:
+       if not column_exists(table_name, column_name):
            return
        cursor.execute(
            f"""
            UPDATE {table_name}
@@ -393,6 +329,10 @@ def _migrate_to_v7(conn):
("accounts", "created_at"), ("accounts", "created_at"),
("password_reset_requests", "created_at"), ("password_reset_requests", "created_at"),
("password_reset_requests", "processed_at"), ("password_reset_requests", "processed_at"),
]:
shift_utc_to_cst(table, col)
for table, col in [
("smtp_configs", "created_at"), ("smtp_configs", "created_at"),
("smtp_configs", "updated_at"), ("smtp_configs", "updated_at"),
("smtp_configs", "last_success_at"), ("smtp_configs", "last_success_at"),
@@ -400,6 +340,10 @@ def _migrate_to_v7(conn):
("email_tokens", "created_at"), ("email_tokens", "created_at"),
("email_logs", "created_at"), ("email_logs", "created_at"),
("email_stats", "last_updated"), ("email_stats", "last_updated"),
]:
shift_utc_to_cst(table, col)
for table, col in [
("task_checkpoints", "created_at"), ("task_checkpoints", "created_at"),
("task_checkpoints", "updated_at"), ("task_checkpoints", "updated_at"),
("task_checkpoints", "completed_at"), ("task_checkpoints", "completed_at"),
@@ -407,7 +351,7 @@ def _migrate_to_v7(conn):
shift_utc_to_cst(table, col) shift_utc_to_cst(table, col)
conn.commit() conn.commit()
print(" [OK] 时区迁移历史UTC时间已转换为北京时间") print(" 时区迁移历史UTC时间已转换为北京时间")
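The hunk above truncates the body of the `UPDATE {table_name}` statement, but the intent (shift stored UTC timestamps forward by eight hours into Beijing time) can be sketched with SQLite's `datetime()` modifiers. The table and sample value below are illustrative only:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, created_at TEXT)")
cur.execute("INSERT INTO accounts (created_at) VALUES ('2026-01-08 09:48:33')")

# Shift a UTC timestamp to UTC+8 in place; the WHERE guard leaves
# NULL/empty values untouched so the migration is safe to re-run rows over.
cur.execute(
    """
    UPDATE accounts
    SET created_at = datetime(created_at, '+8 hours')
    WHERE created_at IS NOT NULL AND created_at != ''
    """
)
conn.commit()

print(cur.execute("SELECT created_at FROM accounts").fetchone()[0])
# → 2026-01-08 17:48:33
```

Note that such a shift is only correct if it runs exactly once, which is why it is gated behind the schema-version check.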
def _migrate_to_v8(conn):
@@ -415,23 +359,15 @@ def _migrate_to_v8(conn):
    cursor = conn.cursor()

    # 1) 增量字段random_delay旧库可能不存在
-    columns = _get_table_columns(cursor, "user_schedules")
-    _add_column_if_missing(
-        cursor,
-        "user_schedules",
-        columns,
-        "random_delay",
-        "INTEGER DEFAULT 0",
-        ok_message="  [OK] 添加 user_schedules.random_delay 字段",
-    )
-    _add_column_if_missing(
-        cursor,
-        "user_schedules",
-        columns,
-        "next_run_at",
-        "TIMESTAMP",
-        ok_message="  [OK] 添加 user_schedules.next_run_at 字段",
-    )
+    cursor.execute("PRAGMA table_info(user_schedules)")
+    columns = [col[1] for col in cursor.fetchall()]
+    if "random_delay" not in columns:
+        cursor.execute("ALTER TABLE user_schedules ADD COLUMN random_delay INTEGER DEFAULT 0")
+        print("  ✓ 添加 user_schedules.random_delay 字段")
+
+    if "next_run_at" not in columns:
+        cursor.execute("ALTER TABLE user_schedules ADD COLUMN next_run_at TIMESTAMP")
+        print("  ✓ 添加 user_schedules.next_run_at 字段")

    cursor.execute("CREATE INDEX IF NOT EXISTS idx_user_schedules_next_run ON user_schedules(next_run_at)")
    conn.commit()
@@ -456,12 +392,12 @@ def _migrate_to_v8(conn):
        fixed = 0
        for row in rows:
            try:
-                schedule_id = _read_row_value(row, "id", 0)
-                schedule_time = _read_row_value(row, "schedule_time", 1)
-                weekdays = _read_row_value(row, "weekdays", 2)
-                random_delay = _read_row_value(row, "random_delay", 3)
-                last_run_at = _read_row_value(row, "last_run_at", 4)
-                next_run_at = _read_row_value(row, "next_run_at", 5)
+                schedule_id = row["id"] if isinstance(row, sqlite3.Row) else row[0]
+                schedule_time = row["schedule_time"] if isinstance(row, sqlite3.Row) else row[1]
+                weekdays = row["weekdays"] if isinstance(row, sqlite3.Row) else row[2]
+                random_delay = row["random_delay"] if isinstance(row, sqlite3.Row) else row[3]
+                last_run_at = row["last_run_at"] if isinstance(row, sqlite3.Row) else row[4]
+                next_run_at = row["next_run_at"] if isinstance(row, sqlite3.Row) else row[5]
            except Exception:
                continue
@@ -484,7 +420,7 @@ def _migrate_to_v8(conn):
        conn.commit()
        if fixed:
-            print(f"  [OK] 已为 {fixed} 条启用定时任务补算 next_run_at")
+            print(f"  ✓ 已为 {fixed} 条启用定时任务补算 next_run_at")
    except Exception as e:
        # 迁移过程中不阻断主流程;上线后由 worker 兜底补算
        print(f"  ⚠ v8 迁移补算 next_run_at 失败: {e}")
@@ -494,46 +430,27 @@ def _migrate_to_v9(conn):
def _migrate_to_v9(conn):
    """迁移到版本9 - 邮件设置字段迁移(清理 email_service scattered ALTER TABLE"""
    cursor = conn.cursor()
-    if not _table_exists(cursor, "email_settings"):
+    cursor.execute("SELECT name FROM sqlite_master WHERE type='table' AND name='email_settings'")
+    if not cursor.fetchone():
        # 邮件表由 email_service.init_email_tables 创建;此处仅做增量字段迁移
        return
-    columns = _get_table_columns(cursor, "email_settings")
+    cursor.execute("PRAGMA table_info(email_settings)")
+    columns = [col[1] for col in cursor.fetchall()]
    changed = False
-    changed = (
-        _add_column_if_missing(
-            cursor,
-            "email_settings",
-            columns,
-            "register_verify_enabled",
-            "INTEGER DEFAULT 0",
-            ok_message="  [OK] 添加 email_settings.register_verify_enabled 字段",
-        )
-        or changed
-    )
-    changed = (
-        _add_column_if_missing(
-            cursor,
-            "email_settings",
-            columns,
-            "base_url",
-            "TEXT DEFAULT ''",
-            ok_message="  [OK] 添加 email_settings.base_url 字段",
-        )
-        or changed
-    )
-    changed = (
-        _add_column_if_missing(
-            cursor,
-            "email_settings",
-            columns,
-            "task_notify_enabled",
-            "INTEGER DEFAULT 0",
-            ok_message="  [OK] 添加 email_settings.task_notify_enabled 字段",
-        )
-        or changed
-    )
+    if "register_verify_enabled" not in columns:
+        cursor.execute("ALTER TABLE email_settings ADD COLUMN register_verify_enabled INTEGER DEFAULT 0")
+        print("  ✓ 添加 email_settings.register_verify_enabled 字段")
+        changed = True
+    if "base_url" not in columns:
+        cursor.execute("ALTER TABLE email_settings ADD COLUMN base_url TEXT DEFAULT ''")
+        print("  ✓ 添加 email_settings.base_url 字段")
+        changed = True
+    if "task_notify_enabled" not in columns:
+        cursor.execute("ALTER TABLE email_settings ADD COLUMN task_notify_enabled INTEGER DEFAULT 0")
+        print("  ✓ 添加 email_settings.task_notify_enabled 字段")
+        changed = True
    if changed:
        conn.commit()
@@ -542,31 +459,18 @@ def _migrate_to_v9(conn):
def _migrate_to_v10(conn):
    """迁移到版本10 - users 邮箱字段迁移(避免运行时 ALTER TABLE"""
    cursor = conn.cursor()
-    columns = _get_table_columns(cursor, "users")
+    cursor.execute("PRAGMA table_info(users)")
+    columns = [col[1] for col in cursor.fetchall()]
    changed = False
-    changed = (
-        _add_column_if_missing(
-            cursor,
-            "users",
-            columns,
-            "email_verified",
-            "INTEGER DEFAULT 0",
-            ok_message="  [OK] 添加 users.email_verified 字段",
-        )
-        or changed
-    )
-    changed = (
-        _add_column_if_missing(
-            cursor,
-            "users",
-            columns,
-            "email_notify_enabled",
-            "INTEGER DEFAULT 1",
-            ok_message="  [OK] 添加 users.email_notify_enabled 字段",
-        )
-        or changed
-    )
+    if "email_verified" not in columns:
+        cursor.execute("ALTER TABLE users ADD COLUMN email_verified INTEGER DEFAULT 0")
+        print("  ✓ 添加 users.email_verified 字段")
+        changed = True
+    if "email_notify_enabled" not in columns:
+        cursor.execute("ALTER TABLE users ADD COLUMN email_notify_enabled INTEGER DEFAULT 1")
+        print("  ✓ 添加 users.email_notify_enabled 字段")
+        changed = True
    if changed:
        conn.commit()
@@ -591,7 +495,7 @@ def _migrate_to_v11(conn):
        conn.commit()
        if updated:
-            print(f"  [OK] 已将 {updated} 个 pending 用户迁移为 approved")
+            print(f"  ✓ 已将 {updated} 个 pending 用户迁移为 approved")
    except sqlite3.OperationalError as e:
        print(f"  ⚠️ v11 迁移跳过: {e}")
@@ -753,24 +657,19 @@ def _migrate_to_v15(conn):
    """迁移到版本15 - 邮件设置:新设备登录提醒全局开关"""
    cursor = conn.cursor()
-    if not _table_exists(cursor, "email_settings"):
+    cursor.execute("SELECT name FROM sqlite_master WHERE type='table' AND name='email_settings'")
+    if not cursor.fetchone():
        # 邮件表由 email_service.init_email_tables 创建;此处仅做增量字段迁移
        return
-    columns = _get_table_columns(cursor, "email_settings")
+    cursor.execute("PRAGMA table_info(email_settings)")
+    columns = [col[1] for col in cursor.fetchall()]
    changed = False
-    changed = (
-        _add_column_if_missing(
-            cursor,
-            "email_settings",
-            columns,
-            "login_alert_enabled",
-            "INTEGER DEFAULT 1",
-            ok_message="  [OK] 添加 email_settings.login_alert_enabled 字段",
-        )
-        or changed
-    )
+    if "login_alert_enabled" not in columns:
+        cursor.execute("ALTER TABLE email_settings ADD COLUMN login_alert_enabled INTEGER DEFAULT 1")
+        print("  ✓ 添加 email_settings.login_alert_enabled 字段")
+        changed = True
    try:
        cursor.execute("UPDATE email_settings SET login_alert_enabled = 1 WHERE login_alert_enabled IS NULL")
@@ -787,24 +686,22 @@ def _migrate_to_v15(conn):
def _migrate_to_v16(conn):
    """迁移到版本16 - 公告支持图片字段"""
    cursor = conn.cursor()
-    columns = _get_table_columns(cursor, "announcements")
-    if _add_column_if_missing(
-        cursor,
-        "announcements",
-        columns,
-        "image_url",
-        "TEXT",
-        ok_message="  [OK] 添加 announcements.image_url 字段",
-    ):
+    cursor.execute("PRAGMA table_info(announcements)")
+    columns = [col[1] for col in cursor.fetchall()]
+    if "image_url" not in columns:
+        cursor.execute("ALTER TABLE announcements ADD COLUMN image_url TEXT")
        conn.commit()
+        print("  ✓ 添加 announcements.image_url 字段")


def _migrate_to_v17(conn):
    """迁移到版本17 - 金山文档上传配置与用户开关"""
    cursor = conn.cursor()
-    system_columns = _get_table_columns(cursor, "system_config")
+    cursor.execute("PRAGMA table_info(system_config)")
+    columns = [col[1] for col in cursor.fetchall()]
    system_fields = [
        ("kdocs_enabled", "INTEGER DEFAULT 0"),
        ("kdocs_doc_url", "TEXT DEFAULT ''"),
@@ -817,29 +714,21 @@ def _migrate_to_v17(conn):
        ("kdocs_admin_notify_email", "TEXT DEFAULT ''"),
    ]
    for field, ddl in system_fields:
-        _add_column_if_missing(
-            cursor,
-            "system_config",
-            system_columns,
-            field,
-            ddl,
-            ok_message=f"  [OK] 添加 system_config.{field} 字段",
-        )
+        if field not in columns:
+            cursor.execute(f"ALTER TABLE system_config ADD COLUMN {field} {ddl}")
+            print(f"  ✓ 添加 system_config.{field} 字段")

-    user_columns = _get_table_columns(cursor, "users")
+    cursor.execute("PRAGMA table_info(users)")
+    columns = [col[1] for col in cursor.fetchall()]
    user_fields = [
        ("kdocs_unit", "TEXT DEFAULT ''"),
        ("kdocs_auto_upload", "INTEGER DEFAULT 0"),
    ]
    for field, ddl in user_fields:
-        _add_column_if_missing(
-            cursor,
-            "users",
-            user_columns,
-            field,
-            ddl,
-            ok_message=f"  [OK] 添加 users.{field} 字段",
-        )
+        if field not in columns:
+            cursor.execute(f"ALTER TABLE users ADD COLUMN {field} {ddl}")
+            print(f"  ✓ 添加 users.{field} 字段")

    conn.commit()
@@ -848,88 +737,15 @@ def _migrate_to_v18(conn):
    """迁移到版本18 - 金山文档上传:有效行范围配置"""
    cursor = conn.cursor()
-    columns = _get_table_columns(cursor, "system_config")
-    _add_column_if_missing(
-        cursor,
-        "system_config",
-        columns,
-        "kdocs_row_start",
-        "INTEGER DEFAULT 0",
-        ok_message="  [OK] 添加 system_config.kdocs_row_start 字段",
-    )
-    _add_column_if_missing(
-        cursor,
-        "system_config",
-        columns,
-        "kdocs_row_end",
-        "INTEGER DEFAULT 0",
-        ok_message="  [OK] 添加 system_config.kdocs_row_end 字段",
-    )
-    conn.commit()
-
-
-def _migrate_to_v19(conn):
-    """迁移到版本19 - 报表与调度查询复合索引优化"""
-    cursor = conn.cursor()
-    index_statements = [
-        "CREATE INDEX IF NOT EXISTS idx_users_status_created_at ON users(status, created_at)",
-        "CREATE INDEX IF NOT EXISTS idx_task_logs_status_created_at ON task_logs(status, created_at)",
-        "CREATE INDEX IF NOT EXISTS idx_user_schedules_enabled_next_run ON user_schedules(enabled, next_run_at)",
-        "CREATE INDEX IF NOT EXISTS idx_bug_feedbacks_status_created_at ON bug_feedbacks(status, created_at)",
-        "CREATE INDEX IF NOT EXISTS idx_bug_feedbacks_user_created_at ON bug_feedbacks(user_id, created_at)",
-    ]
-    for statement in index_statements:
-        cursor.execute(statement)
-    conn.commit()
-
-
-def _migrate_to_v20(conn):
-    """迁移到版本20 - 慢SQL阈值系统配置"""
-    cursor = conn.cursor()
-    columns = _get_table_columns(cursor, "system_config")
-    _add_column_if_missing(
-        cursor,
-        "system_config",
-        columns,
-        "db_slow_query_ms",
-        "INTEGER DEFAULT 120",
-        ok_message="  [OK] 添加 system_config.db_slow_query_ms 字段",
-    )
-    conn.commit()
-
-
-def _migrate_to_v21(conn):
-    """迁移到版本21 - Passkey 认证设备表"""
-    cursor = conn.cursor()
-    cursor.execute(
-        """
-        CREATE TABLE IF NOT EXISTS passkeys (
-            id INTEGER PRIMARY KEY AUTOINCREMENT,
-            owner_type TEXT NOT NULL,
-            owner_id INTEGER NOT NULL,
-            device_name TEXT NOT NULL,
-            credential_id TEXT UNIQUE NOT NULL,
-            public_key TEXT NOT NULL,
-            sign_count INTEGER DEFAULT 0,
-            transports TEXT DEFAULT '',
-            aaguid TEXT DEFAULT '',
-            created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
-            last_used_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
-        )
-        """
-    )
-    cursor.execute("CREATE INDEX IF NOT EXISTS idx_passkeys_owner ON passkeys(owner_type, owner_id)")
-    cursor.execute(
-        "CREATE INDEX IF NOT EXISTS idx_passkeys_owner_last_used ON passkeys(owner_type, owner_id, last_used_at)"
-    )
-    conn.commit()
+    cursor.execute("PRAGMA table_info(system_config)")
+    columns = [col[1] for col in cursor.fetchall()]
+
+    if "kdocs_row_start" not in columns:
+        cursor.execute("ALTER TABLE system_config ADD COLUMN kdocs_row_start INTEGER DEFAULT 0")
+        print("  ✓ 添加 system_config.kdocs_row_start 字段")
+
+    if "kdocs_row_end" not in columns:
+        cursor.execute("ALTER TABLE system_config ADD COLUMN kdocs_row_end INTEGER DEFAULT 0")
+        print("  ✓ 添加 system_config.kdocs_row_end 字段")
+
+    conn.commit()

View File

@@ -1,173 +0,0 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from __future__ import annotations
import sqlite3
import db_pool
from db.utils import get_cst_now_str
_OWNER_TYPES = {"user", "admin"}
def _normalize_owner_type(owner_type: str) -> str:
normalized = str(owner_type or "").strip().lower()
if normalized not in _OWNER_TYPES:
raise ValueError(f"invalid owner_type: {owner_type}")
return normalized
def list_passkeys(owner_type: str, owner_id: int) -> list[dict]:
owner = _normalize_owner_type(owner_type)
with db_pool.get_db() as conn:
cursor = conn.cursor()
cursor.execute(
"""
SELECT id, owner_type, owner_id, device_name, credential_id, transports,
sign_count, aaguid, created_at, last_used_at
FROM passkeys
WHERE owner_type = ? AND owner_id = ?
ORDER BY datetime(created_at) DESC, id DESC
""",
(owner, int(owner_id)),
)
return [dict(row) for row in cursor.fetchall()]
def count_passkeys(owner_type: str, owner_id: int) -> int:
owner = _normalize_owner_type(owner_type)
with db_pool.get_db() as conn:
cursor = conn.cursor()
cursor.execute(
"SELECT COUNT(*) AS count FROM passkeys WHERE owner_type = ? AND owner_id = ?",
(owner, int(owner_id)),
)
row = cursor.fetchone()
if not row:
return 0
try:
return int(row["count"] or 0)
except Exception:
try:
return int(row[0] or 0)
except Exception:
return 0
def get_passkey_by_credential_id(credential_id: str) -> dict | None:
credential = str(credential_id or "").strip()
if not credential:
return None
with db_pool.get_db() as conn:
cursor = conn.cursor()
cursor.execute(
"""
SELECT id, owner_type, owner_id, device_name, credential_id, public_key,
sign_count, transports, aaguid, created_at, last_used_at
FROM passkeys
WHERE credential_id = ?
""",
(credential,),
)
row = cursor.fetchone()
return dict(row) if row else None
def get_passkey_by_id(owner_type: str, owner_id: int, passkey_id: int) -> dict | None:
owner = _normalize_owner_type(owner_type)
with db_pool.get_db() as conn:
cursor = conn.cursor()
cursor.execute(
"""
SELECT id, owner_type, owner_id, device_name, credential_id, public_key,
sign_count, transports, aaguid, created_at, last_used_at
FROM passkeys
WHERE id = ? AND owner_type = ? AND owner_id = ?
""",
(int(passkey_id), owner, int(owner_id)),
)
row = cursor.fetchone()
return dict(row) if row else None
def create_passkey(
owner_type: str,
owner_id: int,
*,
credential_id: str,
public_key: str,
sign_count: int,
device_name: str,
transports: str = "",
aaguid: str = "",
) -> int | None:
owner = _normalize_owner_type(owner_type)
now = get_cst_now_str()
with db_pool.get_db() as conn:
cursor = conn.cursor()
try:
cursor.execute(
"""
INSERT INTO passkeys (
owner_type,
owner_id,
device_name,
credential_id,
public_key,
sign_count,
transports,
aaguid,
created_at,
last_used_at
)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
""",
(
owner,
int(owner_id),
str(device_name or "").strip(),
str(credential_id or "").strip(),
str(public_key or "").strip(),
int(sign_count or 0),
str(transports or "").strip(),
str(aaguid or "").strip(),
now,
now,
),
)
conn.commit()
return int(cursor.lastrowid)
except sqlite3.IntegrityError:
return None
def update_passkey_usage(passkey_id: int, new_sign_count: int) -> bool:
now = get_cst_now_str()
with db_pool.get_db() as conn:
cursor = conn.cursor()
cursor.execute(
"""
UPDATE passkeys
SET sign_count = ?,
last_used_at = ?
WHERE id = ?
""",
(int(new_sign_count or 0), now, int(passkey_id)),
)
conn.commit()
return cursor.rowcount > 0
def delete_passkey(owner_type: str, owner_id: int, passkey_id: int) -> bool:
owner = _normalize_owner_type(owner_type)
with db_pool.get_db() as conn:
cursor = conn.cursor()
cursor.execute(
"DELETE FROM passkeys WHERE id = ? AND owner_type = ? AND owner_id = ?",
(int(passkey_id), owner, int(owner_id)),
)
conn.commit()
return cursor.rowcount > 0
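The deleted module relies on the `UNIQUE` constraint on `credential_id` so that `create_passkey` returns `None` on a duplicate registration instead of raising. That pattern can be sketched standalone, with the schema trimmed to the one relevant column:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE passkeys ("
    "id INTEGER PRIMARY KEY AUTOINCREMENT, "
    "credential_id TEXT UNIQUE NOT NULL)"
)

def create_passkey(credential_id):
    """Insert a credential; return its rowid, or None if it already exists."""
    try:
        cur = conn.execute(
            "INSERT INTO passkeys (credential_id) VALUES (?)", (credential_id,)
        )
        conn.commit()
        return cur.lastrowid
    except sqlite3.IntegrityError:
        # Duplicate credential_id: signal failure without raising.
        return None

first = create_passkey("cred-abc")   # → 1
second = create_passkey("cred-abc")  # → None
```

Catching `IntegrityError` keeps the caller's control flow simple at the cost of masking other constraint violations; a narrower check would inspect the error message.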

View File

@@ -2,93 +2,12 @@
# -*- coding: utf-8 -*- # -*- coding: utf-8 -*-
from __future__ import annotations from __future__ import annotations
-import json
-from datetime import datetime, timedelta
+from datetime import datetime

import db_pool
from services.schedule_utils import compute_next_run_at, format_cst
from services.time_utils import get_beijing_now
_SCHEDULE_DEFAULT_TIME = "08:00"
_SCHEDULE_DEFAULT_WEEKDAYS = "1,2,3,4,5"
_ALLOWED_SCHEDULE_UPDATE_FIELDS = (
"name",
"enabled",
"schedule_time",
"weekdays",
"browse_type",
"enable_screenshot",
"random_delay",
"account_ids",
)
_ALLOWED_EXEC_LOG_UPDATE_FIELDS = (
"total_accounts",
"success_accounts",
"failed_accounts",
"total_items",
"total_attachments",
"total_screenshots",
"duration_seconds",
"status",
"error_message",
)
def _normalize_limit(limit, default: int, *, minimum: int = 1) -> int:
try:
parsed = int(limit)
except Exception:
parsed = default
if parsed < minimum:
return minimum
return parsed
def _to_int(value, default: int = 0) -> int:
try:
return int(value)
except Exception:
return default
def _format_optional_datetime(dt: datetime | None) -> str | None:
if dt is None:
return None
return format_cst(dt)
def _serialize_account_ids(account_ids) -> str:
return json.dumps(account_ids) if account_ids else "[]"
def _compute_schedule_next_run_str(
*,
now_dt,
schedule_time,
weekdays,
random_delay,
last_run_at,
) -> str:
next_dt = compute_next_run_at(
now=now_dt,
schedule_time=str(schedule_time or _SCHEDULE_DEFAULT_TIME),
weekdays=str(weekdays or _SCHEDULE_DEFAULT_WEEKDAYS),
random_delay=_to_int(random_delay, 0),
last_run_at=str(last_run_at or "") if last_run_at else None,
)
return format_cst(next_dt)
def _map_schedule_log_row(row) -> dict:
log = dict(row)
log["created_at"] = log.get("execute_time")
log["success_count"] = log.get("success_accounts", 0)
log["failed_count"] = log.get("failed_accounts", 0)
log["duration"] = log.get("duration_seconds", 0)
return log
def get_user_schedules(user_id):
    """获取用户的所有定时任务"""
@@ -125,10 +44,14 @@ def create_user_schedule(
    account_ids=None,
):
    """创建用户定时任务"""
+    import json
+
    with db_pool.get_db() as conn:
        cursor = conn.cursor()
        cst_time = format_cst(get_beijing_now())
+        account_ids_str = json.dumps(account_ids) if account_ids else "[]"
        cursor.execute(
            """
            INSERT INTO user_schedules (
@@ -143,8 +66,8 @@ def create_user_schedule(
                weekdays,
                browse_type,
                enable_screenshot,
-                _to_int(random_delay, 0),
-                _serialize_account_ids(account_ids),
+                int(random_delay or 0),
+                account_ids_str,
                cst_time,
                cst_time,
            ),
@@ -156,11 +79,28 @@ def update_user_schedule(schedule_id, **kwargs):
def update_user_schedule(schedule_id, **kwargs):
    """更新用户定时任务"""
+    import json
+
    with db_pool.get_db() as conn:
        cursor = conn.cursor()
        now_dt = get_beijing_now()
        now_str = format_cst(now_dt)
+        updates = []
+        params = []
+        allowed_fields = [
+            "name",
+            "enabled",
+            "schedule_time",
+            "weekdays",
+            "browse_type",
+            "enable_screenshot",
+            "random_delay",
+            "account_ids",
+        ]
+
+        # 读取旧值,用于决定是否需要重算 next_run_at
        cursor.execute(
            """
            SELECT enabled, schedule_time, weekdays, random_delay, last_run_at
@@ -172,11 +112,10 @@ def update_user_schedule(schedule_id, **kwargs):
        current = cursor.fetchone()
        if not current:
            return False
-        current_enabled = _to_int(current[0], 0)
+        current_enabled = int(current[0] or 0)
        current_time = current[1]
        current_weekdays = current[2]
-        current_random_delay = _to_int(current[3], 0)
+        current_random_delay = int(current[3] or 0)
        current_last_run_at = current[4]

        will_enabled = current_enabled
@@ -184,28 +123,21 @@ def update_user_schedule(schedule_id, **kwargs):
        next_weekdays = current_weekdays
        next_random_delay = current_random_delay

-        updates = []
-        params = []
-
-        for field in _ALLOWED_SCHEDULE_UPDATE_FIELDS:
-            if field not in kwargs:
-                continue
-
-            value = kwargs[field]
-            if field == "account_ids" and isinstance(value, list):
-                value = json.dumps(value)
-
-            if field == "enabled":
-                will_enabled = 1 if value else 0
-            if field == "schedule_time":
-                next_time = value
-            if field == "weekdays":
-                next_weekdays = value
-            if field == "random_delay":
-                next_random_delay = int(value or 0)
-
-            updates.append(f"{field} = ?")
-            params.append(value)
+        for field in allowed_fields:
+            if field in kwargs:
+                value = kwargs[field]
+                if field == "account_ids" and isinstance(value, list):
+                    value = json.dumps(value)
+                if field == "enabled":
+                    will_enabled = 1 if value else 0
+                if field == "schedule_time":
+                    next_time = value
+                if field == "weekdays":
+                    next_weekdays = value
+                if field == "random_delay":
+                    next_random_delay = int(value or 0)
+                updates.append(f"{field} = ?")
+                params.append(value)

        if not updates:
            return False
@@ -213,26 +145,30 @@ def update_user_schedule(schedule_id, **kwargs):
        updates.append("updated_at = ?")
        params.append(now_str)

-        config_changed = any(key in kwargs for key in ("schedule_time", "weekdays", "random_delay"))
+        # 关键字段变更后重算 next_run_at确保索引驱动不会跑偏
+        #
+        # 需求:当用户修改“执行时间/执行日期/随机±15分钟”后即使今天已经执行过也允许按新配置在今天再次触发。
+        # 做法:这些关键字段发生变更时,重算 next_run_at 时忽略 last_run_at 的“同日仅一次”限制。
+        config_changed = any(key in kwargs for key in ["schedule_time", "weekdays", "random_delay"])
        enabled_toggled = "enabled" in kwargs
        should_recompute_next = config_changed or (enabled_toggled and will_enabled == 1)

        if should_recompute_next:
-            next_run_at = _compute_schedule_next_run_str(
-                now_dt=now_dt,
-                schedule_time=next_time,
-                weekdays=next_weekdays,
-                random_delay=next_random_delay,
-                last_run_at=None if config_changed else current_last_run_at,
-            )
+            next_dt = compute_next_run_at(
+                now=now_dt,
+                schedule_time=str(next_time or "08:00"),
+                weekdays=str(next_weekdays or "1,2,3,4,5"),
+                random_delay=int(next_random_delay or 0),
+                last_run_at=None if config_changed else (str(current_last_run_at or "") if current_last_run_at else None),
+            )
            updates.append("next_run_at = ?")
-            params.append(next_run_at)
+            params.append(format_cst(next_dt))

+        # 若本次显式禁用任务,则 next_run_at 清空(与 toggle 行为保持一致)
        if enabled_toggled and will_enabled == 0:
            updates.append("next_run_at = ?")
            params.append(None)

        params.append(schedule_id)
        sql = f"UPDATE user_schedules SET {', '.join(updates)} WHERE id = ?"
        cursor.execute(sql, params)
        conn.commit()
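The comments in this hunk describe the recompute rule: changing `schedule_time`, `weekdays`, or `random_delay` forces a fresh `next_run_at` that ignores `last_run_at`, while a plain re-enable keeps the "once per day" guard. Setting aside the internals of `compute_next_run_at`, the decision logic alone can be sketched as a pure function (the name and return shape below are hypothetical, chosen for illustration):

```python
def decide_recompute(kwargs, current_enabled):
    """Return (should_recompute, ignore_last_run) for an update payload."""
    config_changed = any(
        k in kwargs for k in ("schedule_time", "weekdays", "random_delay")
    )
    enabled_toggled = "enabled" in kwargs
    will_enabled = (1 if kwargs["enabled"] else 0) if enabled_toggled else current_enabled
    should = config_changed or (enabled_toggled and will_enabled == 1)
    # When a config field changed, last_run_at's "once per day" guard is dropped
    # so the task may fire again today under the new settings.
    return should, config_changed

print(decide_recompute({"schedule_time": "09:30"}, 1))  # → (True, True)
print(decide_recompute({"enabled": True}, 0))           # → (True, False)
print(decide_recompute({"name": "x"}, 1))               # → (False, False)
```

Keeping this decision separate from the SQL assembly makes the "modify config, re-trigger today" requirement easy to unit-test.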
@@ -267,19 +203,28 @@ def toggle_user_schedule(schedule_id, enabled):
            )
            row = cursor.fetchone()
            if row:
-                schedule_time, weekdays, random_delay, last_run_at, existing_next_run_at = row
-                existing_next_run_at = str(existing_next_run_at or "").strip() or None
+                schedule_time, weekdays, random_delay, last_run_at, existing_next_run_at = (
+                    row[0],
+                    row[1],
+                    row[2],
+                    row[3],
+                    row[4],
+                )
+                existing_next_run_at = str(existing_next_run_at or "").strip() or None
+                # 若 next_run_at 已经被“修改配置”逻辑预先计算好且仍在未来,则优先沿用,
+                # 避免 last_run_at 的“同日仅一次”限制阻塞用户把任务调整到今天再次触发。
                if existing_next_run_at and existing_next_run_at > now_str:
                    next_run_at = existing_next_run_at
                else:
-                    next_run_at = _compute_schedule_next_run_str(
-                        now_dt=now_dt,
-                        schedule_time=schedule_time,
-                        weekdays=weekdays,
-                        random_delay=random_delay,
-                        last_run_at=last_run_at,
-                    )
+                    next_dt = compute_next_run_at(
+                        now=now_dt,
+                        schedule_time=str(schedule_time or "08:00"),
+                        weekdays=str(weekdays or "1,2,3,4,5"),
+                        random_delay=int(random_delay or 0),
+                        last_run_at=str(last_run_at or "") if last_run_at else None,
+                    )
+                    next_run_at = format_cst(next_dt)

        cursor.execute(
            """
@@ -327,15 +272,16 @@ def update_schedule_last_run(schedule_id):
        row = cursor.fetchone()
        if not row:
            return False
-        schedule_time, weekdays, random_delay = row
-        next_run_at = _compute_schedule_next_run_str(
-            now_dt=now_dt,
-            schedule_time=schedule_time,
-            weekdays=weekdays,
-            random_delay=random_delay,
-            last_run_at=now_str,
-        )
+        schedule_time, weekdays, random_delay = row[0], row[1], row[2]
+        next_dt = compute_next_run_at(
+            now=now_dt,
+            schedule_time=str(schedule_time or "08:00"),
+            weekdays=str(weekdays or "1,2,3,4,5"),
+            random_delay=int(random_delay or 0),
+            last_run_at=now_str,
+        )
+        next_run_at = format_cst(next_dt)

        cursor.execute(
            """
@@ -359,11 +305,7 @@ def update_schedule_next_run(schedule_id: int, next_run_at: str) -> bool:
            SET next_run_at = ?, updated_at = ?
            WHERE id = ?
            """,
-            (
-                str(next_run_at or "").strip() or None,
-                format_cst(get_beijing_now()),
-                int(schedule_id),
-            ),
+            (str(next_run_at or "").strip() or None, format_cst(get_beijing_now()), int(schedule_id)),
        )
        conn.commit()
        return cursor.rowcount > 0
@@ -386,15 +328,15 @@ def recompute_schedule_next_run(schedule_id: int, *, now_dt=None) -> bool:
        if not row:
            return False
-        schedule_time, weekdays, random_delay, last_run_at = row
-        next_run_at = _compute_schedule_next_run_str(
-            now_dt=now_dt,
-            schedule_time=schedule_time,
-            weekdays=weekdays,
-            random_delay=random_delay,
-            last_run_at=last_run_at,
-        )
-    return update_schedule_next_run(int(schedule_id), next_run_at)
+        schedule_time, weekdays, random_delay, last_run_at = row[0], row[1], row[2], row[3]
+        next_dt = compute_next_run_at(
+            now=now_dt,
+            schedule_time=str(schedule_time or "08:00"),
+            weekdays=str(weekdays or "1,2,3,4,5"),
+            random_delay=int(random_delay or 0),
+            last_run_at=str(last_run_at or "") if last_run_at else None,
+        )
+    return update_schedule_next_run(int(schedule_id), format_cst(next_dt))


def get_due_user_schedules(now_cst: str, limit: int = 50):
@@ -403,8 +345,6 @@ def get_due_user_schedules(now_cst: str, limit: int = 50):
    if not now_cst:
        now_cst = format_cst(get_beijing_now())
-    safe_limit = _normalize_limit(limit, 50, minimum=1)
-
    with db_pool.get_db() as conn:
        cursor = conn.cursor()
        cursor.execute(
@@ -418,7 +358,7 @@ def get_due_user_schedules(now_cst: str, limit: int = 50):
            ORDER BY us.next_run_at ASC
            LIMIT ?
            """,
-            (now_cst, safe_limit),
+            (now_cst, int(limit)),
        )
        return [dict(row) for row in cursor.fetchall()]
@@ -430,13 +370,15 @@ def create_schedule_execution_log(schedule_id, user_id, schedule_name):
    """创建定时任务执行日志"""
    with db_pool.get_db() as conn:
        cursor = conn.cursor()
+        execute_time = format_cst(get_beijing_now())
        cursor.execute(
            """
            INSERT INTO schedule_execution_logs (
                schedule_id, user_id, schedule_name, execute_time, status
            ) VALUES (?, ?, ?, ?, 'running')
            """,
-            (schedule_id, user_id, schedule_name, format_cst(get_beijing_now())),
+            (schedule_id, user_id, schedule_name, execute_time),
        )
        conn.commit()
@@ -451,11 +393,22 @@ def update_schedule_execution_log(log_id, **kwargs):
        updates = []
        params = []

-        for field in _ALLOWED_EXEC_LOG_UPDATE_FIELDS:
-            if field not in kwargs:
-                continue
-            updates.append(f"{field} = ?")
-            params.append(kwargs[field])
+        allowed_fields = [
+            "total_accounts",
+            "success_accounts",
+            "failed_accounts",
+            "total_items",
+            "total_attachments",
+            "total_screenshots",
+            "duration_seconds",
+            "status",
+            "error_message",
+        ]
+
+        for field in allowed_fields:
+            if field in kwargs:
+                updates.append(f"{field} = ?")
+                params.append(kwargs[field])

        if not updates:
            return False
@@ -471,7 +424,6 @@ def update_schedule_execution_log(log_id, **kwargs):
def get_schedule_execution_logs(schedule_id, limit=10): def get_schedule_execution_logs(schedule_id, limit=10):
"""获取定时任务执行日志""" """获取定时任务执行日志"""
try: try:
safe_limit = _normalize_limit(limit, 10, minimum=1)
with db_pool.get_db() as conn: with db_pool.get_db() as conn:
cursor = conn.cursor() cursor = conn.cursor()
cursor.execute( cursor.execute(
@@ -481,16 +433,24 @@ def get_schedule_execution_logs(schedule_id, limit=10):
ORDER BY execute_time DESC ORDER BY execute_time DESC
LIMIT ? LIMIT ?
""", """,
(schedule_id, safe_limit), (schedule_id, limit),
) )
logs = [] logs = []
for row in cursor.fetchall(): rows = cursor.fetchall()
for row in rows:
try: try:
logs.append(_map_schedule_log_row(row)) log = dict(row)
log["created_at"] = log.get("execute_time")
log["success_count"] = log.get("success_accounts", 0)
log["failed_count"] = log.get("failed_accounts", 0)
log["duration"] = log.get("duration_seconds", 0)
logs.append(log)
except Exception as e: except Exception as e:
print(f"[数据库] 处理日志行时出错: {e}") print(f"[数据库] 处理日志行时出错: {e}")
continue continue
return logs return logs
except Exception as e: except Exception as e:
print(f"[数据库] 查询定时任务日志时出错: {e}") print(f"[数据库] 查询定时任务日志时出错: {e}")
@@ -502,7 +462,6 @@ def get_schedule_execution_logs(schedule_id, limit=10):
 def get_user_all_schedule_logs(user_id, limit=50):
     """获取用户所有定时任务的执行日志"""
-    safe_limit = _normalize_limit(limit, 50, minimum=1)
     with db_pool.get_db() as conn:
         cursor = conn.cursor()
         cursor.execute(
@@ -512,7 +471,7 @@ def get_user_all_schedule_logs(user_id, limit=50):
             ORDER BY execute_time DESC
             LIMIT ?
             """,
-            (user_id, safe_limit),
+            (user_id, limit),
         )
         return [dict(row) for row in cursor.fetchall()]
@@ -534,21 +493,14 @@ def delete_schedule_logs(schedule_id, user_id):
 def clean_old_schedule_logs(days=30):
     """清理指定天数前的定时任务执行日志"""
-    safe_days = _to_int(days, 30)
-    if safe_days < 0:
-        safe_days = 0
-    cutoff_dt = get_beijing_now() - timedelta(days=safe_days)
-    cutoff_str = format_cst(cutoff_dt)
     with db_pool.get_db() as conn:
         cursor = conn.cursor()
         cursor.execute(
             """
             DELETE FROM schedule_execution_logs
-            WHERE execute_time < ?
+            WHERE execute_time < datetime('now', 'localtime', '-' || ? || ' days')
             """,
-            (cutoff_str,),
+            (days,),
         )
         conn.commit()
         return cursor.rowcount
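The replacement `clean_old_schedule_logs` moves the cutoff computation into SQLite via `datetime('now', 'localtime', '-' || ? || ' days')`, where the old code bound a pre-formatted CST string. Note that `'localtime'` follows the host's timezone, so the two forms only agree on a server running in Asia/Shanghai. A sketch of the SQL-side variant (table name illustrative):

```python
import sqlite3

def clean_old_rows(conn, days=30):
    # SQLite computes the cutoff itself: `?` binds only the day count, and
    # the '-N days' modifier string is assembled with SQL concatenation (||).
    cur = conn.execute(
        """
        DELETE FROM events
        WHERE created_at < datetime('now', 'localtime', '-' || ? || ' days')
        """,
        (days,),
    )
    return cur.rowcount

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, created_at TEXT)")
# One row 40 days old, one row from right now.
conn.execute("INSERT INTO events (created_at) VALUES (datetime('now', 'localtime', '-40 days'))")
conn.execute("INSERT INTO events (created_at) VALUES (datetime('now', 'localtime'))")
deleted = clean_old_rows(conn, days=30)
```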

View File

@@ -74,25 +74,6 @@ def ensure_schema(conn) -> None:
         """
     )
-    # Passkey 认证设备表(用户/管理员)
-    cursor.execute(
-        """
-        CREATE TABLE IF NOT EXISTS passkeys (
-            id INTEGER PRIMARY KEY AUTOINCREMENT,
-            owner_type TEXT NOT NULL,
-            owner_id INTEGER NOT NULL,
-            device_name TEXT NOT NULL,
-            credential_id TEXT UNIQUE NOT NULL,
-            public_key TEXT NOT NULL,
-            sign_count INTEGER DEFAULT 0,
-            transports TEXT DEFAULT '',
-            aaguid TEXT DEFAULT '',
-            created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
-            last_used_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
-        )
-        """
-    )
     # ==================== 安全防护:威胁检测相关表 ====================
     # 威胁事件日志表
@@ -229,7 +210,6 @@ def ensure_schema(conn) -> None:
             proxy_expire_minutes INTEGER DEFAULT 3,
             max_screenshot_concurrent INTEGER DEFAULT 3,
             max_concurrent_per_account INTEGER DEFAULT 1,
-            db_slow_query_ms INTEGER DEFAULT 120,
             schedule_weekdays TEXT DEFAULT '1,2,3,4,5,6,7',
             enable_screenshot INTEGER DEFAULT 1,
             auto_approve_enabled INTEGER DEFAULT 0,
@@ -382,13 +362,8 @@ def ensure_schema(conn) -> None:
     cursor.execute("CREATE INDEX IF NOT EXISTS idx_users_username ON users(username)")
     cursor.execute("CREATE INDEX IF NOT EXISTS idx_users_status ON users(status)")
     cursor.execute("CREATE INDEX IF NOT EXISTS idx_users_vip_expire ON users(vip_expire_time)")
-    cursor.execute("CREATE INDEX IF NOT EXISTS idx_users_email ON users(email)")
-    cursor.execute("CREATE INDEX IF NOT EXISTS idx_users_created_at ON users(created_at)")
-    cursor.execute("CREATE INDEX IF NOT EXISTS idx_users_status_created_at ON users(status, created_at)")
     cursor.execute("CREATE INDEX IF NOT EXISTS idx_login_fingerprints_user ON login_fingerprints(user_id)")
     cursor.execute("CREATE INDEX IF NOT EXISTS idx_login_ips_user ON login_ips(user_id)")
-    cursor.execute("CREATE INDEX IF NOT EXISTS idx_passkeys_owner ON passkeys(owner_type, owner_id)")
-    cursor.execute("CREATE INDEX IF NOT EXISTS idx_passkeys_owner_last_used ON passkeys(owner_type, owner_id, last_used_at)")
     cursor.execute("CREATE INDEX IF NOT EXISTS idx_threat_events_created_at ON threat_events(created_at)")
     cursor.execute("CREATE INDEX IF NOT EXISTS idx_threat_events_ip ON threat_events(ip)")
@@ -415,17 +390,12 @@ def ensure_schema(conn) -> None:
     cursor.execute("CREATE INDEX IF NOT EXISTS idx_task_logs_user_id ON task_logs(user_id)")
     cursor.execute("CREATE INDEX IF NOT EXISTS idx_task_logs_status ON task_logs(status)")
-    cursor.execute("CREATE INDEX IF NOT EXISTS idx_task_logs_status_created_at ON task_logs(status, created_at)")
     cursor.execute("CREATE INDEX IF NOT EXISTS idx_task_logs_created_at ON task_logs(created_at)")
-    cursor.execute("CREATE INDEX IF NOT EXISTS idx_task_logs_source ON task_logs(source)")
-    cursor.execute("CREATE INDEX IF NOT EXISTS idx_task_logs_source_created_at ON task_logs(source, created_at)")
     cursor.execute("CREATE INDEX IF NOT EXISTS idx_task_logs_user_date ON task_logs(user_id, created_at)")
     cursor.execute("CREATE INDEX IF NOT EXISTS idx_bug_feedbacks_user_id ON bug_feedbacks(user_id)")
     cursor.execute("CREATE INDEX IF NOT EXISTS idx_bug_feedbacks_status ON bug_feedbacks(status)")
     cursor.execute("CREATE INDEX IF NOT EXISTS idx_bug_feedbacks_created_at ON bug_feedbacks(created_at)")
-    cursor.execute("CREATE INDEX IF NOT EXISTS idx_bug_feedbacks_status_created_at ON bug_feedbacks(status, created_at)")
-    cursor.execute("CREATE INDEX IF NOT EXISTS idx_bug_feedbacks_user_created_at ON bug_feedbacks(user_id, created_at)")
     cursor.execute("CREATE INDEX IF NOT EXISTS idx_announcements_active ON announcements(is_active)")
     cursor.execute("CREATE INDEX IF NOT EXISTS idx_announcements_created_at ON announcements(created_at)")
@@ -434,15 +404,11 @@ def ensure_schema(conn) -> None:
     cursor.execute("CREATE INDEX IF NOT EXISTS idx_user_schedules_user_id ON user_schedules(user_id)")
     cursor.execute("CREATE INDEX IF NOT EXISTS idx_user_schedules_enabled ON user_schedules(enabled)")
     cursor.execute("CREATE INDEX IF NOT EXISTS idx_user_schedules_next_run ON user_schedules(next_run_at)")
-    cursor.execute("CREATE INDEX IF NOT EXISTS idx_user_schedules_enabled_next_run ON user_schedules(enabled, next_run_at)")
     # 复合索引优化
     cursor.execute("CREATE INDEX IF NOT EXISTS idx_user_schedules_user_enabled ON user_schedules(user_id, enabled)")
     cursor.execute("CREATE INDEX IF NOT EXISTS idx_schedule_execution_logs_schedule_id ON schedule_execution_logs(schedule_id)")
     cursor.execute("CREATE INDEX IF NOT EXISTS idx_schedule_execution_logs_user_id ON schedule_execution_logs(user_id)")
     cursor.execute("CREATE INDEX IF NOT EXISTS idx_schedule_execution_logs_status ON schedule_execution_logs(status)")
-    cursor.execute("CREATE INDEX IF NOT EXISTS idx_schedule_execution_logs_execute_time ON schedule_execution_logs(execute_time)")
-    cursor.execute("CREATE INDEX IF NOT EXISTS idx_schedule_execution_logs_schedule_time ON schedule_execution_logs(schedule_id, execute_time)")
-    cursor.execute("CREATE INDEX IF NOT EXISTS idx_schedule_execution_logs_user_time ON schedule_execution_logs(user_id, execute_time)")
     # 初始化VIP配置(幂等)
     try:
View File

@@ -3,82 +3,13 @@
 from __future__ import annotations

 from datetime import timedelta
-from typing import Any, Dict, Optional
+from typing import Any, Optional
+from typing import Dict

 import db_pool
 from db.utils import get_cst_now, get_cst_now_str

-_THREAT_EVENT_SELECT_COLUMNS = """
-    id,
-    threat_type,
-    score,
-    rule,
-    field_name,
-    matched,
-    value_preview,
-    ip,
-    user_id,
-    request_method,
-    request_path,
-    user_agent,
-    created_at
-"""
-
-def _normalize_page(page: int) -> int:
-    try:
-        page_i = int(page)
-    except Exception:
-        page_i = 1
-    return max(1, page_i)
-
-def _normalize_per_page(per_page: int, default: int = 20) -> int:
-    try:
-        value = int(per_page)
-    except Exception:
-        value = default
-    return max(1, min(200, value))
-
-def _normalize_limit(limit: int, default: int = 50) -> int:
-    try:
-        value = int(limit)
-    except Exception:
-        value = default
-    return max(1, min(200, value))
-
-def _row_value(row, key: str, index: int = 0, default=None):
-    if row is None:
-        return default
-    try:
-        return row[key]
-    except Exception:
-        try:
-            return row[index]
-        except Exception:
-            return default
-
-def _fetch_threat_events_history(where_clause: str, params: tuple[Any, ...], limit_i: int) -> list[dict]:
-    with db_pool.get_db() as conn:
-        cursor = conn.cursor()
-        cursor.execute(
-            f"""
-            SELECT
-                {_THREAT_EVENT_SELECT_COLUMNS}
-            FROM threat_events
-            WHERE {where_clause}
-            ORDER BY created_at DESC, id DESC
-            LIMIT ?
-            """,
-            tuple(params) + (limit_i,),
-        )
-        return [dict(r) for r in cursor.fetchall()]

 def record_login_context(user_id: int, ip_address: str, user_agent: str) -> Dict[str, bool]:
     """记录登录环境信息,返回是否新设备/新IP。"""
     user_id = int(user_id)
@@ -105,7 +36,7 @@ def record_login_context(user_id: int, ip_address: str, user_agent: str) -> Dict
                     SET last_seen = ?, last_ip = ?
                     WHERE id = ?
                 """,
-                (now_str, ip_text, _row_value(row, "id", 0)),
+                (now_str, ip_text, row["id"] if isinstance(row, dict) else row[0]),
             )
         else:
             cursor.execute(
@@ -130,7 +61,7 @@ def record_login_context(user_id: int, ip_address: str, user_agent: str) -> Dict
                     SET last_seen = ?
                     WHERE id = ?
                 """,
-                (now_str, _row_value(row, "id", 0)),
+                (now_str, row["id"] if isinstance(row, dict) else row[0]),
             )
         else:
             cursor.execute(
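The inlined `row["id"] if isinstance(row, dict) else row[0]` guards against both mapping-style and tuple-style rows. With `sqlite3.Row` as the connection's row factory, both key and positional access already work, and `isinstance(row, dict)` is False, so the expression falls through to `row[0]`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.row_factory = sqlite3.Row  # rows now support both row["col"] and row[0]
conn.execute("CREATE TABLE fp (id INTEGER PRIMARY KEY, last_seen TEXT)")
conn.execute("INSERT INTO fp (id, last_seen) VALUES (7, '2026-01-08 17:00:00')")
row = conn.execute("SELECT id, last_seen FROM fp").fetchone()

by_key = row["id"]                   # key access works
by_index = row[0]                    # positional access works too
row_is_dict = isinstance(row, dict)  # False: sqlite3.Row is not a dict subclass
```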
@@ -235,8 +166,15 @@ def _build_threat_events_where_clause(filters: Optional[dict]) -> tuple[str, lis
 def get_threat_events_list(page: int, per_page: int, filters: Optional[dict] = None) -> dict:
     """分页获取威胁事件。"""
-    page_i = _normalize_page(page)
-    per_page_i = _normalize_per_page(per_page, default=20)
+    try:
+        page_i = max(1, int(page))
+    except Exception:
+        page_i = 1
+    try:
+        per_page_i = int(per_page)
+    except Exception:
+        per_page_i = 20
+    per_page_i = max(1, min(200, per_page_i))

     where_sql, params = _build_threat_events_where_clause(filters)
     offset = (page_i - 1) * per_page_i
@@ -250,7 +188,19 @@ def get_threat_events_list(page: int, per_page: int, filters: Optional[dict] = N
         cursor.execute(
             f"""
             SELECT
-                {_THREAT_EVENT_SELECT_COLUMNS}
+                id,
+                threat_type,
+                score,
+                rule,
+                field_name,
+                matched,
+                value_preview,
+                ip,
+                user_id,
+                request_method,
+                request_path,
+                user_agent,
+                created_at
             FROM threat_events
             {where_sql}
             ORDER BY created_at DESC, id DESC
@@ -268,20 +218,75 @@ def get_ip_threat_history(ip: str, limit: int = 50) -> list[dict]:
     ip_text = str(ip or "").strip()[:64]
     if not ip_text:
         return []
-    limit_i = _normalize_limit(limit, default=50)
-    return _fetch_threat_events_history("ip = ?", (ip_text,), limit_i)
+    try:
+        limit_i = max(1, min(200, int(limit)))
+    except Exception:
+        limit_i = 50
+    with db_pool.get_db() as conn:
+        cursor = conn.cursor()
+        cursor.execute(
+            """
+            SELECT
+                id,
+                threat_type,
+                score,
+                rule,
+                field_name,
+                matched,
+                value_preview,
+                ip,
+                user_id,
+                request_method,
+                request_path,
+                user_agent,
+                created_at
+            FROM threat_events
+            WHERE ip = ?
+            ORDER BY created_at DESC, id DESC
+            LIMIT ?
+            """,
+            (ip_text, limit_i),
+        )
+        return [dict(r) for r in cursor.fetchall()]

 def get_user_threat_history(user_id: int, limit: int = 50) -> list[dict]:
     """获取用户的威胁历史(最近limit条)"""
     if user_id is None:
         return []
     try:
         user_id_int = int(user_id)
     except Exception:
         return []
-    limit_i = _normalize_limit(limit, default=50)
-    return _fetch_threat_events_history("user_id = ?", (user_id_int,), limit_i)
+    try:
+        limit_i = max(1, min(200, int(limit)))
+    except Exception:
+        limit_i = 50
+    with db_pool.get_db() as conn:
+        cursor = conn.cursor()
+        cursor.execute(
+            """
+            SELECT
+                id,
+                threat_type,
+                score,
+                rule,
+                field_name,
+                matched,
+                value_preview,
+                ip,
+                user_id,
+                request_method,
+                request_path,
+                user_agent,
+                created_at
+            FROM threat_events
+            WHERE user_id = ?
+            ORDER BY created_at DESC, id DESC
+            LIMIT ?
+            """,
+            (user_id_int, limit_i),
+        )
+        return [dict(r) for r in cursor.fetchall()]
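Both sides of this file clamp `page`/`per_page`/`limit` into [1, 200] before they reach LIMIT/OFFSET; the commit only moves the clamp from shared helpers to inline try/except blocks. The pattern in isolation:

```python
def clamp_int(value, default, lo=1, hi=200):
    # Fall back to the default on anything non-numeric, then clamp to [lo, hi]
    # so attacker-supplied paging values cannot force huge result sets.
    try:
        parsed = int(value)
    except (TypeError, ValueError):
        parsed = default
    return max(lo, min(hi, parsed))
```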

View File

@@ -2,135 +2,12 @@
 # -*- coding: utf-8 -*-
 from __future__ import annotations

-from datetime import datetime, timedelta
+from datetime import datetime
+
+import pytz

 import db_pool
-from db.utils import get_cst_now, get_cst_now_str, sanitize_sql_like_pattern
+from db.utils import sanitize_sql_like_pattern

-_TASK_STATS_SELECT_SQL = """
-    SELECT
-        COUNT(*) as total_tasks,
-        SUM(CASE WHEN status = 'success' THEN 1 ELSE 0 END) as success_tasks,
-        SUM(CASE WHEN status = 'failed' THEN 1 ELSE 0 END) as failed_tasks,
-        SUM(total_items) as total_items,
-        SUM(total_attachments) as total_attachments
-    FROM task_logs
-"""
-
-_USER_RUN_STATS_SELECT_SQL = """
-    SELECT
-        SUM(CASE WHEN status = 'success' THEN 1 ELSE 0 END) as completed,
-        SUM(CASE WHEN status = 'failed' THEN 1 ELSE 0 END) as failed,
-        SUM(total_items) as total_items,
-        SUM(total_attachments) as total_attachments
-    FROM task_logs
-"""
-
-def _build_day_bounds(date_filter: str) -> tuple[str | None, str | None]:
-    """将 YYYY-MM-DD 转换为 [day_start, day_end) 区间。"""
-    try:
-        day_start = datetime.strptime(str(date_filter), "%Y-%m-%d")
-    except Exception:
-        return None, None
-    day_end = day_start + timedelta(days=1)
-    return day_start.strftime("%Y-%m-%d %H:%M:%S"), day_end.strftime("%Y-%m-%d %H:%M:%S")
-
-def _normalize_int(value, default: int, *, minimum: int | None = None) -> int:
-    try:
-        parsed = int(value)
-    except Exception:
-        parsed = default
-    if minimum is not None and parsed < minimum:
-        return minimum
-    return parsed
-
-def _stat_value(row, key: str) -> int:
-    try:
-        value = row[key] if row else 0
-    except Exception:
-        value = 0
-    return int(value or 0)
-
-def _build_task_logs_where_sql(
-    *,
-    date_filter=None,
-    status_filter=None,
-    source_filter=None,
-    user_id_filter=None,
-    account_filter=None,
-) -> tuple[str, list]:
-    where_clauses = ["1=1"]
-    params = []
-    if date_filter:
-        day_start, day_end = _build_day_bounds(date_filter)
-        if day_start and day_end:
-            where_clauses.append("tl.created_at >= ? AND tl.created_at < ?")
-            params.extend([day_start, day_end])
-        else:
-            where_clauses.append("date(tl.created_at) = ?")
-            params.append(date_filter)
-    if status_filter:
-        where_clauses.append("tl.status = ?")
-        params.append(status_filter)
-    if source_filter:
-        source_filter = str(source_filter or "").strip()
-        if source_filter == "user_scheduled":
-            where_clauses.append("tl.source >= ? AND tl.source < ?")
-            params.extend(["user_scheduled:", "user_scheduled;"])
-        elif source_filter.endswith("*"):
-            prefix = source_filter[:-1]
-            safe_prefix = sanitize_sql_like_pattern(prefix)
-            where_clauses.append("tl.source LIKE ? ESCAPE '\\\\'")
-            params.append(f"{safe_prefix}%")
-        else:
-            where_clauses.append("tl.source = ?")
-            params.append(source_filter)
-    if user_id_filter:
-        where_clauses.append("tl.user_id = ?")
-        params.append(user_id_filter)
-    if account_filter:
-        safe_filter = sanitize_sql_like_pattern(account_filter)
-        where_clauses.append("tl.username LIKE ? ESCAPE '\\\\'")
-        params.append(f"%{safe_filter}%")
-    return " AND ".join(where_clauses), params
-
-def _fetch_task_stats_row(cursor, *, where_clause: str = "", params: tuple | list = ()) -> dict:
-    sql = _TASK_STATS_SELECT_SQL
-    if where_clause:
-        sql = f"{sql}\nWHERE {where_clause}"
-    cursor.execute(sql, params)
-    row = cursor.fetchone()
-    return {
-        "total_tasks": _stat_value(row, "total_tasks"),
-        "success_tasks": _stat_value(row, "success_tasks"),
-        "failed_tasks": _stat_value(row, "failed_tasks"),
-        "total_items": _stat_value(row, "total_items"),
-        "total_attachments": _stat_value(row, "total_attachments"),
-    }
-
-def _fetch_user_run_stats_row(cursor, *, where_clause: str, params: tuple | list) -> dict:
-    sql = f"{_USER_RUN_STATS_SELECT_SQL}\nWHERE {where_clause}"
-    cursor.execute(sql, params)
-    row = cursor.fetchone()
-    return {
-        "completed": _stat_value(row, "completed"),
-        "failed": _stat_value(row, "failed"),
-        "total_items": _stat_value(row, "total_items"),
-        "total_attachments": _stat_value(row, "total_attachments"),
-    }

 def create_task_log(
@@ -148,6 +25,8 @@ def create_task_log(
     """创建任务日志记录"""
     with db_pool.get_db() as conn:
         cursor = conn.cursor()
+        cst_tz = pytz.timezone("Asia/Shanghai")
+        cst_time = datetime.now(cst_tz).strftime("%Y-%m-%d %H:%M:%S")
         cursor.execute(
             """
@@ -166,7 +45,7 @@ def create_task_log(
                 total_attachments,
                 error_message,
                 duration,
-                get_cst_now_str(),
+                cst_time,
                 source,
             ),
         )
@@ -185,27 +64,54 @@ def get_task_logs(
     account_filter=None,
 ):
     """获取任务日志列表(支持分页和多种筛选)"""
-    limit = _normalize_int(limit, 100, minimum=1)
-    offset = _normalize_int(offset, 0, minimum=0)
     with db_pool.get_db() as conn:
         cursor = conn.cursor()
-        where_sql, params = _build_task_logs_where_sql(
-            date_filter=date_filter,
-            status_filter=status_filter,
-            source_filter=source_filter,
-            user_id_filter=user_id_filter,
-            account_filter=account_filter,
-        )
+        where_clauses = ["1=1"]
+        params = []
+
+        if date_filter:
+            where_clauses.append("date(tl.created_at) = ?")
+            params.append(date_filter)
+
+        if status_filter:
+            where_clauses.append("tl.status = ?")
+            params.append(status_filter)
+
+        if source_filter:
+            source_filter = str(source_filter or "").strip()
+            # 兼容“虚拟来源”:用于筛选 user_scheduled:batch_xxx 这类动态值
+            if source_filter == "user_scheduled":
+                where_clauses.append("tl.source LIKE ? ESCAPE '\\\\'")
+                params.append("user_scheduled:%")
+            elif source_filter.endswith("*"):
+                prefix = source_filter[:-1]
+                safe_prefix = sanitize_sql_like_pattern(prefix)
+                where_clauses.append("tl.source LIKE ? ESCAPE '\\\\'")
+                params.append(f"{safe_prefix}%")
+            else:
+                where_clauses.append("tl.source = ?")
+                params.append(source_filter)
+
+        if user_id_filter:
+            where_clauses.append("tl.user_id = ?")
+            params.append(user_id_filter)
+
+        if account_filter:
+            safe_filter = sanitize_sql_like_pattern(account_filter)
+            where_clauses.append("tl.username LIKE ? ESCAPE '\\\\'")
+            params.append(f"%{safe_filter}%")
+
+        where_sql = " AND ".join(where_clauses)

         count_sql = f"""
             SELECT COUNT(*) as total
             FROM task_logs tl
+            LEFT JOIN users u ON tl.user_id = u.id
             WHERE {where_sql}
         """
         cursor.execute(count_sql, params)
-        total = _stat_value(cursor.fetchone(), "total")
+        total = cursor.fetchone()["total"]

         data_sql = f"""
             SELECT
@@ -217,10 +123,9 @@ def get_task_logs(
             ORDER BY tl.created_at DESC
             LIMIT ? OFFSET ?
         """
-        data_params = list(params)
-        data_params.extend([limit, offset])
-        cursor.execute(data_sql, data_params)
+        params.extend([limit, offset])
+        cursor.execute(data_sql, params)
         logs = [dict(row) for row in cursor.fetchall()]

         return {"logs": logs, "total": total}
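`get_task_logs` routes user text through `sanitize_sql_like_pattern` before using it in LIKE with an ESCAPE clause, so literal `%` and `_` in filters cannot act as wildcards. A sketch of that sanitize-then-ESCAPE pattern, assuming a single-backslash escape character (the real helper lives in `db.utils` and may differ):

```python
import sqlite3

def escape_like(pattern: str) -> str:
    # Escape the escape character first, then the two LIKE wildcards.
    return pattern.replace("\\", "\\\\").replace("%", "\\%").replace("_", "\\_")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (source TEXT)")
conn.executemany(
    "INSERT INTO t VALUES (?)",
    [("user_scheduled:batch_1",), ("userXscheduled:batch_2",)],
)

# Without escaping, the '_' in the filter would match any character and
# the prefix query would also hit 'userXscheduled:batch_2'.
prefix = escape_like("user_scheduled:")
rows = conn.execute(
    "SELECT source FROM t WHERE source LIKE ? ESCAPE '\\'",
    (prefix + "%",),
).fetchall()
```

Note that SQLite requires the ESCAPE expression to be a single character, which is why the escape character itself must also be doubled inside the pattern.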
@@ -228,39 +133,61 @@ def get_task_logs(
 def get_task_stats(date_filter=None):
     """获取任务统计信息"""
-    if date_filter is None:
-        date_filter = get_cst_now().strftime("%Y-%m-%d")
-    day_start, day_end = _build_day_bounds(date_filter)
     with db_pool.get_db() as conn:
         cursor = conn.cursor()
-        if day_start and day_end:
-            today_stats = _fetch_task_stats_row(
-                cursor,
-                where_clause="created_at >= ? AND created_at < ?",
-                params=(day_start, day_end),
-            )
-        else:
-            today_stats = _fetch_task_stats_row(
-                cursor,
-                where_clause="date(created_at) = ?",
-                params=(date_filter,),
-            )
-
-        total_stats = _fetch_task_stats_row(cursor)
-
-        return {"today": today_stats, "total": total_stats}
+        cst_tz = pytz.timezone("Asia/Shanghai")
+        if date_filter is None:
+            date_filter = datetime.now(cst_tz).strftime("%Y-%m-%d")
+
+        cursor.execute(
+            """
+            SELECT
+                COUNT(*) as total_tasks,
+                SUM(CASE WHEN status = 'success' THEN 1 ELSE 0 END) as success_tasks,
+                SUM(CASE WHEN status = 'failed' THEN 1 ELSE 0 END) as failed_tasks,
+                SUM(total_items) as total_items,
+                SUM(total_attachments) as total_attachments
+            FROM task_logs
+            WHERE date(created_at) = ?
+            """,
+            (date_filter,),
+        )
+        today_stats = cursor.fetchone()
+
+        cursor.execute(
+            """
+            SELECT
+                COUNT(*) as total_tasks,
+                SUM(CASE WHEN status = 'success' THEN 1 ELSE 0 END) as success_tasks,
+                SUM(CASE WHEN status = 'failed' THEN 1 ELSE 0 END) as failed_tasks,
+                SUM(total_items) as total_items,
+                SUM(total_attachments) as total_attachments
+            FROM task_logs
+            """
+        )
+        total_stats = cursor.fetchone()
+
+        return {
+            "today": {
+                "total_tasks": today_stats["total_tasks"] or 0,
+                "success_tasks": today_stats["success_tasks"] or 0,
+                "failed_tasks": today_stats["failed_tasks"] or 0,
+                "total_items": today_stats["total_items"] or 0,
+                "total_attachments": today_stats["total_attachments"] or 0,
+            },
+            "total": {
+                "total_tasks": total_stats["total_tasks"] or 0,
+                "success_tasks": total_stats["success_tasks"] or 0,
+                "failed_tasks": total_stats["failed_tasks"] or 0,
+                "total_items": total_stats["total_items"] or 0,
+                "total_attachments": total_stats["total_attachments"] or 0,
+            },
+        }

 def delete_old_task_logs(days=30, batch_size=1000):
     """删除N天前的任务日志(分批删除,避免长时间锁表)"""
-    days = _normalize_int(days, 30, minimum=0)
-    batch_size = _normalize_int(batch_size, 1000, minimum=1)
-    cutoff = (get_cst_now() - timedelta(days=days)).strftime("%Y-%m-%d %H:%M:%S")
     total_deleted = 0
     while True:
         with db_pool.get_db() as conn:
@@ -270,16 +197,16 @@ def delete_old_task_logs(days=30, batch_size=1000):
                     DELETE FROM task_logs
                     WHERE rowid IN (
                         SELECT rowid FROM task_logs
-                        WHERE created_at < ?
+                        WHERE created_at < datetime('now', 'localtime', '-' || ? || ' days')
                         LIMIT ?
                     )
                 """,
-                (cutoff, batch_size),
+                (days, batch_size),
             )
             deleted = cursor.rowcount
             conn.commit()
-            if deleted <= 0:
+            if deleted == 0:
                 break
             total_deleted += deleted
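`delete_old_task_logs` deletes via `rowid IN (SELECT rowid ... LIMIT ?)` in a loop so that each transaction stays short and the write lock is released between batches. The loop skeleton, with an illustrative `done` flag standing in for the date cutoff:

```python
import sqlite3

def delete_in_batches(conn, batch_size=1000):
    # Each iteration removes at most batch_size rows and commits, so no
    # single transaction holds SQLite's write lock for long.
    total = 0
    while True:
        cur = conn.execute(
            """
            DELETE FROM task_logs
            WHERE rowid IN (
                SELECT rowid FROM task_logs WHERE done = 1 LIMIT ?
            )
            """,
            (batch_size,),
        )
        conn.commit()
        if cur.rowcount <= 0:
            break
        total += cur.rowcount
    return total

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE task_logs (id INTEGER PRIMARY KEY, done INTEGER)")
conn.executemany("INSERT INTO task_logs (done) VALUES (?)", [(1,)] * 5 + [(0,)] * 2)
removed = delete_in_batches(conn, batch_size=2)
```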
@@ -288,23 +215,31 @@ def delete_old_task_logs(days=30, batch_size=1000):
 def get_user_run_stats(user_id, date_filter=None):
     """获取用户的运行统计信息"""
-    if date_filter is None:
-        date_filter = get_cst_now().strftime("%Y-%m-%d")
-    day_start, day_end = _build_day_bounds(date_filter)
     with db_pool.get_db() as conn:
+        cst_tz = pytz.timezone("Asia/Shanghai")
         cursor = conn.cursor()
-        if day_start and day_end:
-            return _fetch_user_run_stats_row(
-                cursor,
-                where_clause="user_id = ? AND created_at >= ? AND created_at < ?",
-                params=(user_id, day_start, day_end),
-            )
-        return _fetch_user_run_stats_row(
-            cursor,
-            where_clause="user_id = ? AND date(created_at) = ?",
-            params=(user_id, date_filter),
-        )
+        if date_filter is None:
+            date_filter = datetime.now(cst_tz).strftime("%Y-%m-%d")
+
+        cursor.execute(
+            """
+            SELECT
+                SUM(CASE WHEN status = 'success' THEN 1 ELSE 0 END) as completed,
+                SUM(CASE WHEN status = 'failed' THEN 1 ELSE 0 END) as failed,
+                SUM(total_items) as total_items,
+                SUM(total_attachments) as total_attachments
+            FROM task_logs
+            WHERE user_id = ? AND date(created_at) = ?
+            """,
+            (user_id, date_filter),
+        )
+        stats = cursor.fetchone()
+
+        return {
+            "completed": stats["completed"] or 0,
+            "failed": stats["failed"] or 0,
+            "total_items": stats["total_items"] or 0,
+            "total_attachments": stats["total_attachments"] or 0,
+        }
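The `or 0` normalization in these stats functions exists because SQLite's SUM returns NULL (None in Python) when no rows match, while COUNT returns 0, and the CASE-counting idiom tallies matches in a single pass. Demonstrated on an empty table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.row_factory = sqlite3.Row
conn.execute("CREATE TABLE task_logs (status TEXT, total_items INTEGER)")

row = conn.execute(
    """
    SELECT
        SUM(CASE WHEN status = 'success' THEN 1 ELSE 0 END) AS completed,
        SUM(total_items) AS total_items
    FROM task_logs
    """
).fetchone()

# On an empty table both aggregates come back NULL, hence `or 0`.
completed = row["completed"] or 0
total_items = row["total_items"] or 0
```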

View File

@@ -16,62 +16,8 @@ from password_utils import (
     verify_password_bcrypt,
     verify_password_sha256,
 )

 logger = get_logger(__name__)

-_CST_TZ = pytz.timezone("Asia/Shanghai")
-_PERMANENT_VIP_EXPIRE = "2099-12-31 23:59:59"
-
-_USER_LOOKUP_SQL = {
-    "id": "SELECT * FROM users WHERE id = ?",
-    "username": "SELECT * FROM users WHERE username = ?",
-}
-
-_USER_ADMIN_SAFE_COLUMNS = (
-    "id",
-    "username",
-    "email",
-    "email_verified",
-    "email_notify_enabled",
-    "kdocs_unit",
-    "kdocs_auto_upload",
-    "status",
-    "vip_expire_time",
-    "created_at",
-    "approved_at",
-)
-_USER_ADMIN_SAFE_COLUMNS_SQL = ", ".join(_USER_ADMIN_SAFE_COLUMNS)
-
-def _row_to_dict(row):
-    return dict(row) if row else None
-
-def _get_user_by_field(field_name: str, field_value):
-    query_sql = _USER_LOOKUP_SQL.get(str(field_name or ""))
-    if not query_sql:
-        raise ValueError(f"unsupported user lookup field: {field_name}")
-    with db_pool.get_db() as conn:
-        cursor = conn.cursor()
-        cursor.execute(query_sql, (field_value,))
-        return _row_to_dict(cursor.fetchone())
-
-def _parse_cst_datetime(datetime_str: str | None):
-    if not datetime_str:
-        return None
-    try:
-        naive_dt = datetime.strptime(str(datetime_str), "%Y-%m-%d %H:%M:%S")
-        return _CST_TZ.localize(naive_dt)
-    except Exception:
-        return None
-
-def _format_vip_expire(days: int, *, base_dt: datetime | None = None) -> str:
-    if int(days) == 999999:
-        return _PERMANENT_VIP_EXPIRE
-    if base_dt is None:
-        base_dt = datetime.now(_CST_TZ)
-    return (base_dt + timedelta(days=int(days))).strftime("%Y-%m-%d %H:%M:%S")

 def get_vip_config():
     """获取VIP配置"""
@@ -86,12 +32,13 @@ def set_default_vip_days(days):
     """设置默认VIP天数"""
     with db_pool.get_db() as conn:
         cursor = conn.cursor()
+        cst_time = get_cst_now_str()
         cursor.execute(
             """
             INSERT OR REPLACE INTO vip_config (id, default_vip_days, updated_at)
             VALUES (1, ?, ?)
             """,
-            (days, get_cst_now_str()),
+            (days, cst_time),
         )
         conn.commit()
         return True
@@ -100,8 +47,14 @@ def set_default_vip_days(days):
 def set_user_vip(user_id, days):
     """设置用户VIP - days: 7=一周, 30=一个月, 365=一年, 999999=永久"""
     with db_pool.get_db() as conn:
+        cst_tz = pytz.timezone("Asia/Shanghai")
         cursor = conn.cursor()
-        expire_time = _format_vip_expire(days)
+        if days == 999999:
+            expire_time = "2099-12-31 23:59:59"
+        else:
+            expire_time = (datetime.now(cst_tz) + timedelta(days=days)).strftime("%Y-%m-%d %H:%M:%S")
         cursor.execute("UPDATE users SET vip_expire_time = ? WHERE id = ?", (expire_time, user_id))
         conn.commit()
         return cursor.rowcount > 0
@@ -110,26 +63,29 @@ def set_user_vip(user_id, days):
 def extend_user_vip(user_id, days):
     """延长用户VIP时间"""
     user = get_user_by_id(user_id)
+    cst_tz = pytz.timezone("Asia/Shanghai")
     if not user:
         return False

-    current_expire = user.get("vip_expire_time")
-    now_dt = datetime.now(_CST_TZ)
-    if current_expire and current_expire != _PERMANENT_VIP_EXPIRE:
-        expire_time = _parse_cst_datetime(current_expire)
-        if expire_time is not None:
-            if expire_time < now_dt:
-                expire_time = now_dt
-            new_expire = _format_vip_expire(days, base_dt=expire_time)
-        else:
-            logger.warning("解析VIP过期时间失败,使用当前时间")
-            new_expire = _format_vip_expire(days, base_dt=now_dt)
-    else:
-        new_expire = _format_vip_expire(days, base_dt=now_dt)
-
     with db_pool.get_db() as conn:
         cursor = conn.cursor()
+        current_expire = user.get("vip_expire_time")
+
+        if current_expire and current_expire != "2099-12-31 23:59:59":
+            try:
+                expire_time_naive = datetime.strptime(current_expire, "%Y-%m-%d %H:%M:%S")
+                expire_time = cst_tz.localize(expire_time_naive)
+                now = datetime.now(cst_tz)
+                if expire_time < now:
+                    expire_time = now
+                new_expire = (expire_time + timedelta(days=days)).strftime("%Y-%m-%d %H:%M:%S")
+            except (ValueError, AttributeError) as e:
+                logger.warning(f"解析VIP过期时间失败: {e}, 使用当前时间")
+                new_expire = (datetime.now(cst_tz) + timedelta(days=days)).strftime("%Y-%m-%d %H:%M:%S")
+        else:
+            new_expire = (datetime.now(cst_tz) + timedelta(days=days)).strftime("%Y-%m-%d %H:%M:%S")
+
         cursor.execute("UPDATE users SET vip_expire_time = ? WHERE id = ?", (new_expire, user_id))
         conn.commit()
         return cursor.rowcount > 0
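Both versions of `extend_user_vip` implement the same rule: parse the stored naive CST string, localize it with pytz, treat an already-expired base as now, then add the days. Condensed into a pure function for clarity (the `extend_expire` name is mine, not the codebase's, and pytz is a third-party dependency the project already uses):

```python
from datetime import datetime, timedelta
from typing import Optional

import pytz

CST = pytz.timezone("Asia/Shanghai")

def extend_expire(current: Optional[str], days: int, now: datetime) -> str:
    # Stored expiry strings are naive CST; localize before comparing.
    base = now
    if current:
        try:
            parsed = CST.localize(datetime.strptime(current, "%Y-%m-%d %H:%M:%S"))
            base = max(parsed, now)  # an expired VIP extends from now, not the past
        except ValueError:
            pass
    return (base + timedelta(days=days)).strftime("%Y-%m-%d %H:%M:%S")
```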
@@ -149,49 +105,45 @@ def is_user_vip(user_id):
注意数据库中存储的时间统一使用CSTAsia/Shanghai时区 注意数据库中存储的时间统一使用CSTAsia/Shanghai时区
""" """
cst_tz = pytz.timezone("Asia/Shanghai")
user = get_user_by_id(user_id) user = get_user_by_id(user_id)
if not user:
if not user or not user.get("vip_expire_time"):
return False return False
vip_expire_time = user.get("vip_expire_time") try:
if not vip_expire_time: expire_time_naive = datetime.strptime(user["vip_expire_time"], "%Y-%m-%d %H:%M:%S")
expire_time = cst_tz.localize(expire_time_naive)
now = datetime.now(cst_tz)
return now < expire_time
except (ValueError, AttributeError) as e:
logger.warning(f"检查VIP状态失败 (user_id={user_id}): {e}")
return False return False
expire_time = _parse_cst_datetime(vip_expire_time)
if expire_time is None:
logger.warning(f"检查VIP状态失败 (user_id={user_id}): 无法解析时间")
return False
return datetime.now(_CST_TZ) < expire_time
 def get_user_vip_info(user_id):
     """获取用户VIP信息"""
+    cst_tz = pytz.timezone("Asia/Shanghai")
     user = get_user_by_id(user_id)
     if not user:
         return {"is_vip": False, "expire_time": None, "days_left": 0, "username": ""}
     vip_expire_time = user.get("vip_expire_time")
-    username = user.get("username", "")
     if not vip_expire_time:
-        return {"is_vip": False, "expire_time": None, "days_left": 0, "username": username}
-    expire_time = _parse_cst_datetime(vip_expire_time)
-    if expire_time is None:
-        logger.warning("VIP信息获取错误: 无法解析过期时间")
-        return {"is_vip": False, "expire_time": None, "days_left": 0, "username": username}
-    now_dt = datetime.now(_CST_TZ)
-    is_vip = now_dt < expire_time
-    days_left = (expire_time - now_dt).days if is_vip else 0
-    return {
-        "username": username,
-        "is_vip": is_vip,
-        "expire_time": vip_expire_time,
-        "days_left": max(0, days_left),
-    }
+        return {"is_vip": False, "expire_time": None, "days_left": 0, "username": user.get("username", "")}
+    try:
+        expire_time_naive = datetime.strptime(vip_expire_time, "%Y-%m-%d %H:%M:%S")
+        expire_time = cst_tz.localize(expire_time_naive)
+        now = datetime.now(cst_tz)
+        is_vip = now < expire_time
+        days_left = (expire_time - now).days if is_vip else 0
+        return {"username": user.get("username", ""), "is_vip": is_vip, "expire_time": vip_expire_time, "days_left": max(0, days_left)}
+    except Exception as e:
+        logger.warning(f"VIP信息获取错误: {e}")
+        return {"is_vip": False, "expire_time": None, "days_left": 0, "username": user.get("username", "")}
 # ==================== 用户相关 ====================
@@ -199,6 +151,8 @@ def get_user_vip_info(user_id):
 def create_user(username, password, email=""):
     """创建新用户(默认直接通过,赠送默认VIP)"""
+    cst_tz = pytz.timezone("Asia/Shanghai")
     with db_pool.get_db() as conn:
         cursor = conn.cursor()
         password_hash = hash_password_bcrypt(password)
@@ -206,8 +160,12 @@ def create_user(username, password, email=""):
         default_vip_days = get_vip_config()["default_vip_days"]
         vip_expire_time = None
-        if int(default_vip_days or 0) > 0:
-            vip_expire_time = _format_vip_expire(int(default_vip_days))
+        if default_vip_days > 0:
+            if default_vip_days == 999999:
+                vip_expire_time = "2099-12-31 23:59:59"
+            else:
+                vip_expire_time = (datetime.now(cst_tz) + timedelta(days=default_vip_days)).strftime("%Y-%m-%d %H:%M:%S")
         try:
             cursor.execute(
@@ -252,28 +210,28 @@ def verify_user(username, password):
 def get_user_by_id(user_id):
     """根据ID获取用户"""
-    return _get_user_by_field("id", user_id)
+    with db_pool.get_db() as conn:
+        cursor = conn.cursor()
+        cursor.execute("SELECT * FROM users WHERE id = ?", (user_id,))
+        user = cursor.fetchone()
+        return dict(user) if user else None

 def get_user_kdocs_settings(user_id):
     """获取用户的金山文档配置"""
-    with db_pool.get_db() as conn:
-        cursor = conn.cursor()
-        cursor.execute("SELECT kdocs_unit, kdocs_auto_upload FROM users WHERE id = ?", (user_id,))
-        row = cursor.fetchone()
-        if not row:
-            return None
-        return {
-            "kdocs_unit": (row["kdocs_unit"] or "") if isinstance(row, dict) else (row[0] or ""),
-            "kdocs_auto_upload": 1 if ((row["kdocs_auto_upload"] if isinstance(row, dict) else row[1]) or 0) else 0,
-        }
+    user = get_user_by_id(user_id)
+    if not user:
+        return None
+    return {
+        "kdocs_unit": user.get("kdocs_unit") or "",
+        "kdocs_auto_upload": 1 if user.get("kdocs_auto_upload") else 0,
+    }
 def update_user_kdocs_settings(user_id, *, kdocs_unit=None, kdocs_auto_upload=None) -> bool:
     """更新用户的金山文档配置"""
     updates = []
     params = []
     if kdocs_unit is not None:
         updates.append("kdocs_unit = ?")
         params.append(kdocs_unit)
@@ -294,66 +252,26 @@ def update_user_kdocs_settings(user_id, *, kdocs_unit=None, kdocs_auto_upload=No
 def get_user_by_username(username):
     """根据用户名获取用户"""
-    return _get_user_by_field("username", username)
-
-def _normalize_limit_offset(limit, offset, *, max_limit: int = 500):
-    normalized_limit = None
-    if limit is not None:
-        try:
-            normalized_limit = int(limit)
-        except (TypeError, ValueError):
-            normalized_limit = 50
-        normalized_limit = max(1, min(normalized_limit, max_limit))
-    try:
-        normalized_offset = int(offset or 0)
-    except (TypeError, ValueError):
-        normalized_offset = 0
-    normalized_offset = max(0, normalized_offset)
-    return normalized_limit, normalized_offset
-
-def get_users_count(*, status: str | None = None) -> int:
-    with db_pool.get_db() as conn:
-        cursor = conn.cursor()
-        if status:
-            cursor.execute("SELECT COUNT(*) AS count FROM users WHERE status = ?", (status,))
-        else:
-            cursor.execute("SELECT COUNT(*) AS count FROM users")
-        row = cursor.fetchone()
-        return int((row["count"] if row else 0) or 0)
+    with db_pool.get_db() as conn:
+        cursor = conn.cursor()
+        cursor.execute("SELECT * FROM users WHERE username = ?", (username,))
+        user = cursor.fetchone()
+        return dict(user) if user else None

-def get_all_users(*, limit=None, offset=0):
+def get_all_users():
     """获取所有用户"""
-    limit, offset = _normalize_limit_offset(limit, offset)
     with db_pool.get_db() as conn:
         cursor = conn.cursor()
-        sql = f"SELECT {_USER_ADMIN_SAFE_COLUMNS_SQL} FROM users ORDER BY created_at DESC"
-        params = []
-        if limit is not None:
-            sql += " LIMIT ? OFFSET ?"
-            params.extend([limit, offset])
-        cursor.execute(sql, params)
+        cursor.execute("SELECT * FROM users ORDER BY created_at DESC")
         return [dict(row) for row in cursor.fetchall()]

-def get_pending_users(*, limit=None, offset=0):
+def get_pending_users():
     """获取待审核用户"""
-    limit, offset = _normalize_limit_offset(limit, offset)
     with db_pool.get_db() as conn:
         cursor = conn.cursor()
-        sql = (
-            f"SELECT {_USER_ADMIN_SAFE_COLUMNS_SQL} "
-            "FROM users WHERE status = 'pending' ORDER BY created_at DESC"
-        )
-        params = []
-        if limit is not None:
-            sql += " LIMIT ? OFFSET ?"
-            params.extend([limit, offset])
-        cursor.execute(sql, params)
+        cursor.execute("SELECT * FROM users WHERE status = 'pending' ORDER BY created_at DESC")
         return [dict(row) for row in cursor.fetchall()]
@@ -361,13 +279,14 @@ def approve_user(user_id):
     """审核通过用户"""
     with db_pool.get_db() as conn:
         cursor = conn.cursor()
+        cst_time = get_cst_now_str()
         cursor.execute(
             """
             UPDATE users
             SET status = 'approved', approved_at = ?
             WHERE id = ?
             """,
-            (get_cst_now_str(), user_id),
+            (cst_time, user_id),
         )
         conn.commit()
         return cursor.rowcount > 0
@@ -396,5 +315,5 @@ def get_user_stats(user_id):
     with db_pool.get_db() as conn:
         cursor = conn.cursor()
         cursor.execute("SELECT COUNT(*) as count FROM accounts WHERE user_id = ?", (user_id,))
-        row = cursor.fetchone()
-        return {"account_count": int((row["count"] if row else 0) or 0)}
+        account_count = cursor.fetchone()["count"]
+        return {"account_count": account_count}
View File
@@ -7,149 +7,8 @@
import sqlite3 import sqlite3
import threading import threading
from queue import Queue, Empty
import time import time
from queue import Empty, Full, Queue
from app_config import get_config
from app_logger import get_logger
logger = get_logger("database")
config = get_config()
DB_CONNECT_TIMEOUT_SECONDS = max(1, int(getattr(config, "DB_CONNECT_TIMEOUT_SECONDS", 10)))
DB_BUSY_TIMEOUT_MS = max(1000, int(getattr(config, "DB_BUSY_TIMEOUT_MS", 10000)))
DB_CACHE_SIZE_KB = max(1024, int(getattr(config, "DB_CACHE_SIZE_KB", 8192)))
DB_WAL_AUTOCHECKPOINT_PAGES = max(100, int(getattr(config, "DB_WAL_AUTOCHECKPOINT_PAGES", 1000)))
DB_MMAP_SIZE_MB = max(0, int(getattr(config, "DB_MMAP_SIZE_MB", 256)))
DB_LOCK_RETRY_COUNT = max(0, int(getattr(config, "DB_LOCK_RETRY_COUNT", 3)))
DB_LOCK_RETRY_BASE_MS = max(10, int(getattr(config, "DB_LOCK_RETRY_BASE_MS", 50)))
DB_SLOW_QUERY_MS = max(0, int(getattr(config, "DB_SLOW_QUERY_MS", 120)))
DB_SLOW_QUERY_SQL_MAX_LEN = max(80, int(getattr(config, "DB_SLOW_QUERY_SQL_MAX_LEN", 240)))
_slow_query_runtime_lock = threading.Lock()
_slow_query_runtime_threshold_ms = DB_SLOW_QUERY_MS
_slow_query_runtime_sql_max_len = DB_SLOW_QUERY_SQL_MAX_LEN
def _get_slow_query_runtime_values() -> tuple[int, int]:
with _slow_query_runtime_lock:
return int(_slow_query_runtime_threshold_ms), int(_slow_query_runtime_sql_max_len)
def get_slow_query_runtime() -> dict:
threshold_ms, sql_max_len = _get_slow_query_runtime_values()
return {"threshold_ms": threshold_ms, "sql_max_len": sql_max_len}
def configure_slow_query_runtime(*, threshold_ms=None, sql_max_len=None) -> dict:
global _slow_query_runtime_threshold_ms, _slow_query_runtime_sql_max_len
with _slow_query_runtime_lock:
if threshold_ms is not None:
_slow_query_runtime_threshold_ms = max(0, int(threshold_ms))
if sql_max_len is not None:
_slow_query_runtime_sql_max_len = max(80, int(sql_max_len))
runtime_threshold_ms = int(_slow_query_runtime_threshold_ms)
runtime_sql_max_len = int(_slow_query_runtime_sql_max_len)
try:
from services.slow_sql_metrics import configure_slow_sql_runtime
configure_slow_sql_runtime(
threshold_ms=runtime_threshold_ms,
sql_max_len=runtime_sql_max_len,
)
except Exception:
pass
return {"threshold_ms": runtime_threshold_ms, "sql_max_len": runtime_sql_max_len}
def _is_lock_conflict_error(error: sqlite3.OperationalError) -> bool:
message = str(error or "").lower()
return ("locked" in message) or ("busy" in message)
def _compact_sql(sql: str) -> str:
_, sql_max_len = _get_slow_query_runtime_values()
statement = " ".join(str(sql or "").split())
if len(statement) <= sql_max_len:
return statement
return statement[: sql_max_len - 3] + "..."
def _describe_params(parameters) -> str:
if parameters is None:
return "none"
if isinstance(parameters, dict):
return f"dict[{len(parameters)}]"
if isinstance(parameters, (list, tuple)):
return f"{type(parameters).__name__}[{len(parameters)}]"
return type(parameters).__name__
class TracedCursor:
"""带慢查询检测的游标包装器"""
def __init__(self, cursor, on_query_executed):
self._cursor = cursor
self._on_query_executed = on_query_executed
def _trace(self, sql, parameters, execute_fn):
start = time.perf_counter()
try:
execute_fn()
finally:
elapsed_ms = (time.perf_counter() - start) * 1000.0
try:
self._on_query_executed(sql, parameters, elapsed_ms)
except Exception:
pass
def execute(self, sql, parameters=None):
if parameters is None:
self._trace(sql, None, lambda: self._cursor.execute(sql))
else:
self._trace(sql, parameters, lambda: self._cursor.execute(sql, parameters))
return self
def executemany(self, sql, seq_of_parameters):
self._trace(sql, seq_of_parameters, lambda: self._cursor.executemany(sql, seq_of_parameters))
return self
def executescript(self, sql_script):
self._trace(sql_script, None, lambda: self._cursor.executescript(sql_script))
return self
def fetchone(self):
return self._cursor.fetchone()
def fetchall(self):
return self._cursor.fetchall()
def fetchmany(self, size=None):
if size is None:
return self._cursor.fetchmany()
return self._cursor.fetchmany(size)
def close(self):
return self._cursor.close()
@property
def rowcount(self):
return self._cursor.rowcount
@property
def lastrowid(self):
return self._cursor.lastrowid
def __iter__(self):
return iter(self._cursor)
def __getattr__(self, item):
return getattr(self._cursor, item)
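The deleted `TracedCursor` above wraps every `execute` call in a `time.perf_counter()` measurement and reports queries that cross the runtime threshold, while `__getattr__` delegation keeps the rest of the DB-API cursor surface intact. A trimmed, runnable sketch of the same wrapping idea (the `TimedCursor` name and simplified threshold handling are mine):

```python
import sqlite3
import time

class TimedCursor:
    """Time each execute() and report queries at or above threshold_ms."""

    def __init__(self, cursor, on_slow, threshold_ms=0.0):
        self._cursor = cursor
        self._on_slow = on_slow
        self._threshold_ms = threshold_ms

    def execute(self, sql, params=()):
        start = time.perf_counter()
        try:
            self._cursor.execute(sql, params)
        finally:
            # Report in the finally block so failed queries are timed too
            elapsed_ms = (time.perf_counter() - start) * 1000.0
            if elapsed_ms >= self._threshold_ms:
                self._on_slow(sql, elapsed_ms)
        return self

    def __getattr__(self, name):
        # Delegate fetchone/fetchall/rowcount/... to the real cursor
        return getattr(self._cursor, name)
```

Returning `self` from `execute` preserves the `cursor.execute(...).fetchone()` chaining style used elsewhere in the diff.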
 class ConnectionPool:
@@ -183,70 +42,14 @@ class ConnectionPool:
     def _create_connection(self):
         """创建新的数据库连接"""
-        conn = sqlite3.connect(
-            self.database,
-            check_same_thread=False,
-            timeout=DB_CONNECT_TIMEOUT_SECONDS,
-        )
+        conn = sqlite3.connect(self.database, check_same_thread=False)
         conn.row_factory = sqlite3.Row
-        pragma_statements = [
-            "PRAGMA foreign_keys=ON",
-            "PRAGMA journal_mode=WAL",
-            "PRAGMA synchronous=NORMAL",
-            f"PRAGMA busy_timeout={DB_BUSY_TIMEOUT_MS}",
-            "PRAGMA temp_store=MEMORY",
-            f"PRAGMA cache_size={-DB_CACHE_SIZE_KB}",
-            f"PRAGMA wal_autocheckpoint={DB_WAL_AUTOCHECKPOINT_PAGES}",
-        ]
-        if DB_MMAP_SIZE_MB > 0:
-            pragma_statements.append(f"PRAGMA mmap_size={DB_MMAP_SIZE_MB * 1024 * 1024}")
-        for statement in pragma_statements:
-            try:
-                conn.execute(statement)
-            except sqlite3.DatabaseError as e:
-                logger.warning(f"设置数据库参数失败 ({statement}): {e}")
+        # 设置WAL模式提高并发性能
+        conn.execute('PRAGMA journal_mode=WAL')
+        # 设置合理的超时时间
+        conn.execute('PRAGMA busy_timeout=5000')
         return conn
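The `-` side applies its tuning as a list of PRAGMA statements and logs-and-continues on failure rather than aborting connection creation; note that SQLite treats a negative `cache_size` as a size in KiB. A runnable sketch of that best-effort loop (the `create_tuned_connection` name is mine; defaults are borrowed from the config constants in the diff):

```python
import sqlite3

def create_tuned_connection(path=":memory:", busy_timeout_ms=10000, cache_kb=8192):
    """Open a connection and apply best-effort PRAGMA tuning, ignoring individual failures."""
    conn = sqlite3.connect(path, check_same_thread=False)
    conn.row_factory = sqlite3.Row
    for pragma in (
        "PRAGMA foreign_keys=ON",
        "PRAGMA journal_mode=WAL",         # needs a real file; :memory: falls back to 'memory'
        "PRAGMA synchronous=NORMAL",
        f"PRAGMA busy_timeout={busy_timeout_ms}",
        f"PRAGMA cache_size={-cache_kb}",  # negative value = size in KiB
    ):
        try:
            conn.execute(pragma)
        except sqlite3.DatabaseError:
            pass  # best-effort tuning, mirroring the diff's warn-and-continue approach
    return conn
```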
-    def _close_connection(self, conn) -> None:
-        if conn is None:
-            return
-        try:
-            conn.close()
-        except Exception as e:
-            logger.warning(f"关闭连接失败: {e}")
-
-    def _is_connection_healthy(self, conn) -> bool:
-        if conn is None:
-            return False
-        try:
-            conn.rollback()
-            conn.execute("SELECT 1")
-            return True
-        except sqlite3.Error as e:
-            logger.warning(f"连接健康检查失败(数据库错误): {e}")
-        except Exception as e:
-            logger.warning(f"连接健康检查失败(未知错误): {e}")
-        return False
-
-    def _replenish_pool_if_needed(self) -> None:
-        with self._lock:
-            if self._pool.qsize() >= self.pool_size:
-                return
-            new_conn = None
-            try:
-                new_conn = self._create_connection()
-                self._pool.put(new_conn, block=False)
-                self._created_connections += 1
-            except Full:
-                if new_conn:
-                    self._close_connection(new_conn)
-            except Exception as e:
-                if new_conn:
-                    self._close_connection(new_conn)
-                logger.warning(f"重建连接失败: {e}")
     def get_connection(self):
         """
         从连接池获取连接
@@ -267,20 +70,57 @@ class ConnectionPool:
         Args:
             conn: 要归还的连接
         """
+        import sqlite3
+        from queue import Full
         if conn is None:
             return
-        if self._is_connection_healthy(conn):
+        connection_healthy = False
+        try:
+            # 回滚任何未提交的事务
+            conn.rollback()
+            # 安全修复:验证连接是否健康,防止损坏的连接污染连接池
+            conn.execute("SELECT 1")
+            connection_healthy = True
+        except sqlite3.Error as e:
+            # 数据库相关错误,连接可能损坏
+            print(f"连接健康检查失败(数据库错误): {e}")
+        except Exception as e:
+            print(f"连接健康检查失败(未知错误): {e}")
+        if connection_healthy:
             try:
                 self._pool.put(conn, block=False)
-                return
+                return  # 成功归还
             except Full:
-                logger.warning("连接池已满,关闭多余连接")
-                self._close_connection(conn)
-                return
-        self._close_connection(conn)
-        self._replenish_pool_if_needed()
+                # 队列已满(不应该发生,但处理它)
+                print(f"警告: 连接池已满,关闭多余连接")
+                connection_healthy = False  # 标记为需要关闭
+        # 连接不健康或队列已满,关闭它
+        try:
+            conn.close()
+        except Exception as close_error:
+            print(f"关闭连接失败: {close_error}")
+        # 如果连接不健康,尝试创建新连接补充池
+        if not connection_healthy:
+            with self._lock:
+                # 双重检查:确保池确实需要补充
+                if self._pool.qsize() < self.pool_size:
+                    try:
+                        new_conn = self._create_connection()
+                        self._created_connections += 1
+                        self._pool.put(new_conn, block=False)
+                    except Full:
+                        # 在获取锁期间池被填满了,关闭新建的连接
+                        try:
+                            new_conn.close()
+                        except Exception:
+                            pass
+                    except Exception as create_error:
+                        print(f"重建连接失败: {create_error}")
     def close_all(self):
         """关闭所有连接"""
@@ -289,15 +129,15 @@ class ConnectionPool:
                 conn = self._pool.get(block=False)
                 conn.close()
             except Exception as e:
-                logger.warning(f"关闭连接失败: {e}")
+                print(f"关闭连接失败: {e}")

     def get_stats(self):
         """获取连接池统计信息"""
         return {
-            "pool_size": self.pool_size,
-            "available": self._pool.qsize(),
-            "in_use": self.pool_size - self._pool.qsize(),
-            "total_created": self._created_connections,
+            'pool_size': self.pool_size,
+            'available': self._pool.qsize(),
+            'in_use': self.pool_size - self._pool.qsize(),
+            'total_created': self._created_connections
         }
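Both sides of `return_connection` enforce the same invariant: a connection only re-enters the pool after a rollback plus a `SELECT 1` probe succeed; anything else (including a full queue) gets closed instead. Stripped to its core (the free-standing `return_to_pool` helper is illustrative, not from the diff):

```python
import sqlite3
from queue import Full, Queue

def return_to_pool(pool: Queue, conn) -> bool:
    """Health-check a connection and re-pool it; close and report False otherwise."""
    try:
        conn.rollback()           # drop any uncommitted transaction
        conn.execute("SELECT 1")  # probe: a broken handle raises here
    except sqlite3.Error:
        conn.close()
        return False
    try:
        pool.put(conn, block=False)
        return True
    except Full:
        conn.close()  # pool already full: discard rather than leak the handle
        return False
```

The probe-before-pooling step is what keeps one crashed query from poisoning every later borrower of the same handle.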
@@ -324,60 +164,31 @@ class PooledConnection:
         """with语句结束时自动归还连接 [已修复Bug#3]"""
         try:
             if exc_type is not None:
+                # 发生异常,回滚事务
                 self._conn.rollback()
-                logger.warning(f"数据库事务已回滚: {exc_type.__name__}")
+                print(f"数据库事务已回滚: {exc_type.__name__}")
+            # 注意: 不自动commit要求用户显式调用conn.commit()
-            if self._cursor is not None:
+            if self._cursor:
                 self._cursor.close()
                 self._cursor = None
         except Exception as e:
-            logger.warning(f"关闭游标失败: {e}")
+            print(f"关闭游标失败: {e}")
         finally:
+            # 归还连接
             self._pool.return_connection(self._conn)
-        return False
+        return False  # 不抑制异常

-    def _on_query_executed(self, sql: str, parameters, elapsed_ms: float) -> None:
-        slow_query_ms, _ = _get_slow_query_runtime_values()
-        if slow_query_ms <= 0:
-            return
-        if elapsed_ms < slow_query_ms:
-            return
-        params_info = _describe_params(parameters)
-        try:
-            from services.slow_sql_metrics import record_slow_sql
-            record_slow_sql(sql=sql, duration_ms=elapsed_ms, params_info=params_info)
-        except Exception:
-            pass
-        logger.warning(f"[慢SQL] {elapsed_ms:.1f}ms sql=\"{_compact_sql(sql)}\" params={params_info}")

     def cursor(self):
         """获取游标"""
         if self._cursor is None:
-            raw_cursor = self._conn.cursor()
-            self._cursor = TracedCursor(raw_cursor, self._on_query_executed)
+            self._cursor = self._conn.cursor()
         return self._cursor

     def commit(self):
         """提交事务"""
-        for attempt in range(DB_LOCK_RETRY_COUNT + 1):
-            try:
-                self._conn.commit()
-                return
-            except sqlite3.OperationalError as e:
-                if (not _is_lock_conflict_error(e)) or attempt >= DB_LOCK_RETRY_COUNT:
-                    raise
-                sleep_seconds = (DB_LOCK_RETRY_BASE_MS * (2**attempt)) / 1000.0
-                logger.warning(
-                    f"数据库提交遇到锁冲突,{sleep_seconds:.3f}s 后重试 "
-                    f"({attempt + 1}/{DB_LOCK_RETRY_COUNT})"
-                )
-                time.sleep(sleep_seconds)
+        self._conn.commit()

     def rollback(self):
         """回滚事务"""
@@ -386,9 +197,9 @@ class PooledConnection:
     def execute(self, sql, parameters=None):
         """执行SQL"""
         cursor = self.cursor()
-        if parameters is None:
-            return cursor.execute(sql)
-        return cursor.execute(sql, parameters)
+        if parameters:
+            return cursor.execute(sql, parameters)
+        return cursor.execute(sql)

     def fetchone(self):
         """获取一行"""
@@ -434,7 +245,7 @@ def init_pool(database, pool_size=5):
     with _pool_lock:
         if _pool is None:
            _pool = ConnectionPool(database, pool_size)
-            logger.info(f"[OK] 数据库连接池已初始化 (大小: {pool_size})")
+            print(f" 数据库连接池已初始化 (大小: {pool_size})")

 def get_db():
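The `-` side of `commit()` retries lock conflicts with exponential backoff: with the defaults `DB_LOCK_RETRY_COUNT=3` and `DB_LOCK_RETRY_BASE_MS=50` it waits 50 ms, 100 ms, then 200 ms, and re-raises anything that is not a "locked"/"busy" `OperationalError`. A standalone sketch with an injectable `sleep` so the backoff is observable (the `commit_with_retry` name is mine):

```python
import sqlite3
import time

def commit_with_retry(conn, retries=3, base_ms=50, sleep=time.sleep):
    """Commit, retrying lock conflicts with 50/100/200ms backoff; re-raise other errors."""
    for attempt in range(retries + 1):
        try:
            conn.commit()
            return
        except sqlite3.OperationalError as e:
            msg = str(e).lower()
            # Only lock contention is retryable, and only while attempts remain
            if ("locked" not in msg and "busy" not in msg) or attempt >= retries:
                raise
            sleep(base_ms * (2 ** attempt) / 1000.0)
```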
View File
@@ -7,77 +7,51 @@ services:
     ports:
       - "51232:51233"
     volumes:
-      - ./data:/app/data  # 数据库持久化
-      - ./logs:/app/logs  # 日志持久化
-      - ./截图:/app/截图  # 截图持久化
-      - /etc/localtime:/etc/localtime:ro  # 时区同步
-      - ./static:/app/static  # 静态文件(实时更新)
-      - ./templates:/app/templates  # 模板文件(实时更新)
-      - ./app.py:/app/app.py  # 主程序(实时更新)
-      - ./database.py:/app/database.py  # 数据库模块(实时更新)
-      - ./crypto_utils.py:/app/crypto_utils.py  # 加密工具(实时更新)
-      # 代码热更新
-      - ./services:/app/services
-      - ./routes:/app/routes
-      - ./db:/app/db
-      - ./security:/app/security
-      - ./realtime:/app/realtime
-      - ./api_browser.py:/app/api_browser.py
-      - ./app_config.py:/app/app_config.py
-      - ./app_logger.py:/app/app_logger.py
-      - ./app_security.py:/app/app_security.py
-      - ./browser_pool_worker.py:/app/browser_pool_worker.py
-      - ./crypto_utils.py:/app/crypto_utils.py
-      - ./db_pool.py:/app/db_pool.py
-      - ./email_service.py:/app/email_service.py
-      - ./password_utils.py:/app/password_utils.py
-      - ./playwright_automation.py:/app/playwright_automation.py
-      - ./task_checkpoint.py:/app/task_checkpoint.py
+      - ./data:/app/data
+      - ./logs:/app/logs
+      - ./截图:/app/截图
+      - ./playwright:/ms-playwright
+      - /etc/localtime:/etc/localtime:ro
+      - ./static:/app/static
+      - ./templates:/app/templates
+      - ./app.py:/app/app.py
+      - ./database.py:/app/database.py
     dns:
       - 223.5.5.5
       - 114.114.114.114
-      - 119.29.29.29
     environment:
       - TZ=Asia/Shanghai
       - PYTHONUNBUFFERED=1
-      # Flask 配置
+      - PLAYWRIGHT_BROWSERS_PATH=/ms-playwright
       - FLASK_ENV=production
-      - FLASK_DEBUG=false
-      - SOCKETIO_ASYNC_MODE=eventlet
-      # 服务器配置
       - SERVER_HOST=0.0.0.0
       - SERVER_PORT=51233
-      # 数据库配置
-      - DB_FILE=data/app_data.db
-      - DB_POOL_SIZE=5
-      - SYSTEM_CONFIG_CACHE_TTL_SECONDS=30
-      - DB_SLOW_QUERY_MS=120
-      - DB_SLOW_SQL_WINDOW_SECONDS=86400
-      - DB_SLOW_SQL_TOP_LIMIT=12
-      - DB_SLOW_SQL_RECENT_LIMIT=50
-      - DB_SLOW_SQL_MAX_EVENTS=20000
-      - ADMIN_SLOW_SQL_METRICS_CACHE_TTL_SECONDS=3
-      # 并发控制配置
-      - MAX_CONCURRENT_GLOBAL=2
-      - MAX_CONCURRENT_PER_ACCOUNT=1
-      - MAX_CONCURRENT_CONTEXTS=100
-      # 安全配置
-      # 加密密钥配置(重要!防止容器重建时丢失密钥)
-      - ENCRYPTION_KEY_RAW=${ENCRYPTION_KEY_RAW}
-      - SESSION_LIFETIME_HOURS=24
-      - SESSION_COOKIE_SECURE=true
-      - HTTPS_ENABLED=true
-      - MAX_CAPTCHA_ATTEMPTS=5
-      - MAX_IP_ATTEMPTS_PER_HOUR=10
-      # 日志配置
       - LOG_LEVEL=INFO
-      - SECURITY_LOG_ALLOW_STRATEGY=0
-      - LOG_FILE=logs/app.log
-      - API_DIAGNOSTIC_LOG=0
-      - API_DIAGNOSTIC_SLOW_MS=0
-      # 状态推送节流(秒)
-      - STATUS_PUSH_INTERVAL_SECONDS=2
-      # wkhtmltoimage 截图配置
-      - WKHTMLTOIMAGE_FULL_PAGE=0
-      # 知识管理平台配置
-      - ZSGL_LOGIN_URL=https://postoa.aidunsoft.com/admin/login.aspx
-      - ZSGL_INDEX_URL_PATTERN=index.aspx
-      - PAGE_LOAD_TIMEOUT=60000
     restart: unless-stopped
-    shm_size: 2gb  # 为Chromium分配共享内存
-
-    # 内存和CPU资源限制
-    mem_limit: 4g  # 硬限制:最大4GB内存
-    mem_reservation: 2g  # 软限制:预留2GB
-    cpus: '2.0'  # 限制使用2个CPU核心
-
-    # 健康检查(可选)
+    shm_size: 2gb
+    mem_limit: 4g
+    mem_reservation: 2g
+    cpus: '2.0'
     healthcheck:
       test: ["CMD-SHELL", "curl -f http://localhost:51233 || exit 1"]
-      interval: 30s
+      interval: 5m
       timeout: 10s
       retries: 3
       start_period: 40s
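On both sides the compose healthcheck relies on `curl -f`, which exits non-zero for any HTTP status of 400 or above; that is exactly why the commit message notes adding curl to the Dockerfile. For an image without curl, an equivalent probe can be sketched in Python (the `is_healthy` helper is hypothetical, not part of the project):

```python
import urllib.request

def is_healthy(url="http://localhost:51233", timeout=10):
    """Mirror `curl -f`: healthy only for an HTTP response with status < 400."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status < 400
    except Exception:
        # Connection refused, timeout, or HTTPError (>=400) all count as unhealthy
        return False
```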
File diff suppressed because it is too large
1591 playwright_automation.py Executable file
File diff suppressed because it is too large
View File
@@ -2,7 +2,6 @@
 # -*- coding: utf-8 -*-
 from __future__ import annotations
-import json
 import os
 import time
@@ -10,40 +9,8 @@ from services.runtime import get_logger, get_socketio
 from services.state import safe_get_account, safe_iter_task_status_items

-def _to_int(value, default: int = 0) -> int:
-    try:
-        return int(value)
-    except Exception:
-        return int(default)
-
-def _payload_signature(payload: dict) -> str:
-    try:
-        return json.dumps(payload, ensure_ascii=False, sort_keys=True, separators=(",", ":"), default=str)
-    except Exception:
-        return repr(payload)
-
-def _should_emit(
-    *,
-    last_sig: str | None,
-    last_ts: float,
-    new_sig: str,
-    now_ts: float,
-    min_interval: float,
-    force_interval: float,
-) -> bool:
-    if last_sig is None:
-        return True
-    if (now_ts - last_ts) >= force_interval:
-        return True
-    if new_sig != last_sig and (now_ts - last_ts) >= min_interval:
-        return True
-    return False

 def status_push_worker() -> None:
-    """后台线程:按间隔推送排队/运行中任务状态(变更驱动+心跳兜底)。"""
+    """后台线程:按间隔推送排队/运行中任务状态更新(可节流)。"""
     logger = get_logger()
     try:
         push_interval = float(os.environ.get("STATUS_PUSH_INTERVAL_SECONDS", "1"))
@@ -51,41 +18,18 @@ def status_push_worker() -> None:
push_interval = 1.0 push_interval = 1.0
push_interval = max(0.5, push_interval) push_interval = max(0.5, push_interval)
try:
queue_min_interval = float(os.environ.get("STATUS_PUSH_MIN_QUEUE_INTERVAL_SECONDS", str(push_interval)))
except Exception:
queue_min_interval = push_interval
queue_min_interval = max(push_interval, queue_min_interval)
try:
progress_min_interval = float(
os.environ.get("STATUS_PUSH_MIN_PROGRESS_INTERVAL_SECONDS", str(push_interval))
)
except Exception:
progress_min_interval = push_interval
progress_min_interval = max(push_interval, progress_min_interval)
try:
force_interval = float(os.environ.get("STATUS_PUSH_FORCE_INTERVAL_SECONDS", "10"))
except Exception:
force_interval = 10.0
force_interval = max(push_interval, force_interval)
socketio = get_socketio() socketio = get_socketio()
from services.tasks import get_task_scheduler from services.tasks import get_task_scheduler
scheduler = get_task_scheduler() scheduler = get_task_scheduler()
emitted_state: dict[str, dict] = {}
while True: while True:
try: try:
now_ts = time.time()
queue_snapshot = scheduler.get_queue_state_snapshot() queue_snapshot = scheduler.get_queue_state_snapshot()
pending_total = int(queue_snapshot.get("pending_total", 0) or 0) pending_total = int(queue_snapshot.get("pending_total", 0) or 0)
running_total = int(queue_snapshot.get("running_total", 0) or 0) running_total = int(queue_snapshot.get("running_total", 0) or 0)
running_by_user = queue_snapshot.get("running_by_user") or {} running_by_user = queue_snapshot.get("running_by_user") or {}
positions = queue_snapshot.get("positions") or {} positions = queue_snapshot.get("positions") or {}
active_account_ids = set()
status_items = safe_iter_task_status_items() status_items = safe_iter_task_status_items()
for account_id, status_info in status_items: for account_id, status_info in status_items:
@@ -95,15 +39,11 @@ def status_push_worker() -> None:
                 user_id = status_info.get("user_id")
                 if not user_id:
                     continue
-                active_account_ids.add(str(account_id))
                 account = safe_get_account(user_id, account_id)
                 if not account:
                     continue
-                user_id_int = _to_int(user_id)
                 account_data = account.to_dict()
-                pos = positions.get(account_id) or positions.get(str(account_id)) or {}
+                pos = positions.get(account_id) or {}
                 account_data.update(
                     {
                         "queue_pending_total": pending_total,
@@ -111,23 +51,10 @@ def status_push_worker() -> None:
                         "queue_ahead": pos.get("queue_ahead"),
                         "queue_position": pos.get("queue_position"),
                         "queue_is_vip": pos.get("is_vip"),
-                        "queue_running_user": _to_int(running_by_user.get(user_id_int, running_by_user.get(str(user_id_int), 0))),
+                        "queue_running_user": int(running_by_user.get(int(user_id), 0) or 0),
                     }
                 )
-                cache_entry = emitted_state.setdefault(str(account_id), {})
-                account_sig = _payload_signature(account_data)
-                if _should_emit(
-                    last_sig=cache_entry.get("account_sig"),
-                    last_ts=float(cache_entry.get("account_ts", 0) or 0),
-                    new_sig=account_sig,
-                    now_ts=now_ts,
-                    min_interval=queue_min_interval,
-                    force_interval=force_interval,
-                ):
-                    socketio.emit("account_update", account_data, room=f"user_{user_id}")
-                    cache_entry["account_sig"] = account_sig
-                    cache_entry["account_ts"] = now_ts
+                socketio.emit("account_update", account_data, room=f"user_{user_id}")
                 if status != "运行中":
                     continue
@@ -147,26 +74,9 @@ def status_push_worker() -> None:
                     "queue_running_total": running_total,
                     "queue_ahead": pos.get("queue_ahead"),
                     "queue_position": pos.get("queue_position"),
-                    "queue_running_user": _to_int(running_by_user.get(user_id_int, running_by_user.get(str(user_id_int), 0))),
+                    "queue_running_user": int(running_by_user.get(int(user_id), 0) or 0),
                 }
-                progress_sig = _payload_signature(progress_data)
-                if _should_emit(
-                    last_sig=cache_entry.get("progress_sig"),
-                    last_ts=float(cache_entry.get("progress_ts", 0) or 0),
-                    new_sig=progress_sig,
-                    now_ts=now_ts,
-                    min_interval=progress_min_interval,
-                    force_interval=force_interval,
-                ):
-                    socketio.emit("task_progress", progress_data, room=f"user_{user_id}")
-                    cache_entry["progress_sig"] = progress_sig
-                    cache_entry["progress_ts"] = now_ts
-            if emitted_state:
-                stale_ids = [account_id for account_id in emitted_state.keys() if account_id not in active_account_ids]
-                for account_id in stale_ids:
-                    emitted_state.pop(account_id, None)
+                socketio.emit("task_progress", progress_data, room=f"user_{user_id}")
             time.sleep(push_interval)
         except Exception as e:
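The helpers removed above implement change-driven emission with a heartbeat fallback: serialize each payload into a stable JSON signature, emit when the signature changes (rate-limited by a minimum interval) and unconditionally every `force_interval` seconds so clients can resynchronize. The decision logic in isolation, following the removed `_payload_signature`/`_should_emit` pair:

```python
import json

def payload_signature(payload):
    """Stable, key-order-independent signature of a payload dict."""
    return json.dumps(payload, ensure_ascii=False, sort_keys=True,
                      separators=(",", ":"), default=str)

def should_emit(last_sig, last_ts, new_sig, now_ts, min_interval, force_interval):
    if last_sig is None:
        return True   # first observation: always emit
    if now_ts - last_ts >= force_interval:
        return True   # heartbeat: emit even if unchanged
    # changed payload: emit, but no faster than min_interval
    return new_sig != last_sig and now_ts - last_ts >= min_interval
```

`sort_keys=True` plus fixed separators is what makes two dicts with the same content but different insertion order compare equal, so reordering alone never triggers an emit.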
View File
@@ -6,11 +6,9 @@ schedule==1.2.0
 psutil==5.9.6
 pytz==2024.1
 bcrypt==4.0.1
-requests==2.32.3
+requests==2.31.0
 python-dotenv==1.0.0
 beautifulsoup4==4.12.2
 cryptography>=41.0.0
-webauthn>=2.7.1
 Pillow>=10.0.0
 playwright==1.42.0
-eventlet==0.36.1
View File
@@ -8,15 +8,6 @@ admin_api_bp = Blueprint("admin_api", __name__, url_prefix="/yuyx/api")
 # Import side effects: register routes on blueprint
 from routes.admin_api import core as _core  # noqa: F401
-from routes.admin_api import system_config_api as _system_config_api  # noqa: F401
-from routes.admin_api import operations_api as _operations_api  # noqa: F401
-from routes.admin_api import announcements_api as _announcements_api  # noqa: F401
-from routes.admin_api import users_api as _users_api  # noqa: F401
-from routes.admin_api import account_api as _account_api  # noqa: F401
-from routes.admin_api import feedback_api as _feedback_api  # noqa: F401
-from routes.admin_api import infra_api as _infra_api  # noqa: F401
-from routes.admin_api import tasks_api as _tasks_api  # noqa: F401
-from routes.admin_api import email_api as _email_api  # noqa: F401

 # Export security blueprint for app registration
 from routes.admin_api.security import security_bp  # noqa: F401
View File
@@ -1,83 +0,0 @@
-#!/usr/bin/env python3
-# -*- coding: utf-8 -*-
-from __future__ import annotations
-
-import database
-from app_security import validate_password
-from flask import jsonify, request, session
-from routes.admin_api import admin_api_bp
-from routes.decorators import admin_required
-
-# ==================== 密码重置 / 反馈(管理员) ====================
-
-@admin_api_bp.route("/admin/password", methods=["PUT"])
-@admin_required
-def update_admin_password():
-    """修改管理员密码(要求提供当前密码并校验新密码强度)"""
-    data = request.json or {}
-    current_password = (data.get("current_password") or "").strip()
-    new_password = (data.get("new_password") or "").strip()
-    if not current_password:
-        return jsonify({"error": "当前密码不能为空"}), 400
-    if not new_password:
-        return jsonify({"error": "新密码不能为空"}), 400
-    if current_password == new_password:
-        return jsonify({"error": "新密码不能与当前密码相同"}), 400
-    is_valid, error_msg = validate_password(new_password)
-    if not is_valid:
-        return jsonify({"error": error_msg}), 400
-    username = session.get("admin_username")
-    if not username:
-        return jsonify({"error": "未登录"}), 401
-    admin = database.verify_admin(username, current_password)
-    if not admin:
-        return jsonify({"error": "当前密码错误"}), 401
-    if database.update_admin_password(username, new_password):
-        session["admin_reauth_until"] = 0
-        session.modified = True
-        return jsonify({"success": True})
-    return jsonify({"error": "修改失败"}), 400
-
-@admin_api_bp.route("/admin/username", methods=["PUT"])
-@admin_required
-def update_admin_username():
-    """修改管理员用户名"""
-    data = request.json or {}
-    new_username = (data.get("new_username") or "").strip()
-    if not new_username:
-        return jsonify({"error": "用户名不能为空"}), 400
-    old_username = session.get("admin_username")
-    if database.update_admin_username(old_username, new_username):
-        session["admin_username"] = new_username
-        return jsonify({"success": True})
-    return jsonify({"error": "修改失败,用户名可能已存在"}), 400
-
-@admin_api_bp.route("/users/<int:user_id>/reset_password", methods=["POST"])
-@admin_required
-def admin_reset_password_route(user_id):
-    """管理员直接重置用户密码(无需审核)"""
-    data = request.json or {}
-    new_password = (data.get("new_password") or "").strip()
-    if not new_password:
-        return jsonify({"error": "新密码不能为空"}), 400
-    is_valid, error_msg = validate_password(new_password)
-    if not is_valid:
-        return jsonify({"error": error_msg}), 400
-    if database.admin_reset_user_password(user_id, new_password):
-        return jsonify({"message": "密码重置成功"})
-    return jsonify({"error": "重置失败,用户不存在"}), 400
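The deleted `update_admin_password` handler validates in a deliberate order: presence, difference from the current password, strength, session, and only then the bcrypt credential check, so cheap rejections never pay the cost of a hash verification. A condensed, framework-free sketch of that ordering (the length-8 rule below is a stand-in for `validate_password`, whose real policy is not shown in this diff):

```python
def change_password(current, new, verify, update):
    """Return (status, message); the check order mirrors the deleted endpoint."""
    current, new = (current or "").strip(), (new or "").strip()
    if not current:
        return 400, "current password required"
    if not new:
        return 400, "new password required"
    if current == new:
        return 400, "new password must differ from the current one"
    if len(new) < 8:  # stand-in for validate_password()
        return 400, "password too weak"
    if not verify(current):  # expensive bcrypt check happens last
        return 401, "current password wrong"
    return (200, "ok") if update(new) else (400, "update failed")
```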
View File
@@ -1,144 +0,0 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from __future__ import annotations
import os
import posixpath
import secrets
import time
import database
from app_config import get_config
from app_logger import get_logger
from app_security import is_safe_path, sanitize_filename
from flask import current_app, jsonify, request, url_for
from routes.admin_api import admin_api_bp
from routes.decorators import admin_required
logger = get_logger("app")
config = get_config()
def _get_upload_dir():
rel_dir = getattr(config, "ANNOUNCEMENT_IMAGE_DIR", "static/announcements")
if not is_safe_path(current_app.root_path, rel_dir):
rel_dir = "static/announcements"
abs_dir = os.path.join(current_app.root_path, rel_dir)
os.makedirs(abs_dir, exist_ok=True)
return abs_dir, rel_dir
def _get_file_size(file_storage):
try:
file_storage.stream.seek(0, os.SEEK_END)
size = file_storage.stream.tell()
file_storage.stream.seek(0)
return size
except Exception:
return None
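The seek/tell pattern in `_get_file_size` can be exercised in isolation with an in-memory stream; a minimal sketch (the helper name `get_stream_size` is illustrative, not part of this codebase):

```python
import io

def get_stream_size(stream):
    """Measure a seekable stream's size without consuming it: seek to end, tell, rewind."""
    try:
        stream.seek(0, io.SEEK_END)   # jump to the end of the stream
        size = stream.tell()          # byte offset at the end == total size
        stream.seek(0)                # rewind so the caller can still read the data
        return size
    except Exception:
        return None                   # non-seekable stream: size unknown

buf = io.BytesIO(b"hello world")
print(get_stream_size(buf))  # 11
print(buf.read())            # b'hello world' (the stream was rewound)
```

Returning `None` instead of raising mirrors the handler above, which treats an unknown size as "skip the size check" rather than rejecting the upload.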
# ==================== Announcement management API (admin) ====================
@admin_api_bp.route("/announcements/upload_image", methods=["POST"])
@admin_required
def admin_upload_announcement_image():
"""Upload an announcement image and return an accessible URL"""
file = request.files.get("file")
if not file or not file.filename:
return jsonify({"error": "请选择图片"}), 400
filename = sanitize_filename(file.filename)
ext = os.path.splitext(filename)[1].lower()
allowed_exts = getattr(config, "ALLOWED_ANNOUNCEMENT_IMAGE_EXTENSIONS", {".png", ".jpg", ".jpeg"})
if not ext or ext not in allowed_exts:
return jsonify({"error": "不支持的图片格式"}), 400
if file.mimetype and not str(file.mimetype).startswith("image/"):
return jsonify({"error": "文件类型无效"}), 400
size = _get_file_size(file)
max_size = int(getattr(config, "MAX_ANNOUNCEMENT_IMAGE_SIZE", 5 * 1024 * 1024))
if size is not None and size > max_size:
max_mb = max_size // 1024 // 1024
return jsonify({"error": f"图片大小不能超过{max_mb}MB"}), 400
abs_dir, rel_dir = _get_upload_dir()
token = secrets.token_hex(6)
name = f"announcement_{int(time.time())}_{token}{ext}"
save_path = os.path.join(abs_dir, name)
file.save(save_path)
static_root = os.path.join(current_app.root_path, "static")
rel_to_static = os.path.relpath(abs_dir, static_root)
if rel_to_static.startswith(".."):
rel_to_static = "announcements"
url_path = posixpath.join(rel_to_static.replace(os.sep, "/"), name)
return jsonify({"success": True, "url": url_for("serve_static", filename=url_path)})
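The handler above derives a collision-resistant server-side filename from a Unix timestamp plus a random hex token, keeping only the validated extension from the client's name. A standalone sketch of that scheme (`build_upload_name` is a hypothetical name, not from this repository):

```python
import os
import secrets
import time

def build_upload_name(original: str, prefix: str = "announcement") -> str:
    """Unique server-side filename: prefix + Unix time + 12 random hex chars + lowercased extension."""
    ext = os.path.splitext(original)[1].lower()  # extension is assumed pre-validated by the caller
    token = secrets.token_hex(6)                 # 6 random bytes -> 12 hex characters
    return f"{prefix}_{int(time.time())}_{token}{ext}"

name = build_upload_name("photo.PNG")
print(name)  # e.g. announcement_1736300000_a1b2c3d4e5f6.png
```

Discarding the client-supplied basename entirely (rather than sanitizing and reusing it) sidesteps both collisions and path-injection concerns in one step.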
@admin_api_bp.route("/announcements", methods=["GET"])
@admin_required
def admin_get_announcements():
"""List announcements"""
try:
limit = int(request.args.get("limit", 50))
offset = int(request.args.get("offset", 0))
except (TypeError, ValueError):
limit, offset = 50, 0
limit = max(1, min(200, limit))
offset = max(0, offset)
return jsonify(database.get_announcements(limit=limit, offset=offset))
@admin_api_bp.route("/announcements", methods=["POST"])
@admin_required
def admin_create_announcement():
"""Create an announcement (enabled by default, replacing the old one)"""
data = request.json or {}
title = (data.get("title") or "").strip()
content = (data.get("content") or "").strip()
image_url = (data.get("image_url") or "").strip()
is_active = bool(data.get("is_active", True))
if image_url and len(image_url) > 1000:
return jsonify({"error": "图片地址过长"}), 400
announcement_id = database.create_announcement(title, content, image_url=image_url, is_active=is_active)
if not announcement_id:
return jsonify({"error": "标题和内容不能为空"}), 400
return jsonify({"success": True, "id": announcement_id})
@admin_api_bp.route("/announcements/<int:announcement_id>/activate", methods=["POST"])
@admin_required
def admin_activate_announcement(announcement_id):
"""Activate an announcement (automatically deactivates all others)"""
if not database.get_announcement_by_id(announcement_id):
return jsonify({"error": "公告不存在"}), 404
ok = database.set_announcement_active(announcement_id, True)
return jsonify({"success": ok})
@admin_api_bp.route("/announcements/<int:announcement_id>/deactivate", methods=["POST"])
@admin_required
def admin_deactivate_announcement(announcement_id):
"""Deactivate an announcement"""
if not database.get_announcement_by_id(announcement_id):
return jsonify({"error": "公告不存在"}), 404
ok = database.set_announcement_active(announcement_id, False)
return jsonify({"success": ok})
@admin_api_bp.route("/announcements/<int:announcement_id>", methods=["DELETE"])
@admin_required
def admin_delete_announcement(announcement_id):
"""Delete an announcement"""
if not database.get_announcement_by_id(announcement_id):
return jsonify({"error": "公告不存在"}), 404
ok = database.delete_announcement(announcement_id)
return jsonify({"success": ok})

File diff suppressed because it is too large


@@ -1,214 +0,0 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from __future__ import annotations
import email_service
from app_logger import get_logger
from app_security import validate_email
from flask import jsonify, request
from routes.admin_api import admin_api_bp
from routes.decorators import admin_required
logger = get_logger("app")
@admin_api_bp.route("/email/settings", methods=["GET"])
@admin_required
def get_email_settings_api():
"""Get global email settings"""
try:
settings = email_service.get_email_settings()
return jsonify(settings)
except Exception as e:
logger.error(f"获取邮件设置失败: {e}")
return jsonify({"error": "获取邮件设置失败"}), 500
@admin_api_bp.route("/email/settings", methods=["POST"])
@admin_required
def update_email_settings_api():
"""Update global email settings"""
try:
data = request.json or {}
enabled = data.get("enabled", False)
failover_enabled = data.get("failover_enabled", True)
register_verify_enabled = data.get("register_verify_enabled")
login_alert_enabled = data.get("login_alert_enabled")
base_url = data.get("base_url")
task_notify_enabled = data.get("task_notify_enabled")
email_service.update_email_settings(
enabled=enabled,
failover_enabled=failover_enabled,
register_verify_enabled=register_verify_enabled,
login_alert_enabled=login_alert_enabled,
base_url=base_url,
task_notify_enabled=task_notify_enabled,
)
return jsonify({"success": True})
except Exception as e:
logger.error(f"更新邮件设置失败: {e}")
return jsonify({"error": "更新邮件设置失败"}), 500
@admin_api_bp.route("/smtp/configs", methods=["GET"])
@admin_required
def get_smtp_configs_api():
"""List all SMTP configurations"""
try:
configs = email_service.get_smtp_configs(include_password=False)
return jsonify(configs)
except Exception as e:
logger.error(f"获取SMTP配置失败: {e}")
return jsonify({"error": "获取SMTP配置失败"}), 500
@admin_api_bp.route("/smtp/configs", methods=["POST"])
@admin_required
def create_smtp_config_api():
"""Create an SMTP configuration"""
try:
data = request.json or {}
if not data.get("host"):
return jsonify({"error": "SMTP服务器地址不能为空"}), 400
if not data.get("username"):
return jsonify({"error": "SMTP用户名不能为空"}), 400
config_id = email_service.create_smtp_config(data)
return jsonify({"success": True, "id": config_id})
except Exception as e:
logger.error(f"创建SMTP配置失败: {e}")
return jsonify({"error": "创建SMTP配置失败"}), 500
@admin_api_bp.route("/smtp/configs/<int:config_id>", methods=["GET"])
@admin_required
def get_smtp_config_api(config_id):
"""Get a single SMTP configuration's details"""
try:
config_data = email_service.get_smtp_config(config_id, include_password=False)
if not config_data:
return jsonify({"error": "配置不存在"}), 404
return jsonify(config_data)
except Exception as e:
logger.error(f"获取SMTP配置失败: {e}")
return jsonify({"error": "获取SMTP配置失败"}), 500
@admin_api_bp.route("/smtp/configs/<int:config_id>", methods=["PUT"])
@admin_required
def update_smtp_config_api(config_id):
"""Update an SMTP configuration"""
try:
data = request.json or {}
if email_service.update_smtp_config(config_id, data):
return jsonify({"success": True})
return jsonify({"error": "更新失败"}), 400
except Exception as e:
logger.error(f"更新SMTP配置失败: {e}")
return jsonify({"error": "更新SMTP配置失败"}), 500
@admin_api_bp.route("/smtp/configs/<int:config_id>", methods=["DELETE"])
@admin_required
def delete_smtp_config_api(config_id):
"""Delete an SMTP configuration"""
try:
if email_service.delete_smtp_config(config_id):
return jsonify({"success": True})
return jsonify({"error": "删除失败"}), 400
except Exception as e:
logger.error(f"删除SMTP配置失败: {e}")
return jsonify({"error": "删除SMTP配置失败"}), 500
@admin_api_bp.route("/smtp/configs/<int:config_id>/test", methods=["POST"])
@admin_required
def test_smtp_config_api(config_id):
"""Test an SMTP configuration"""
try:
data = request.json or {}
test_email = str(data.get("email", "") or "").strip()
if not test_email:
return jsonify({"error": "请提供测试邮箱"}), 400
is_valid, error_msg = validate_email(test_email)
if not is_valid:
return jsonify({"error": error_msg}), 400
result = email_service.test_smtp_config(config_id, test_email)
return jsonify(result)
except Exception as e:
logger.error(f"测试SMTP配置失败: {e}")
return jsonify({"success": False, "error": "测试SMTP配置失败"}), 500
@admin_api_bp.route("/smtp/configs/<int:config_id>/primary", methods=["POST"])
@admin_required
def set_primary_smtp_config_api(config_id):
"""Set the primary SMTP configuration"""
try:
if email_service.set_primary_smtp_config(config_id):
return jsonify({"success": True})
return jsonify({"error": "设置失败"}), 400
except Exception as e:
logger.error(f"设置主SMTP配置失败: {e}")
return jsonify({"error": "设置主SMTP配置失败"}), 500
@admin_api_bp.route("/smtp/configs/primary/clear", methods=["POST"])
@admin_required
def clear_primary_smtp_config_api():
"""Clear the primary SMTP configuration"""
try:
email_service.clear_primary_smtp_config()
return jsonify({"success": True})
except Exception as e:
logger.error(f"取消主SMTP配置失败: {e}")
return jsonify({"error": "取消主SMTP配置失败"}), 500
@admin_api_bp.route("/email/stats", methods=["GET"])
@admin_required
def get_email_stats_api():
"""Get email sending statistics"""
try:
stats = email_service.get_email_stats()
return jsonify(stats)
except Exception as e:
logger.error(f"获取邮件统计失败: {e}")
return jsonify({"error": "获取邮件统计失败"}), 500
@admin_api_bp.route("/email/logs", methods=["GET"])
@admin_required
def get_email_logs_api():
"""Get email sending logs"""
try:
page = request.args.get("page", 1, type=int)
page_size = request.args.get("page_size", 20, type=int)
email_type = request.args.get("type", None)
status = request.args.get("status", None)
page_size = min(max(page_size, 10), 100)
result = email_service.get_email_logs(page, page_size, email_type, status)
return jsonify(result)
except Exception as e:
logger.error(f"获取邮件日志失败: {e}")
return jsonify({"error": "获取邮件日志失败"}), 500
@admin_api_bp.route("/email/logs/cleanup", methods=["POST"])
@admin_required
def cleanup_email_logs_api():
"""Clean up expired email logs"""
try:
data = request.json or {}
days = data.get("days", 30)
days = min(max(days, 7), 365)
deleted = email_service.cleanup_email_logs(days)
return jsonify({"success": True, "deleted": deleted})
except Exception as e:
logger.error(f"清理邮件日志失败: {e}")
return jsonify({"error": "清理邮件日志失败"}), 500

View File

@@ -1,58 +0,0 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from __future__ import annotations
import database
from flask import jsonify, request
from routes.admin_api import admin_api_bp
from routes.decorators import admin_required
@admin_api_bp.route("/feedbacks", methods=["GET"])
@admin_required
def get_all_feedbacks():
"""Admin: list all feedback"""
status = request.args.get("status")
try:
limit = int(request.args.get("limit", 100))
offset = int(request.args.get("offset", 0))
limit = min(max(1, limit), 1000)
offset = max(0, offset)
except (ValueError, TypeError):
return jsonify({"error": "无效的分页参数"}), 400
feedbacks = database.get_bug_feedbacks(limit=limit, offset=offset, status_filter=status)
stats = database.get_feedback_stats()
return jsonify({"feedbacks": feedbacks, "stats": stats})
@admin_api_bp.route("/feedbacks/<int:feedback_id>/reply", methods=["POST"])
@admin_required
def reply_to_feedback(feedback_id):
"""Admin: reply to feedback"""
data = request.get_json() or {}
reply = (data.get("reply") or "").strip()
if not reply:
return jsonify({"error": "回复内容不能为空"}), 400
if database.reply_feedback(feedback_id, reply):
return jsonify({"message": "回复成功"})
return jsonify({"error": "反馈不存在"}), 404
@admin_api_bp.route("/feedbacks/<int:feedback_id>/close", methods=["POST"])
@admin_required
def close_feedback_api(feedback_id):
"""Admin: close feedback"""
if database.close_feedback(feedback_id):
return jsonify({"message": "已关闭"})
return jsonify({"error": "反馈不存在"}), 404
@admin_api_bp.route("/feedbacks/<int:feedback_id>", methods=["DELETE"])
@admin_required
def delete_feedback_api(feedback_id):
"""Admin: delete feedback"""
if database.delete_feedback(feedback_id):
return jsonify({"message": "已删除"})
return jsonify({"error": "反馈不存在"}), 404


@@ -1,353 +0,0 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from __future__ import annotations
import os
import socket
import threading
import time
from datetime import datetime
import database
from app_logger import get_logger
from flask import jsonify, session
from routes.admin_api import admin_api_bp
from routes.decorators import admin_required
from services.request_metrics import get_request_metrics_snapshot
from services.slow_sql_metrics import get_slow_sql_metrics_snapshot
from services.time_utils import BEIJING_TZ, get_beijing_now
logger = get_logger("app")
_ADMIN_STATS_CACHE_TTL = max(1.0, float(os.environ.get("ADMIN_STATS_CACHE_TTL_SECONDS", "5")))
_admin_stats_cache: dict[str, object] = {"expires_at_monotonic": 0.0, "data": None}
_admin_stats_cache_lock = threading.Lock()
_DOCKER_STATS_CACHE_TTL = max(2.0, float(os.environ.get("ADMIN_DOCKER_STATS_CACHE_TTL_SECONDS", "5")))
_docker_stats_cache: dict[str, object] = {"expires_at_monotonic": 0.0, "data": None}
_docker_stats_cache_lock = threading.Lock()
_REQUEST_METRICS_CACHE_TTL = max(1.0, float(os.environ.get("ADMIN_REQUEST_METRICS_CACHE_TTL_SECONDS", "3")))
_request_metrics_cache: dict[str, object] = {"expires_at_monotonic": 0.0, "data": None}
_request_metrics_cache_lock = threading.Lock()
_SLOW_SQL_METRICS_CACHE_TTL = max(1.0, float(os.environ.get("ADMIN_SLOW_SQL_METRICS_CACHE_TTL_SECONDS", "3")))
_slow_sql_metrics_cache: dict[str, object] = {"expires_at_monotonic": 0.0, "data": None}
_slow_sql_metrics_cache_lock = threading.Lock()
def _get_system_stats_cached() -> dict:
now = time.monotonic()
with _admin_stats_cache_lock:
expires_at = float(_admin_stats_cache.get("expires_at_monotonic") or 0.0)
cached_data = _admin_stats_cache.get("data")
if isinstance(cached_data, dict) and now < expires_at:
return dict(cached_data)
fresh_data = database.get_system_stats() or {}
with _admin_stats_cache_lock:
_admin_stats_cache["data"] = dict(fresh_data)
_admin_stats_cache["expires_at_monotonic"] = now + _ADMIN_STATS_CACHE_TTL
return dict(fresh_data)
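The caching helpers in this module all share one shape: check the cache under a lock, compute the fresh value outside the lock, then write it back under the lock again. A generic sketch of that TTL pattern for a single cached value (the class name is illustrative, not from this codebase):

```python
import threading
import time

class TTLCache:
    """Single-value TTL cache. Reads and writes are guarded by a lock, but the
    (possibly slow) producer runs OUTSIDE the lock so it never blocks readers."""

    def __init__(self, producer, ttl_seconds: float):
        self._producer = producer
        self._ttl = ttl_seconds
        self._lock = threading.Lock()
        self._data = None
        self._expires_at = 0.0

    def get(self):
        now = time.monotonic()           # monotonic: immune to wall-clock jumps
        with self._lock:
            if self._data is not None and now < self._expires_at:
                return dict(self._data)  # return a copy so callers can't mutate the cache
        fresh = self._producer() or {}   # computed without holding the lock
        with self._lock:
            self._data = dict(fresh)
            self._expires_at = now + self._ttl
        return dict(fresh)

calls = []
cache = TTLCache(lambda: calls.append(1) or {"n": len(calls)}, ttl_seconds=60)
print(cache.get(), cache.get())  # producer runs once: {'n': 1} {'n': 1}
```

As in the helpers above, two threads racing past an expired entry may both invoke the producer; the pattern trades that occasional duplicate work for never serializing readers behind a slow refresh.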
def _get_request_metrics_cached() -> dict:
now = time.monotonic()
with _request_metrics_cache_lock:
expires_at = float(_request_metrics_cache.get("expires_at_monotonic") or 0.0)
cached_data = _request_metrics_cache.get("data")
if isinstance(cached_data, dict) and now < expires_at:
return dict(cached_data)
fresh_data = get_request_metrics_snapshot() or {}
with _request_metrics_cache_lock:
_request_metrics_cache["data"] = dict(fresh_data)
_request_metrics_cache["expires_at_monotonic"] = now + _REQUEST_METRICS_CACHE_TTL
return dict(fresh_data)
def _get_slow_sql_metrics_cached() -> dict:
now = time.monotonic()
with _slow_sql_metrics_cache_lock:
expires_at = float(_slow_sql_metrics_cache.get("expires_at_monotonic") or 0.0)
cached_data = _slow_sql_metrics_cache.get("data")
if isinstance(cached_data, dict) and now < expires_at:
return dict(cached_data)
fresh_data = get_slow_sql_metrics_snapshot() or {}
with _slow_sql_metrics_cache_lock:
_slow_sql_metrics_cache["data"] = dict(fresh_data)
_slow_sql_metrics_cache["expires_at_monotonic"] = now + _SLOW_SQL_METRICS_CACHE_TTL
return dict(fresh_data)
@admin_api_bp.route("/stats", methods=["GET"])
@admin_required
def get_system_stats():
"""Get system statistics"""
stats = _get_system_stats_cached()
stats["admin_username"] = session.get("admin_username", "admin")
return jsonify(stats)
@admin_api_bp.route("/request_metrics", methods=["GET"])
@admin_required
def get_request_metrics():
"""Get request-level monitoring metrics"""
try:
metrics = _get_request_metrics_cached()
return jsonify(metrics)
except Exception as e:
logger.exception(f"获取请求级监控指标失败: {e}")
return jsonify({"error": "获取请求级监控指标失败"}), 500
@admin_api_bp.route("/slow_sql_metrics", methods=["GET"])
@admin_required
def get_slow_sql_metrics():
"""Get slow-SQL monitoring metrics"""
try:
metrics = _get_slow_sql_metrics_cached()
return jsonify(metrics)
except Exception as e:
logger.exception(f"获取慢 SQL 监控指标失败: {e}")
return jsonify({"error": "获取慢 SQL 监控指标失败"}), 500
@admin_api_bp.route("/browser_pool/stats", methods=["GET"])
@admin_required
def get_browser_pool_stats():
"""Get screenshot worker-pool status"""
try:
from browser_pool_worker import get_browser_worker_pool
pool = get_browser_worker_pool()
stats = pool.get_stats() or {}
worker_details = []
for worker in stats.get("workers") or []:
last_ts = float(worker.get("last_active_ts") or 0)
last_active_at = None
if last_ts > 0:
try:
last_active_at = datetime.fromtimestamp(last_ts, tz=BEIJING_TZ).strftime("%Y-%m-%d %H:%M:%S")
except Exception:
last_active_at = None
created_ts = worker.get("browser_created_at")
created_at = None
if created_ts:
try:
created_at = datetime.fromtimestamp(float(created_ts), tz=BEIJING_TZ).strftime("%Y-%m-%d %H:%M:%S")
except Exception:
created_at = None
worker_details.append(
{
"worker_id": worker.get("worker_id"),
"idle": bool(worker.get("idle")),
"has_browser": bool(worker.get("has_browser")),
"total_tasks": int(worker.get("total_tasks") or 0),
"failed_tasks": int(worker.get("failed_tasks") or 0),
"browser_use_count": int(worker.get("browser_use_count") or 0),
"browser_created_at": created_at,
"browser_created_ts": created_ts,
"last_active_at": last_active_at,
"last_active_ts": last_ts,
"thread_alive": bool(worker.get("thread_alive")),
}
)
total_workers = len(worker_details) if worker_details else int(stats.get("pool_size") or 0)
return jsonify(
{
"total_workers": total_workers,
"active_workers": int(stats.get("busy_workers") or 0),
"idle_workers": int(stats.get("idle_workers") or 0),
"queue_size": int(stats.get("queue_size") or 0),
"workers": worker_details,
"summary": {
"total_tasks": int(stats.get("total_tasks") or 0),
"failed_tasks": int(stats.get("failed_tasks") or 0),
"success_rate": stats.get("success_rate"),
},
"server_time_cst": get_beijing_now().strftime("%Y-%m-%d %H:%M:%S"),
}
)
except Exception as e:
logger.exception(f"[AdminAPI] 获取截图线程池状态失败: {e}")
return jsonify({"error": "获取截图线程池状态失败"}), 500
def _format_duration(seconds: int) -> str:
total = max(0, int(seconds or 0))
days = total // 86400
hours = (total % 86400) // 3600
minutes = (total % 3600) // 60
if days > 0:
return f"{days}{hours}小时{minutes}分钟"
if hours > 0:
return f"{hours}小时{minutes}分钟"
return f"{minutes}分钟"
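`_format_duration` buckets a second count into days/hours/minutes and drops leading zero units. A standalone restatement with sample inputs (`format_duration` mirrors the helper above using `divmod` for the bucketing):

```python
def format_duration(seconds: int) -> str:
    """Render seconds as Chinese day/hour/minute units, omitting leading zero buckets."""
    total = max(0, int(seconds or 0))      # clamp negatives and None to 0
    days, rem = divmod(total, 86400)
    hours, rem = divmod(rem, 3600)
    minutes = rem // 60
    if days > 0:
        return f"{days}天{hours}小时{minutes}分钟"
    if hours > 0:
        return f"{hours}小时{minutes}分钟"
    return f"{minutes}分钟"

print(format_duration(90061))  # 1天1小时1分钟
print(format_duration(3700))   # 1小时1分钟
print(format_duration(59))     # 0分钟
```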
def _fill_host_service_stats(docker_status: dict) -> None:
import psutil
process = psutil.Process(os.getpid())
memory_info = process.memory_info()
virtual_memory = psutil.virtual_memory()
rss_bytes = float(memory_info.rss or 0)
total_bytes = float(virtual_memory.total or 0)
memory_percent = (rss_bytes / total_bytes * 100.0) if total_bytes > 0 else 0.0
docker_status.update(
{
"running": True,
"status": "Host Service",
"container_name": f"host:{socket.gethostname()}",
"uptime": _format_duration(int(time.time() - float(process.create_time() or time.time()))),
"memory_usage": f"{rss_bytes / 1024 / 1024:.2f} MB",
"memory_limit": f"{total_bytes / 1024 / 1024 / 1024:.2f} GB" if total_bytes > 0 else "N/A",
"memory_percent": f"{memory_percent:.2f}%",
"cpu_percent": f"{max(0.0, float(process.cpu_percent(interval=0.1))):.2f}%",
}
)
@admin_api_bp.route("/docker_stats", methods=["GET"])
@admin_required
def get_docker_stats():
"""Get container runtime status (falls back to the current service process when not containerized)"""
now = time.monotonic()
with _docker_stats_cache_lock:
expires_at = float(_docker_stats_cache.get("expires_at_monotonic") or 0.0)
cached_data = _docker_stats_cache.get("data")
if isinstance(cached_data, dict) and now < expires_at:
return jsonify(dict(cached_data))
docker_status = {
"running": False,
"container_name": "N/A",
"uptime": "N/A",
"memory_usage": "N/A",
"memory_limit": "N/A",
"memory_percent": "N/A",
"cpu_percent": "N/A",
"status": "Unknown",
}
try:
if os.path.exists("/.dockerenv"):
docker_status["running"] = True
try:
with open("/etc/hostname", "r", encoding="utf-8") as f:
docker_status["container_name"] = f.read().strip() or "N/A"
except Exception as e:
logger.debug(f"读取容器名称失败: {e}")
try:
if os.path.exists("/sys/fs/cgroup/memory.current"):
with open("/sys/fs/cgroup/memory.current", "r", encoding="utf-8") as f:
mem_total = int(f.read().strip())
cache = 0
if os.path.exists("/sys/fs/cgroup/memory.stat"):
with open("/sys/fs/cgroup/memory.stat", "r", encoding="utf-8") as f:
for line in f:
if line.startswith("inactive_file "):
cache = int(line.split()[1])
break
mem_bytes = max(0, mem_total - cache)
docker_status["memory_usage"] = f"{mem_bytes / 1024 / 1024:.2f} MB"
if os.path.exists("/sys/fs/cgroup/memory.max"):
with open("/sys/fs/cgroup/memory.max", "r", encoding="utf-8") as f:
limit_str = f.read().strip()
if limit_str != "max":
limit_bytes = int(limit_str)
if limit_bytes > 0:
docker_status["memory_limit"] = f"{limit_bytes / 1024 / 1024 / 1024:.2f} GB"
docker_status["memory_percent"] = f"{mem_bytes / limit_bytes * 100:.2f}%"
elif os.path.exists("/sys/fs/cgroup/memory/memory.usage_in_bytes"):
with open("/sys/fs/cgroup/memory/memory.usage_in_bytes", "r", encoding="utf-8") as f:
mem_bytes = int(f.read().strip())
docker_status["memory_usage"] = f"{mem_bytes / 1024 / 1024:.2f} MB"
with open("/sys/fs/cgroup/memory/memory.limit_in_bytes", "r", encoding="utf-8") as f:
limit_bytes = int(f.read().strip())
if 0 < limit_bytes < 1e18:
docker_status["memory_limit"] = f"{limit_bytes / 1024 / 1024 / 1024:.2f} GB"
docker_status["memory_percent"] = f"{mem_bytes / limit_bytes * 100:.2f}%"
except Exception as e:
logger.debug(f"读取容器内存信息失败: {e}")
try:
if os.path.exists("/sys/fs/cgroup/cpu.stat"):
usage1 = 0
with open("/sys/fs/cgroup/cpu.stat", "r", encoding="utf-8") as f:
for line in f:
if line.startswith("usage_usec"):
usage1 = int(line.split()[1])
break
time.sleep(0.1)
usage2 = 0
with open("/sys/fs/cgroup/cpu.stat", "r", encoding="utf-8") as f:
for line in f:
if line.startswith("usage_usec"):
usage2 = int(line.split()[1])
break
cpu_percent = (usage2 - usage1) / 0.1 / 1e6 * 100
docker_status["cpu_percent"] = f"{max(0.0, cpu_percent):.2f}%"
elif os.path.exists("/sys/fs/cgroup/cpu/cpuacct.usage"):
with open("/sys/fs/cgroup/cpu/cpuacct.usage", "r", encoding="utf-8") as f:
usage1 = int(f.read().strip())
time.sleep(0.1)
with open("/sys/fs/cgroup/cpu/cpuacct.usage", "r", encoding="utf-8") as f:
usage2 = int(f.read().strip())
cpu_percent = (usage2 - usage1) / 0.1 / 1e9 * 100
docker_status["cpu_percent"] = f"{max(0.0, cpu_percent):.2f}%"
except Exception as e:
logger.debug(f"读取容器CPU信息失败: {e}")
try:
with open("/proc/uptime", "r", encoding="utf-8") as f:
system_uptime = float(f.read().split()[0])
with open("/proc/1/stat", "r", encoding="utf-8") as f:
stat = f.read().split()
starttime_jiffies = int(stat[21])
clk_tck = os.sysconf(os.sysconf_names["SC_CLK_TCK"])
uptime_seconds = int(system_uptime - (starttime_jiffies / clk_tck))
docker_status["uptime"] = _format_duration(uptime_seconds)
except Exception as e:
logger.debug(f"获取容器运行时间失败: {e}")
docker_status["status"] = "Running"
else:
_fill_host_service_stats(docker_status)
except Exception as e:
logger.exception(f"获取容器/服务状态失败: {e}")
docker_status["status"] = f"Error: {e}"
with _docker_stats_cache_lock:
_docker_stats_cache["data"] = dict(docker_status)
_docker_stats_cache["expires_at_monotonic"] = now + _DOCKER_STATS_CACHE_TTL
return jsonify(docker_status)
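The cgroup branches above estimate CPU usage by sampling cumulative CPU time twice, 0.1 s apart, and dividing the delta by the wall-clock interval. The arithmetic in isolation, for the cgroup v2 `usage_usec` counter (the function name is illustrative):

```python
def cpu_percent_from_samples(usage1_usec: int, usage2_usec: int, interval_s: float) -> float:
    """CPU percent over a sampling window: (CPU-time delta / wall-clock time) * 100.

    usage_usec is the cumulative microsecond counter from /sys/fs/cgroup/cpu.stat
    (cgroup v2); values above 100 mean more than one core was busy.
    """
    delta_s = (usage2_usec - usage1_usec) / 1e6  # microseconds -> seconds of CPU time
    return max(0.0, delta_s / interval_s * 100.0)  # clamp counter-reset artifacts to 0

# 50,000 usec of CPU time consumed during a 0.1 s window -> 50% of one core
print(cpu_percent_from_samples(1_000_000, 1_050_000, 0.1))  # 50.0
```

The cgroup v1 branch in the handler is identical except the counter (`cpuacct.usage`) is in nanoseconds, hence its `1e9` divisor.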


@@ -1,244 +0,0 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from __future__ import annotations
import threading
import time
from datetime import datetime
import database
import requests
from app_logger import get_logger
from app_security import is_safe_outbound_url
from flask import jsonify, request
from routes.admin_api import admin_api_bp
from routes.decorators import admin_required
from services.scheduler import run_scheduled_task
from services.time_utils import BEIJING_TZ, get_beijing_now
logger = get_logger("app")
_server_cpu_percent_lock = threading.Lock()
_server_cpu_percent_last: float | None = None
_server_cpu_percent_last_ts = 0.0
def _get_server_cpu_percent() -> float:
import psutil
global _server_cpu_percent_last, _server_cpu_percent_last_ts
now = time.time()
with _server_cpu_percent_lock:
if _server_cpu_percent_last is not None and (now - _server_cpu_percent_last_ts) < 0.5:
return _server_cpu_percent_last
try:
if _server_cpu_percent_last is None:
cpu_percent = float(psutil.cpu_percent(interval=0.1))
else:
cpu_percent = float(psutil.cpu_percent(interval=None))
except Exception:
cpu_percent = float(_server_cpu_percent_last or 0.0)
if cpu_percent < 0:
cpu_percent = 0.0
_server_cpu_percent_last = cpu_percent
_server_cpu_percent_last_ts = now
return cpu_percent
@admin_api_bp.route("/kdocs/status", methods=["GET"])
@admin_required
def get_kdocs_status_api():
"""Get KDocs upload status"""
try:
from services.kdocs_uploader import get_kdocs_uploader
uploader = get_kdocs_uploader()
status = uploader.get_status()
live = str(request.args.get("live", "")).lower() in ("1", "true", "yes")
# Only do a live login check when live=1 is explicit; return cached status by default to avoid blocking page loads
should_live_check = live
if should_live_check:
live_status = uploader.refresh_login_status()
if live_status.get("success"):
logged_in = bool(live_status.get("logged_in"))
status["logged_in"] = logged_in
status["last_login_ok"] = logged_in
status["login_required"] = not logged_in
if live_status.get("error"):
status["last_error"] = live_status.get("error")
else:
    last_login_ok = status.get("last_login_ok")
    status["logged_in"] = True if last_login_ok else (False if last_login_ok is False else None)
if status.get("last_login_ok") is True and status.get("last_error") == "操作超时":
status["last_error"] = None
return jsonify(status)
except Exception as e:
return jsonify({"error": f"获取状态失败: {e}"}), 500
@admin_api_bp.route("/kdocs/qr", methods=["POST"])
@admin_required
def get_kdocs_qr_api():
"""Get the KDocs login QR code"""
try:
from services.kdocs_uploader import get_kdocs_uploader
uploader = get_kdocs_uploader()
data = request.get_json(silent=True) or {}
force = bool(data.get("force"))
if not force:
force = str(request.args.get("force", "")).lower() in ("1", "true", "yes")
result = uploader.request_qr(force=force)
if not result.get("success"):
return jsonify({"error": result.get("error", "获取二维码失败")}), 400
return jsonify(result)
except Exception as e:
return jsonify({"error": f"获取二维码失败: {e}"}), 500
@admin_api_bp.route("/kdocs/clear-login", methods=["POST"])
@admin_required
def clear_kdocs_login_api():
"""Clear the KDocs login state"""
try:
from services.kdocs_uploader import get_kdocs_uploader
uploader = get_kdocs_uploader()
result = uploader.clear_login()
if not result.get("success"):
return jsonify({"error": result.get("error", "清除失败")}), 400
return jsonify({"success": True})
except Exception as e:
return jsonify({"error": f"清除失败: {e}"}), 500
@admin_api_bp.route("/schedule/execute", methods=["POST"])
@admin_required
def execute_schedule_now():
"""Run the scheduled task immediately (ignoring the scheduled time and weekday restrictions)"""
try:
threading.Thread(target=run_scheduled_task, args=(True,), daemon=True).start()
logger.info("[立即执行定时任务] 管理员手动触发定时任务执行(跳过星期检查)")
return jsonify({"message": "定时任务已开始执行,请查看任务列表获取进度"})
except Exception as e:
logger.error(f"[立即执行定时任务] 启动失败: {str(e)}")
return jsonify({"error": f"启动失败: {str(e)}"}), 500
@admin_api_bp.route("/proxy/config", methods=["GET"])
@admin_required
def get_proxy_config_api():
"""Get proxy configuration"""
config_data = database.get_system_config()
return jsonify(
{
"proxy_enabled": config_data.get("proxy_enabled", 0),
"proxy_api_url": config_data.get("proxy_api_url", ""),
"proxy_expire_minutes": config_data.get("proxy_expire_minutes", 3),
}
)
@admin_api_bp.route("/proxy/config", methods=["POST"])
@admin_required
def update_proxy_config_api():
"""Update proxy configuration"""
data = request.json or {}
proxy_enabled = data.get("proxy_enabled")
proxy_api_url = (data.get("proxy_api_url", "") or "").strip()
proxy_expire_minutes = data.get("proxy_expire_minutes")
if proxy_enabled is not None and proxy_enabled not in [0, 1]:
return jsonify({"error": "proxy_enabled必须是0或1"}), 400
if proxy_expire_minutes is not None:
if not isinstance(proxy_expire_minutes, int) or proxy_expire_minutes < 1:
return jsonify({"error": "代理有效期必须是大于0的整数"}), 400
if database.update_system_config(
proxy_enabled=proxy_enabled,
proxy_api_url=proxy_api_url,
proxy_expire_minutes=proxy_expire_minutes,
):
return jsonify({"message": "代理配置已更新"})
return jsonify({"error": "更新失败"}), 400
@admin_api_bp.route("/proxy/test", methods=["POST"])
@admin_required
def test_proxy_api():
"""Test the proxy connection"""
data = request.json or {}
api_url = (data.get("api_url") or "").strip()
if not api_url:
return jsonify({"error": "请提供API地址"}), 400
if not is_safe_outbound_url(api_url):
return jsonify({"error": "API地址不可用或不安全"}), 400
try:
response = requests.get(api_url, timeout=10)
if response.status_code == 200:
ip_port = response.text.strip()
if ip_port and ":" in ip_port:
return jsonify({"success": True, "proxy": ip_port, "message": f"代理获取成功: {ip_port}"})
return jsonify({"success": False, "message": f"代理格式错误: {ip_port}"}), 400
return jsonify({"success": False, "message": f"HTTP错误: {response.status_code}"}), 400
except Exception as e:
return jsonify({"success": False, "message": f"连接失败: {str(e)}"}), 500
@admin_api_bp.route("/server/info", methods=["GET"])
@admin_required
def get_server_info_api():
"""Get server information"""
import psutil
cpu_percent = _get_server_cpu_percent()
memory = psutil.virtual_memory()
memory_total = f"{memory.total / (1024**3):.1f}GB"
memory_used = f"{memory.used / (1024**3):.1f}GB"
memory_percent = memory.percent
disk = psutil.disk_usage("/")
disk_total = f"{disk.total / (1024**3):.1f}GB"
disk_used = f"{disk.used / (1024**3):.1f}GB"
disk_percent = disk.percent
try:
process = psutil.Process()
process_start_at = datetime.fromtimestamp(process.create_time(), tz=BEIJING_TZ)
uptime_delta = get_beijing_now() - process_start_at
except Exception:
boot_time = datetime.fromtimestamp(psutil.boot_time(), tz=BEIJING_TZ)
uptime_delta = get_beijing_now() - boot_time
uptime_seconds = max(0, int(uptime_delta.total_seconds()))
days = uptime_seconds // 86400
hours = (uptime_seconds % 86400) // 3600
minutes = (uptime_seconds % 3600) // 60
if days > 0:
uptime = f"{days}{hours}小时"
elif hours > 0:
uptime = f"{hours}小时{minutes}分钟"
else:
uptime = f"{minutes}分钟"
return jsonify(
{
"cpu_percent": cpu_percent,
"memory_total": memory_total,
"memory_used": memory_used,
"memory_percent": memory_percent,
"disk_total": disk_total,
"disk_used": disk_used,
"disk_percent": disk_percent,
"uptime": uptime,
}
)


@@ -62,19 +62,6 @@ def _parse_bool(value: Any) -> bool:
return text in {"1", "true", "yes", "y", "on"} return text in {"1", "true", "yes", "y", "on"}
def _parse_int(value: Any, *, default: int | None = None, min_value: int | None = None) -> int | None:
try:
parsed = int(value)
except Exception:
parsed = default
if parsed is None:
return None
if min_value is not None:
parsed = max(int(min_value), parsed)
return parsed
def _sanitize_threat_event(event: dict) -> dict: def _sanitize_threat_event(event: dict) -> dict:
return { return {
"id": event.get("id"), "id": event.get("id"),
@@ -212,7 +199,10 @@ def ban_ip():
if not reason: if not reason:
return jsonify({"error": "reason不能为空"}), 400 return jsonify({"error": "reason不能为空"}), 400
duration_hours = _parse_int(duration_hours_raw, default=24, min_value=1) or 24 try:
duration_hours = max(1, int(duration_hours_raw))
except Exception:
duration_hours = 24
ok = blacklist.ban_ip(ip, reason, duration_hours=duration_hours, permanent=permanent) ok = blacklist.ban_ip(ip, reason, duration_hours=duration_hours, permanent=permanent)
if not ok: if not ok:
@@ -245,14 +235,20 @@ def ban_user():
     duration_hours_raw = data.get("duration_hours", 24)
     permanent = _parse_bool(data.get("permanent", False))
 
-    user_id = _parse_int(user_id_raw)
+    try:
+        user_id = int(user_id_raw)
+    except Exception:
+        user_id = None
     if user_id is None:
         return jsonify({"error": "user_id不能为空"}), 400
     if not reason:
         return jsonify({"error": "reason不能为空"}), 400
 
-    duration_hours = _parse_int(duration_hours_raw, default=24, min_value=1) or 24
+    try:
+        duration_hours = max(1, int(duration_hours_raw))
+    except Exception:
+        duration_hours = 24
 
     ok = blacklist._ban_user_internal(user_id, reason=reason, duration_hours=duration_hours, permanent=permanent)
     if not ok:
@@ -266,7 +262,10 @@ def unban_user():
     """解除用户封禁"""
     data = _parse_json()
     user_id_raw = data.get("user_id")
-    user_id = _parse_int(user_id_raw)
+    try:
+        user_id = int(user_id_raw)
+    except Exception:
+        user_id = None
     if user_id is None:
         return jsonify({"error": "user_id不能为空"}), 400


@@ -1,248 +0,0 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from __future__ import annotations

import database
from app_logger import get_logger
from app_security import is_safe_outbound_url, validate_email
from flask import jsonify, request
from routes.admin_api import admin_api_bp
from routes.decorators import admin_required
from services.browse_types import BROWSE_TYPE_SHOULD_READ, validate_browse_type
from services.tasks import get_task_scheduler

logger = get_logger("app")


@admin_api_bp.route("/system/config", methods=["GET"])
@admin_required
def get_system_config_api():
    """获取系统配置"""
    return jsonify(database.get_system_config())


@admin_api_bp.route("/system/config", methods=["POST"])
@admin_required
def update_system_config_api():
    """更新系统配置"""
    data = request.json or {}
    max_concurrent = data.get("max_concurrent_global")
    schedule_enabled = data.get("schedule_enabled")
    schedule_time = data.get("schedule_time")
    schedule_browse_type = data.get("schedule_browse_type")
    schedule_weekdays = data.get("schedule_weekdays")
    new_max_concurrent_per_account = data.get("max_concurrent_per_account")
    new_max_screenshot_concurrent = data.get("max_screenshot_concurrent")
    db_slow_query_ms = data.get("db_slow_query_ms")
    enable_screenshot = data.get("enable_screenshot")
    auto_approve_enabled = data.get("auto_approve_enabled")
    auto_approve_hourly_limit = data.get("auto_approve_hourly_limit")
    auto_approve_vip_days = data.get("auto_approve_vip_days")
    kdocs_enabled = data.get("kdocs_enabled")
    kdocs_doc_url = data.get("kdocs_doc_url")
    kdocs_default_unit = data.get("kdocs_default_unit")
    kdocs_sheet_name = data.get("kdocs_sheet_name")
    kdocs_sheet_index = data.get("kdocs_sheet_index")
    kdocs_unit_column = data.get("kdocs_unit_column")
    kdocs_image_column = data.get("kdocs_image_column")
    kdocs_admin_notify_enabled = data.get("kdocs_admin_notify_enabled")
    kdocs_admin_notify_email = data.get("kdocs_admin_notify_email")
    kdocs_row_start = data.get("kdocs_row_start")
    kdocs_row_end = data.get("kdocs_row_end")

    if max_concurrent is not None:
        if not isinstance(max_concurrent, int) or max_concurrent < 1:
            return jsonify({"error": "全局并发数必须大于0建议小型服务器2-5中型5-10大型10-20"}), 400
    if new_max_concurrent_per_account is not None:
        if not isinstance(new_max_concurrent_per_account, int) or new_max_concurrent_per_account < 1:
            return jsonify({"error": "单账号并发数必须大于0建议设为1避免同一用户任务相互影响"}), 400
    if new_max_screenshot_concurrent is not None:
        if not isinstance(new_max_screenshot_concurrent, int) or new_max_screenshot_concurrent < 1:
            return jsonify({"error": "截图并发数必须大于0建议根据服务器配置设置wkhtmltoimage 资源占用较低)"}), 400
    if db_slow_query_ms is not None:
        try:
            db_slow_query_ms = int(db_slow_query_ms)
        except (ValueError, TypeError):
            return jsonify({"error": "慢 SQL 阈值必须是数字(毫秒)"}), 400
        if db_slow_query_ms < 0 or db_slow_query_ms > 60000:
            return jsonify({"error": "慢 SQL 阈值范围应在 0-60000 毫秒之间"}), 400
    if enable_screenshot is not None:
        if isinstance(enable_screenshot, bool):
            enable_screenshot = 1 if enable_screenshot else 0
        if enable_screenshot not in (0, 1):
            return jsonify({"error": "截图开关必须是0或1"}), 400
    if schedule_time is not None:
        import re

        if not re.match(r"^([01]\d|2[0-3]):([0-5]\d)$", schedule_time):
            return jsonify({"error": "时间格式错误,应为 HH:MM"}), 400
    if schedule_browse_type is not None:
        normalized = validate_browse_type(schedule_browse_type, default=BROWSE_TYPE_SHOULD_READ)
        if not normalized:
            return jsonify({"error": "浏览类型无效"}), 400
        schedule_browse_type = normalized
    if schedule_weekdays is not None:
        try:
            days = [int(d.strip()) for d in schedule_weekdays.split(",") if d.strip()]
            if not all(1 <= d <= 7 for d in days):
                return jsonify({"error": "星期数字必须在1-7之间"}), 400
        except (ValueError, AttributeError):
            return jsonify({"error": "星期格式错误"}), 400
    if auto_approve_hourly_limit is not None:
        if not isinstance(auto_approve_hourly_limit, int) or auto_approve_hourly_limit < 1:
            return jsonify({"error": "每小时注册限制必须大于0"}), 400
    if auto_approve_vip_days is not None:
        if not isinstance(auto_approve_vip_days, int) or auto_approve_vip_days < 0:
            return jsonify({"error": "注册赠送VIP天数不能为负数"}), 400
    if kdocs_enabled is not None:
        if isinstance(kdocs_enabled, bool):
            kdocs_enabled = 1 if kdocs_enabled else 0
        if kdocs_enabled not in (0, 1):
            return jsonify({"error": "表格上传开关必须是0或1"}), 400
    if kdocs_doc_url is not None:
        kdocs_doc_url = str(kdocs_doc_url or "").strip()
        if kdocs_doc_url and not is_safe_outbound_url(kdocs_doc_url):
            return jsonify({"error": "文档链接格式不正确"}), 400
    if kdocs_default_unit is not None:
        kdocs_default_unit = str(kdocs_default_unit or "").strip()
        if len(kdocs_default_unit) > 50:
            return jsonify({"error": "默认县区长度不能超过50"}), 400
    if kdocs_sheet_name is not None:
        kdocs_sheet_name = str(kdocs_sheet_name or "").strip()
        if len(kdocs_sheet_name) > 50:
            return jsonify({"error": "Sheet名称长度不能超过50"}), 400
    if kdocs_sheet_index is not None:
        try:
            kdocs_sheet_index = int(kdocs_sheet_index)
        except Exception:
            return jsonify({"error": "Sheet序号必须是数字"}), 400
        if kdocs_sheet_index < 0:
            return jsonify({"error": "Sheet序号不能为负数"}), 400
    if kdocs_unit_column is not None:
        kdocs_unit_column = str(kdocs_unit_column or "").strip().upper()
        if not kdocs_unit_column:
            return jsonify({"error": "县区列不能为空"}), 400
        import re

        if not re.match(r"^[A-Z]{1,3}$", kdocs_unit_column):
            return jsonify({"error": "县区列格式错误"}), 400
    if kdocs_image_column is not None:
        kdocs_image_column = str(kdocs_image_column or "").strip().upper()
        if not kdocs_image_column:
            return jsonify({"error": "图片列不能为空"}), 400
        import re

        if not re.match(r"^[A-Z]{1,3}$", kdocs_image_column):
            return jsonify({"error": "图片列格式错误"}), 400
    if kdocs_admin_notify_enabled is not None:
        if isinstance(kdocs_admin_notify_enabled, bool):
            kdocs_admin_notify_enabled = 1 if kdocs_admin_notify_enabled else 0
        if kdocs_admin_notify_enabled not in (0, 1):
            return jsonify({"error": "管理员通知开关必须是0或1"}), 400
    if kdocs_admin_notify_email is not None:
        kdocs_admin_notify_email = str(kdocs_admin_notify_email or "").strip()
        if kdocs_admin_notify_email:
            is_valid, error_msg = validate_email(kdocs_admin_notify_email)
            if not is_valid:
                return jsonify({"error": error_msg}), 400
    if kdocs_row_start is not None:
        try:
            kdocs_row_start = int(kdocs_row_start)
        except (ValueError, TypeError):
            return jsonify({"error": "起始行必须是数字"}), 400
        if kdocs_row_start < 0:
            return jsonify({"error": "起始行不能为负数"}), 400
    if kdocs_row_end is not None:
        try:
            kdocs_row_end = int(kdocs_row_end)
        except (ValueError, TypeError):
            return jsonify({"error": "结束行必须是数字"}), 400
        if kdocs_row_end < 0:
            return jsonify({"error": "结束行不能为负数"}), 400

    old_config = database.get_system_config() or {}
    if not database.update_system_config(
        max_concurrent=max_concurrent,
        schedule_enabled=schedule_enabled,
        schedule_time=schedule_time,
        schedule_browse_type=schedule_browse_type,
        schedule_weekdays=schedule_weekdays,
        max_concurrent_per_account=new_max_concurrent_per_account,
        max_screenshot_concurrent=new_max_screenshot_concurrent,
        enable_screenshot=enable_screenshot,
        auto_approve_enabled=auto_approve_enabled,
        auto_approve_hourly_limit=auto_approve_hourly_limit,
        auto_approve_vip_days=auto_approve_vip_days,
        kdocs_enabled=kdocs_enabled,
        kdocs_doc_url=kdocs_doc_url,
        kdocs_default_unit=kdocs_default_unit,
        kdocs_sheet_name=kdocs_sheet_name,
        kdocs_sheet_index=kdocs_sheet_index,
        kdocs_unit_column=kdocs_unit_column,
        kdocs_image_column=kdocs_image_column,
        kdocs_admin_notify_enabled=kdocs_admin_notify_enabled,
        kdocs_admin_notify_email=kdocs_admin_notify_email,
        kdocs_row_start=kdocs_row_start,
        kdocs_row_end=kdocs_row_end,
        db_slow_query_ms=db_slow_query_ms,
    ):
        return jsonify({"error": "更新失败"}), 400

    try:
        new_config = database.get_system_config() or {}
        scheduler = get_task_scheduler()
        scheduler.update_limits(
            max_global=int(new_config.get("max_concurrent_global", old_config.get("max_concurrent_global", 2))),
            max_per_user=int(new_config.get("max_concurrent_per_account", old_config.get("max_concurrent_per_account", 1))),
        )
        try:
            import db_pool

            db_pool.configure_slow_query_runtime(threshold_ms=new_config.get("db_slow_query_ms"))
        except Exception as slow_sql_error:
            logger.warning(f"慢 SQL 运行时阈值更新失败: {slow_sql_error}")
        if new_max_screenshot_concurrent is not None:
            try:
                from browser_pool_worker import resize_browser_worker_pool

                if resize_browser_worker_pool(int(new_config.get("max_screenshot_concurrent", new_max_screenshot_concurrent))):
                    logger.info(f"截图线程池并发已更新为: {new_config.get('max_screenshot_concurrent')}")
            except Exception as pool_error:
                logger.warning(f"截图线程池并发更新失败: {pool_error}")
    except Exception:
        pass

    if max_concurrent is not None and max_concurrent != old_config.get("max_concurrent_global"):
        logger.info(f"全局并发数已更新为: {max_concurrent}")
    if new_max_concurrent_per_account is not None and new_max_concurrent_per_account != old_config.get("max_concurrent_per_account"):
        logger.info(f"单用户并发数已更新为: {new_max_concurrent_per_account}")
    if new_max_screenshot_concurrent is not None:
        logger.info(f"截图并发数已更新为: {new_max_screenshot_concurrent}")
    if db_slow_query_ms is not None:
        logger.info(f"慢 SQL 阈值已更新为: {db_slow_query_ms}ms")
    return jsonify({"message": "系统配置已更新"})
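The deleted module validates `schedule_time` against `^([01]\d|2[0-3]):([0-5]\d)$` and the kdocs spreadsheet columns against `^[A-Z]{1,3}$`. Both checks can be exercised in isolation; the wrapper function names below are illustrative, not part of the codebase:

```python
import re

TIME_RE = re.compile(r"^([01]\d|2[0-3]):([0-5]\d)$")  # HH:MM, 24-hour clock
COLUMN_RE = re.compile(r"^[A-Z]{1,3}$")               # spreadsheet column letters, A..ZZZ

def is_valid_schedule_time(text: str) -> bool:
    """True for a zero-padded 24-hour HH:MM string."""
    return bool(TIME_RE.match(text))

def is_valid_column(text: str) -> bool:
    """True for 1-3 column letters; normalizes case/whitespace like the route does."""
    return bool(COLUMN_RE.match(text.strip().upper()))
```

Because the route uppercases and strips before matching, `" aa "` is accepted as column `AA`, while `24:00` fails the hour alternation and is rejected.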


@@ -1,138 +0,0 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from __future__ import annotations

import database
from app_logger import get_logger
from flask import jsonify, request
from routes.admin_api import admin_api_bp
from routes.decorators import admin_required
from services.state import safe_iter_task_status_items
from services.tasks import get_task_scheduler

logger = get_logger("app")


def _parse_page_int(name: str, default: int, *, minimum: int, maximum: int) -> int:
    try:
        value = int(request.args.get(name, default))
        return max(minimum, min(value, maximum))
    except (ValueError, TypeError):
        return default


@admin_api_bp.route("/task/stats", methods=["GET"])
@admin_required
def get_task_stats_api():
    """获取任务统计数据"""
    date_filter = request.args.get("date")
    stats = database.get_task_stats(date_filter)
    return jsonify(stats)


@admin_api_bp.route("/task/running", methods=["GET"])
@admin_required
def get_running_tasks_api():
    """获取当前运行中和排队中的任务"""
    import time as time_mod

    current_time = time_mod.time()
    running = []
    queuing = []
    user_cache = {}
    for account_id, info in safe_iter_task_status_items():
        elapsed = int(current_time - info.get("start_time", current_time))
        info_user_id = info.get("user_id")
        if info_user_id not in user_cache:
            user_cache[info_user_id] = database.get_user_by_id(info_user_id)
        user = user_cache.get(info_user_id)
        user_username = user["username"] if user else "N/A"
        progress = info.get("progress", {"items": 0, "attachments": 0})
        task_info = {
            "account_id": account_id,
            "user_id": info.get("user_id"),
            "user_username": user_username,
            "username": info.get("username"),
            "browse_type": info.get("browse_type"),
            "source": info.get("source", "manual"),
            "detail_status": info.get("detail_status", "未知"),
            "progress_items": progress.get("items", 0),
            "progress_attachments": progress.get("attachments", 0),
            "elapsed_seconds": elapsed,
            "elapsed_display": f"{elapsed // 60}分{elapsed % 60}秒" if elapsed >= 60 else f"{elapsed}秒",
        }
        if info.get("status") == "运行中":
            running.append(task_info)
        else:
            queuing.append(task_info)
    running.sort(key=lambda x: x["elapsed_seconds"], reverse=True)
    queuing.sort(key=lambda x: x["elapsed_seconds"], reverse=True)
    try:
        max_concurrent = int(get_task_scheduler().max_global)
    except Exception:
        max_concurrent = int((database.get_system_config() or {}).get("max_concurrent_global", 2))
    return jsonify(
        {
            "running": running,
            "queuing": queuing,
            "running_count": len(running),
            "queuing_count": len(queuing),
            "max_concurrent": max_concurrent,
        }
    )


@admin_api_bp.route("/task/logs", methods=["GET"])
@admin_required
def get_task_logs_api():
    """获取任务日志列表(支持分页和多种筛选)"""
    limit = _parse_page_int("limit", 20, minimum=1, maximum=200)
    offset = _parse_page_int("offset", 0, minimum=0, maximum=10**9)
    date_filter = request.args.get("date")
    status_filter = request.args.get("status")
    source_filter = request.args.get("source")
    user_id_filter = request.args.get("user_id")
    account_filter = (request.args.get("account") or "").strip()
    if user_id_filter:
        try:
            user_id_filter = int(user_id_filter)
        except (ValueError, TypeError):
            user_id_filter = None
    try:
        result = database.get_task_logs(
            limit=limit,
            offset=offset,
            date_filter=date_filter,
            status_filter=status_filter,
            source_filter=source_filter,
            user_id_filter=user_id_filter,
            account_filter=account_filter if account_filter else None,
        )
        return jsonify(result)
    except Exception as e:
        logger.error(f"获取任务日志失败: {e}")
        return jsonify({"logs": [], "total": 0, "error": "查询失败"})


@admin_api_bp.route("/task/logs/clear", methods=["POST"])
@admin_required
def clear_old_task_logs_api():
    """清理旧的任务日志"""
    data = request.json or {}
    days = data.get("days", 30)
    if not isinstance(days, int) or days < 1:
        return jsonify({"error": "天数必须是大于0的整数"}), 400
    deleted_count = database.delete_old_task_logs(days)
    return jsonify({"message": f"已删除{days}天前的{deleted_count}条日志"})
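`_parse_page_int` above clamps pagination query parameters into a closed range instead of rejecting out-of-range values. The behaviour is easy to verify standalone; this sketch (with a hypothetical name, and a plain argument instead of `request.args`) mirrors its logic:

```python
def clamp_page_param(raw, default, minimum, maximum):
    """Parse a pagination parameter, clamping into [minimum, maximum].

    Invalid input degrades to the default rather than producing an error,
    matching _parse_page_int in the deleted module.
    """
    try:
        return max(minimum, min(int(raw), maximum))
    except (ValueError, TypeError):
        return default
```

So a client asking for `limit=500` on an endpoint capped at 200 silently gets 200, and a malformed value falls back to the route's default page size.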

routes/admin_api/update.py Normal file

@@ -0,0 +1,180 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from __future__ import annotations

import os
import uuid

from flask import jsonify, request, session
from routes.admin_api import admin_api_bp
from routes.decorators import admin_required
from services.time_utils import get_beijing_now
from services.update_files import (
    ensure_update_dirs,
    get_update_job_log_path,
    get_update_request_path,
    get_update_result_path,
    get_update_status_path,
    load_json_file,
    sanitize_job_id,
    tail_text_file,
    write_json_atomic,
)


def _request_ip() -> str:
    try:
        return request.headers.get("X-Forwarded-For", "").split(",")[0].strip() or request.remote_addr or ""
    except Exception:
        return ""


def _make_job_id(prefix: str = "upd") -> str:
    now_str = get_beijing_now().strftime("%Y%m%d_%H%M%S")
    rand = uuid.uuid4().hex[:8]
    return f"{prefix}_{now_str}_{rand}"


def _has_pending_request() -> bool:
    try:
        return os.path.exists(get_update_request_path())
    except Exception:
        return False


def _parse_bool_field(data: dict, key: str) -> bool | None:
    if not isinstance(data, dict) or key not in data:
        return None
    value = data.get(key)
    if isinstance(value, bool):
        return value
    if isinstance(value, int):
        if value in (0, 1):
            return bool(value)
        raise ValueError(f"{key} 必须是 0/1 或 true/false")
    if isinstance(value, str):
        text = value.strip().lower()
        if text in ("1", "true", "yes", "y", "on"):
            return True
        if text in ("0", "false", "no", "n", "off", ""):
            return False
        raise ValueError(f"{key} 必须是 0/1 或 true/false")
    if value is None:
        return None
    raise ValueError(f"{key} 必须是 0/1 或 true/false")


@admin_api_bp.route("/update/status", methods=["GET"])
@admin_required
def get_update_status_api():
    """读取宿主机 Update-Agent 写入的 update/status.json。"""
    ensure_update_dirs()
    status_path = get_update_status_path()
    data, err = load_json_file(status_path)
    if err:
        return jsonify({"ok": False, "error": f"读取 status 失败: {err}", "data": data}), 200
    if not data:
        return jsonify({"ok": False, "error": "未发现更新状态(Update-Agent 可能未运行)"}), 200
    data.setdefault("update_available", False)
    return jsonify({"ok": True, "data": data}), 200


@admin_api_bp.route("/update/result", methods=["GET"])
@admin_required
def get_update_result_api():
    """读取 update/result.json最近一次更新执行结果"""
    ensure_update_dirs()
    result_path = get_update_result_path()
    data, err = load_json_file(result_path)
    if err:
        return jsonify({"ok": False, "error": f"读取 result 失败: {err}", "data": data}), 200
    if not data:
        return jsonify({"ok": True, "data": None}), 200
    return jsonify({"ok": True, "data": data}), 200


@admin_api_bp.route("/update/log", methods=["GET"])
@admin_required
def get_update_log_api():
    """读取 update/jobs/<job_id>.log 的末尾内容(用于后台展示进度)。"""
    ensure_update_dirs()
    job_id = sanitize_job_id(request.args.get("job_id"))
    if not job_id:
        # 若未指定,则尝试用 result.json 的 job_id
        result_data, _ = load_json_file(get_update_result_path())
        job_id = sanitize_job_id(result_data.get("job_id") if isinstance(result_data, dict) else None)
    if not job_id:
        return jsonify({"ok": True, "job_id": None, "log": "", "truncated": False}), 200
    max_bytes = request.args.get("max_bytes", "200000")
    try:
        max_bytes_i = int(max_bytes)
    except Exception:
        max_bytes_i = 200_000
    max_bytes_i = max(10_000, min(2_000_000, max_bytes_i))
    log_path = get_update_job_log_path(job_id)
    text, truncated = tail_text_file(log_path, max_bytes=max_bytes_i)
    return jsonify({"ok": True, "job_id": job_id, "log": text, "truncated": truncated}), 200


@admin_api_bp.route("/update/check", methods=["POST"])
@admin_required
def request_update_check_api():
    """请求宿主机 Update-Agent 立刻执行一次检查更新。"""
    ensure_update_dirs()
    if _has_pending_request():
        return jsonify({"error": "已有更新请求正在处理中,请稍后再试"}), 409
    job_id = _make_job_id(prefix="chk")
    payload = {
        "job_id": job_id,
        "action": "check",
        "requested_at": get_beijing_now().strftime("%Y-%m-%d %H:%M:%S"),
        "requested_by": session.get("admin_username") or "",
        "requested_ip": _request_ip(),
    }
    write_json_atomic(get_update_request_path(), payload)
    return jsonify({"success": True, "job_id": job_id}), 200


@admin_api_bp.route("/update/run", methods=["POST"])
@admin_required
def request_update_run_api():
    """请求宿主机 Update-Agent 执行一键更新并重启服务。"""
    ensure_update_dirs()
    if _has_pending_request():
        return jsonify({"error": "已有更新请求正在处理中,请稍后再试"}), 409
    data = request.json or {}
    try:
        build_no_cache = _parse_bool_field(data, "build_no_cache")
        if build_no_cache is None:
            build_no_cache = _parse_bool_field(data, "no_cache")
        build_pull = _parse_bool_field(data, "build_pull")
        if build_pull is None:
            build_pull = _parse_bool_field(data, "pull")
    except ValueError as e:
        return jsonify({"error": str(e)}), 400
    job_id = _make_job_id(prefix="upd")
    payload = {
        "job_id": job_id,
        "action": "update",
        "requested_at": get_beijing_now().strftime("%Y-%m-%d %H:%M:%S"),
        "requested_by": session.get("admin_username") or "",
        "requested_ip": _request_ip(),
        "build_no_cache": bool(build_no_cache) if build_no_cache is not None else False,
        "build_pull": bool(build_pull) if build_pull is not None else False,
    }
    write_json_atomic(get_update_request_path(), payload)
    return jsonify(
        {
            "success": True,
            "job_id": job_id,
            "message": "已提交更新请求服务将重启页面可能短暂不可用请等待1-2分钟后刷新",
        }
    ), 200
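The update routes hand work to the host-side Update-Agent by dropping a `request.json` file; `write_json_atomic` itself is imported from `services.update_files` and not shown in this diff. A typical implementation (an assumption about its behaviour, not the project's actual code) writes to a temporary file in the same directory and renames it, so the agent polling the path never observes a half-written request:

```python
import json
import os
import tempfile

def write_json_atomic(path: str, payload: dict) -> None:
    """Write JSON so readers see either the old file or the new one, never a partial write."""
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp_path = tempfile.mkstemp(dir=directory, suffix=".tmp")
    try:
        with os.fdopen(fd, "w", encoding="utf-8") as fh:
            json.dump(payload, fh, ensure_ascii=False, indent=2)
            fh.flush()
            os.fsync(fh.fileno())  # ensure data hits disk before the rename
        os.replace(tmp_path, path)  # atomic within one filesystem
    except BaseException:
        os.unlink(tmp_path)
        raise
```

Writing the temp file in the target directory (not the system temp dir) matters: `os.replace` is only atomic when source and destination live on the same filesystem.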


@@ -1,149 +0,0 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from __future__ import annotations

import database
from flask import jsonify, request
from routes.admin_api import admin_api_bp
from routes.decorators import admin_required
from services.state import safe_clear_user_logs, safe_remove_user_accounts

# ==================== 用户管理/统计(管理员) ====================


def _parse_optional_pagination(default_limit: int = 50, max_limit: int = 500) -> tuple[int | None, int]:
    limit_raw = request.args.get("limit")
    offset_raw = request.args.get("offset")
    if (limit_raw is None) and (offset_raw is None):
        return None, 0
    try:
        limit = int(limit_raw if limit_raw is not None else default_limit)
    except (TypeError, ValueError):
        limit = default_limit
    limit = max(1, min(limit, max_limit))
    try:
        offset = int(offset_raw if offset_raw is not None else 0)
    except (TypeError, ValueError):
        offset = 0
    offset = max(0, offset)
    return limit, offset


@admin_api_bp.route("/users", methods=["GET"])
@admin_required
def get_all_users():
    """获取所有用户"""
    limit, offset = _parse_optional_pagination()
    if limit is None:
        users = database.get_all_users()
        return jsonify(users)
    users = database.get_all_users(limit=limit, offset=offset)
    total = database.get_users_count()
    return jsonify({"items": users, "total": total, "limit": limit, "offset": offset})


@admin_api_bp.route("/users/pending", methods=["GET"])
@admin_required
def get_pending_users():
    """获取待审核用户"""
    limit, offset = _parse_optional_pagination(default_limit=30, max_limit=200)
    if limit is None:
        users = database.get_pending_users()
        return jsonify(users)
    users = database.get_pending_users(limit=limit, offset=offset)
    total = database.get_users_count(status="pending")
    return jsonify({"items": users, "total": total, "limit": limit, "offset": offset})


@admin_api_bp.route("/users/<int:user_id>/approve", methods=["POST"])
@admin_required
def approve_user_route(user_id):
    """审核通过用户"""
    if database.approve_user(user_id):
        return jsonify({"success": True})
    return jsonify({"error": "审核失败"}), 400


@admin_api_bp.route("/users/<int:user_id>/reject", methods=["POST"])
@admin_required
def reject_user_route(user_id):
    """拒绝用户"""
    if database.reject_user(user_id):
        return jsonify({"success": True})
    return jsonify({"error": "拒绝失败"}), 400


@admin_api_bp.route("/users/<int:user_id>", methods=["DELETE"])
@admin_required
def delete_user_route(user_id):
    """删除用户"""
    if database.delete_user(user_id):
        safe_remove_user_accounts(user_id)
        safe_clear_user_logs(user_id)
        return jsonify({"success": True})
    return jsonify({"error": "删除失败"}), 400


# ==================== VIP 管理(管理员) ====================


@admin_api_bp.route("/vip/config", methods=["GET"])
@admin_required
def get_vip_config_api():
    """获取VIP配置"""
    config = database.get_vip_config()
    return jsonify(config)


@admin_api_bp.route("/vip/config", methods=["POST"])
@admin_required
def set_vip_config_api():
    """设置默认VIP天数"""
    data = request.json or {}
    days = data.get("default_vip_days", 0)
    if not isinstance(days, int) or days < 0:
        return jsonify({"error": "VIP天数必须是非负整数"}), 400
    database.set_default_vip_days(days)
    return jsonify({"message": "VIP配置已更新", "default_vip_days": days})


@admin_api_bp.route("/users/<int:user_id>/vip", methods=["POST"])
@admin_required
def set_user_vip_api(user_id):
    """设置用户VIP"""
    data = request.json or {}
    days = data.get("days", 30)
    valid_days = [7, 30, 365, 999999]
    if days not in valid_days:
        return jsonify({"error": "VIP天数必须是 7/30/365/999999 之一"}), 400
    if database.set_user_vip(user_id, days):
        vip_type = {7: "一周", 30: "一个月", 365: "一年", 999999: "永久"}[days]
        return jsonify({"message": f"VIP设置成功: {vip_type}"})
    return jsonify({"error": "设置失败,用户不存在"}), 400


@admin_api_bp.route("/users/<int:user_id>/vip", methods=["DELETE"])
@admin_required
def remove_user_vip_api(user_id):
    """移除用户VIP"""
    if database.remove_user_vip(user_id):
        return jsonify({"message": "VIP已移除"})
    return jsonify({"error": "移除失败"}), 400


@admin_api_bp.route("/users/<int:user_id>/vip", methods=["GET"])
@admin_required
def get_user_vip_info_api(user_id):
    """获取用户VIP信息(管理员)"""
    vip_info = database.get_user_vip_info(user_id)
    return jsonify(vip_info)
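`_parse_optional_pagination` returns `(None, 0)` when neither query parameter is present, which lets `/users` and `/users/pending` keep their legacy unpaginated response while supporting paging when asked. The dual-mode contract can be sketched standalone (hypothetical name, plain dict instead of `request.args`):

```python
def parse_optional_pagination(args: dict, default_limit: int = 50, max_limit: int = 500):
    """Return (None, 0) when no paging params are given, else a clamped (limit, offset)."""
    limit_raw = args.get("limit")
    offset_raw = args.get("offset")
    if limit_raw is None and offset_raw is None:
        return None, 0  # caller serves the full, unpaginated list
    try:
        limit = int(limit_raw if limit_raw is not None else default_limit)
    except (TypeError, ValueError):
        limit = default_limit
    try:
        offset = int(offset_raw if offset_raw is not None else 0)
    except (TypeError, ValueError):
        offset = 0
    return max(1, min(limit, max_limit)), max(0, offset)
```

Supplying either parameter switches the route into the paginated `{"items": ..., "total": ...}` response shape.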


@@ -40,48 +40,6 @@ def _emit(event: str, data: object, *, room: str | None = None) -> None:
         pass
 
 
-def _emit_account_update(user_id: int, account) -> None:
-    _emit("account_update", account.to_dict(), room=f"user_{user_id}")
-
-
-def _request_json(default=None):
-    if default is None:
-        default = {}
-    data = request.get_json(silent=True)
-    return data if isinstance(data, dict) else default
-
-
-def _ensure_accounts_loaded(user_id: int) -> dict:
-    accounts = safe_get_user_accounts_snapshot(user_id)
-    if accounts:
-        return accounts
-    load_user_accounts(user_id)
-    return safe_get_user_accounts_snapshot(user_id)
-
-
-def _get_user_account(user_id: int, account_id: str, *, refresh_if_missing: bool = False):
-    account = safe_get_account(user_id, account_id)
-    if account or (not refresh_if_missing):
-        return account
-    load_user_accounts(user_id)
-    return safe_get_account(user_id, account_id)
-
-
-def _validate_browse_type_input(raw_browse_type, *, default=BROWSE_TYPE_SHOULD_READ):
-    browse_type = validate_browse_type(raw_browse_type, default=default)
-    if not browse_type:
-        return None, (jsonify({"error": "浏览类型无效"}), 400)
-    return browse_type, None
-
-
-def _cancel_pending_account_task(user_id: int, account_id: str) -> bool:
-    try:
-        scheduler = get_task_scheduler()
-        return bool(scheduler.cancel_pending_task(user_id=user_id, account_id=account_id))
-    except Exception:
-        return False
-
-
 @api_accounts_bp.route("/api/accounts", methods=["GET"])
 @login_required
 def get_accounts():
@@ -91,7 +49,8 @@ def get_accounts():
     accounts = safe_get_user_accounts_snapshot(user_id)
     if refresh or not accounts:
-        accounts = _ensure_accounts_loaded(user_id)
+        load_user_accounts(user_id)
+        accounts = safe_get_user_accounts_snapshot(user_id)
     return jsonify([acc.to_dict() for acc in accounts.values()])
@@ -104,18 +63,20 @@ def add_account():
     current_count = len(database.get_user_accounts(user_id))
     is_vip = database.is_user_vip(user_id)
-    if (not is_vip) and current_count >= 3:
-        return jsonify({"error": "普通用户最多添加3个账号升级VIP可无限添加"}), 403
+    if not is_vip and current_count >= 3:
+        return jsonify({"error": "普通用户最多添加3个账号升级VIP可无限添加"}), 400
 
-    data = _request_json()
-    username = str(data.get("username", "")).strip()
-    password = str(data.get("password", "")).strip()
-    remark = str(data.get("remark", "")).strip()[:200]
+    data = request.json
+    username = data.get("username", "").strip()
+    password = data.get("password", "").strip()
+    remark = data.get("remark", "").strip()[:200]
     if not username or not password:
         return jsonify({"error": "用户名和密码不能为空"}), 400
 
-    accounts = _ensure_accounts_loaded(user_id)
+    accounts = safe_get_user_accounts_snapshot(user_id)
+    if not accounts:
+        load_user_accounts(user_id)
+        accounts = safe_get_user_accounts_snapshot(user_id)
     for acc in accounts.values():
         if acc.username == username:
             return jsonify({"error": f"账号 '{username}' 已存在"}), 400
@@ -131,7 +92,7 @@ def add_account():
     safe_set_account(user_id, account_id, account)
 
     log_to_client(f"添加账号: {username}", user_id)
-    _emit_account_update(user_id, account)
+    _emit("account_update", account.to_dict(), room=f"user_{user_id}")
     return jsonify(account.to_dict())
@@ -142,15 +103,15 @@ def update_account(account_id):
     """更新账号信息(密码等)"""
     user_id = current_user.id
-    account = _get_user_account(user_id, account_id)
+    account = safe_get_account(user_id, account_id)
     if not account:
         return jsonify({"error": "账号不存在"}), 404
     if account.is_running:
         return jsonify({"error": "账号正在运行中,请先停止"}), 400
 
-    data = _request_json()
-    new_password = str(data.get("password", "")).strip()
+    data = request.json
+    new_password = data.get("password", "").strip()
     new_remember = data.get("remember", account.remember)
     if not new_password:
@@ -164,13 +125,11 @@ def update_account(account_id):
         """
         UPDATE accounts
         SET password = ?, remember = ?
-        WHERE id = ? AND user_id = ?
+        WHERE id = ?
         """,
-        (encrypted_password, new_remember, account_id, user_id),
+        (encrypted_password, new_remember, account_id),
     )
     conn.commit()
-    if cursor.rowcount <= 0:
-        return jsonify({"error": "账号不存在或无权限"}), 404
 
     database.reset_account_login_status(account_id)
     logger.info(f"[账号更新] 用户 {user_id} 修改了账号 {account.username} 的密码,已重置登录状态")
@@ -188,7 +147,7 @@ def delete_account(account_id):
     """删除账号"""
     user_id = current_user.id
-    account = _get_user_account(user_id, account_id)
+    account = safe_get_account(user_id, account_id)
     if not account:
         return jsonify({"error": "账号不存在"}), 404
@@ -200,6 +159,7 @@ def delete_account(account_id):
     username = account.username
     database.delete_account(account_id)
     safe_remove_account(user_id, account_id)
+
     log_to_client(f"删除账号: {username}", user_id)
@@ -236,12 +196,12 @@ def update_remark(account_id):
     """更新备注"""
     user_id = current_user.id
-    account = _get_user_account(user_id, account_id)
+    account = safe_get_account(user_id, account_id)
     if not account:
         return jsonify({"error": "账号不存在"}), 404
 
-    data = _request_json()
-    remark = str(data.get("remark", "")).strip()[:200]
+    data = request.json
+    remark = data.get("remark", "").strip()[:200]
     database.update_account_remark(account_id, remark)
@@ -257,18 +217,17 @@ def start_account(account_id):
     """启动账号任务"""
     user_id = current_user.id
-    account = _get_user_account(user_id, account_id)
+    account = safe_get_account(user_id, account_id)
     if not account:
         return jsonify({"error": "账号不存在"}), 404
     if account.is_running:
         return jsonify({"error": "任务已在运行中"}), 400
 
-    data = _request_json()
-    browse_type, browse_error = _validate_browse_type_input(data.get("browse_type"), default=BROWSE_TYPE_SHOULD_READ)
-    if browse_error:
-        return browse_error
+    data = request.json or {}
+    browse_type = validate_browse_type(data.get("browse_type"), default=BROWSE_TYPE_SHOULD_READ)
+    if not browse_type:
+        return jsonify({"error": "浏览类型无效"}), 400
     enable_screenshot = data.get("enable_screenshot", True)
     ok, message = submit_account_task(
         user_id=user_id,
@@ -290,7 +249,7 @@ def stop_account(account_id):
     """停止账号任务"""
     user_id = current_user.id
-    account = _get_user_account(user_id, account_id)
+    account = safe_get_account(user_id, account_id)
     if not account:
         return jsonify({"error": "账号不存在"}), 404
@@ -300,16 +259,20 @@ def stop_account(account_id):
     account.should_stop = True
     account.status = "正在停止"
 
-    if _cancel_pending_account_task(user_id, account_id):
-        account.status = "已停止"
-        account.is_running = False
-        safe_remove_task_status(account_id)
-        _emit_account_update(user_id, account)
-        log_to_client(f"任务已取消: {account.username}", user_id)
-        return jsonify({"success": True, "canceled": True})
+    try:
+        scheduler = get_task_scheduler()
+        if scheduler.cancel_pending_task(user_id=user_id, account_id=account_id):
+            account.status = "已停止"
+            account.is_running = False
+            safe_remove_task_status(account_id)
+            _emit("account_update", account.to_dict(), room=f"user_{user_id}")
+            log_to_client(f"任务已取消: {account.username}", user_id)
+            return jsonify({"success": True, "canceled": True})
+    except Exception:
+        pass
 
     log_to_client(f"停止任务: {account.username}", user_id)
-    _emit_account_update(user_id, account)
+    _emit("account_update", account.to_dict(), room=f"user_{user_id}")
     return jsonify({"success": True})
@@ -320,20 +283,23 @@ def manual_screenshot(account_id):
     """手动为指定账号截图"""
     user_id = current_user.id
-    account = _get_user_account(user_id, account_id, refresh_if_missing=True)
+    account = safe_get_account(user_id, account_id)
+    if not account:
+        load_user_accounts(user_id)
+        account = safe_get_account(user_id, account_id)
     if not account:
         return jsonify({"error": "账号不存在"}), 404
     if account.is_running:
         return jsonify({"error": "任务运行中,无法截图"}), 400
 
-    data = _request_json()
+    data = request.json or {}
     requested_browse_type = data.get("browse_type", None)
     if requested_browse_type is None:
         browse_type = normalize_browse_type(account.last_browse_type)
     else:
-        browse_type, browse_error = _validate_browse_type_input(requested_browse_type, default=BROWSE_TYPE_SHOULD_READ)
-        if browse_error:
-            return browse_error
+        browse_type = validate_browse_type(requested_browse_type, default=BROWSE_TYPE_SHOULD_READ)
+        if not browse_type:
+            return jsonify({"error": "浏览类型无效"}), 400
     account.last_browse_type = browse_type
@@ -351,16 +317,12 @@ def manual_screenshot(account_id):
def batch_start_accounts(): def batch_start_accounts():
"""批量启动账号""" """批量启动账号"""
user_id = current_user.id user_id = current_user.id
data = _request_json() data = request.json or {}
account_ids = data.get("account_ids", []) account_ids = data.get("account_ids", [])
browse_type, browse_error = _validate_browse_type_input( browse_type = validate_browse_type(data.get("browse_type", BROWSE_TYPE_SHOULD_READ), default=BROWSE_TYPE_SHOULD_READ)
data.get("browse_type", BROWSE_TYPE_SHOULD_READ), if not browse_type:
default=BROWSE_TYPE_SHOULD_READ, return jsonify({"error": "浏览类型无效"}), 400
)
if browse_error:
return browse_error
enable_screenshot = data.get("enable_screenshot", True) enable_screenshot = data.get("enable_screenshot", True)
if not account_ids: if not account_ids:
@@ -369,10 +331,11 @@ def batch_start_accounts():
started = [] started = []
failed = [] failed = []
_ensure_accounts_loaded(user_id) if not safe_get_user_accounts_snapshot(user_id):
load_user_accounts(user_id)
for account_id in account_ids: for account_id in account_ids:
account = _get_user_account(user_id, account_id) account = safe_get_account(user_id, account_id)
if not account: if not account:
failed.append({"id": account_id, "reason": "账号不存在"}) failed.append({"id": account_id, "reason": "账号不存在"})
continue continue
@@ -394,13 +357,7 @@ def batch_start_accounts():
failed.append({"id": account_id, "reason": msg}) failed.append({"id": account_id, "reason": msg})
return jsonify( return jsonify(
{ {"success": True, "started_count": len(started), "failed_count": len(failed), "started": started, "failed": failed}
"success": True,
"started_count": len(started),
"failed_count": len(failed),
"started": started,
"failed": failed,
}
) )
@@ -409,29 +366,39 @@ def batch_start_accounts():
def batch_stop_accounts(): def batch_stop_accounts():
"""批量停止账号""" """批量停止账号"""
user_id = current_user.id user_id = current_user.id
data = _request_json() data = request.json
account_ids = data.get("account_ids", []) account_ids = data.get("account_ids", [])
if not account_ids: if not account_ids:
return jsonify({"error": "请选择要停止的账号"}), 400 return jsonify({"error": "请选择要停止的账号"}), 400
stopped = [] stopped = []
_ensure_accounts_loaded(user_id)
if not safe_get_user_accounts_snapshot(user_id):
load_user_accounts(user_id)
for account_id in account_ids: for account_id in account_ids:
account = _get_user_account(user_id, account_id) account = safe_get_account(user_id, account_id)
if (not account) or (not account.is_running): if not account:
continue
if not account.is_running:
continue continue
account.should_stop = True account.should_stop = True
account.status = "正在停止" account.status = "正在停止"
stopped.append(account_id) stopped.append(account_id)
if _cancel_pending_account_task(user_id, account_id): try:
account.status = "已停止" scheduler = get_task_scheduler()
account.is_running = False if scheduler.cancel_pending_task(user_id=user_id, account_id=account_id):
safe_remove_task_status(account_id) account.status = "已停止"
account.is_running = False
safe_remove_task_status(account_id)
except Exception:
pass
_emit_account_update(user_id, account) _emit("account_update", account.to_dict(), room=f"user_{user_id}")
return jsonify({"success": True, "stopped_count": len(stopped), "stopped": stopped}) return jsonify({"success": True, "stopped_count": len(stopped), "stopped": stopped})
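Both stop endpoints above share a two-phase stop: first try to cancel the task while it is still queued, and otherwise fall back to the cooperative `should_stop` flag that the running task polls. A minimal sketch of that scheduler contract (the class below is illustrative, not the project's actual `get_task_scheduler()` object):

```python
import threading


class TaskScheduler:
    """Illustrative stand-in for the project's task scheduler."""

    def __init__(self):
        self._lock = threading.Lock()
        self._pending = {}  # (user_id, account_id) -> queued callable

    def submit(self, user_id, account_id, fn):
        with self._lock:
            self._pending[(user_id, account_id)] = fn

    def cancel_pending_task(self, user_id, account_id):
        # True only when the task was still queued; once it is running,
        # the caller must rely on the cooperative `should_stop` flag instead.
        with self._lock:
            return self._pending.pop((user_id, account_id), None) is not None
```

The endpoint treats a `True` return as "fully stopped" and emits the final state immediately; a `False` return means the worker will notice `should_stop` on its next poll.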

View File

@@ -2,34 +2,20 @@
 # -*- coding: utf-8 -*-
 from __future__ import annotations

-import base64
-import json
 import random
 import secrets
-import threading
 import time
-import uuid
-from io import BytesIO

 import database
 import email_service
 from app_config import get_config
 from app_logger import get_logger
 from app_security import get_rate_limit_ip, require_ip_not_locked, validate_email, validate_password, validate_username
-from flask import Blueprint, jsonify, request, session
+from flask import Blueprint, jsonify, redirect, render_template, request, url_for
 from flask_login import login_required, login_user, logout_user
 from routes.pages import render_app_spa_or_legacy
 from services.accounts_service import load_user_accounts
 from services.models import User
-from services.passkeys import (
-    encode_credential_id,
-    get_expected_origins,
-    get_rp_id,
-    is_challenge_valid,
-    make_authentication_options,
-    normalize_device_name,
-    verify_authentication,
-)
 from services.state import (
     check_ip_request_rate,
     check_email_rate_limit,
@@ -53,176 +39,12 @@ config = get_config()
 api_auth_bp = Blueprint("api_auth", __name__)

-_CAPTCHA_FONT_PATHS = [
-    "/usr/share/fonts/truetype/liberation/LiberationSans-Bold.ttf",
-    "/usr/share/fonts/truetype/dejavu/DejaVuSans-Bold.ttf",
-    "/usr/share/fonts/truetype/freefont/FreeSansBold.ttf",
-]
-_CAPTCHA_FONT = None
-_CAPTCHA_FONT_LOCK = threading.Lock()
-_USER_PASSKEY_LOGIN_SESSION_KEY = "user_passkey_login_state"
-
-
-def _get_json_payload() -> dict:
-    data = request.get_json(silent=True)
-    return data if isinstance(data, dict) else {}
-
-
-def _load_captcha_font(image_font_module):
-    global _CAPTCHA_FONT
-    if _CAPTCHA_FONT is not None:
-        return _CAPTCHA_FONT
-    with _CAPTCHA_FONT_LOCK:
-        if _CAPTCHA_FONT is not None:
-            return _CAPTCHA_FONT
-        for font_path in _CAPTCHA_FONT_PATHS:
-            try:
-                _CAPTCHA_FONT = image_font_module.truetype(font_path, 42)
-                break
-            except Exception:
-                continue
-        if _CAPTCHA_FONT is None:
-            _CAPTCHA_FONT = image_font_module.load_default()
-    return _CAPTCHA_FONT
-
-
-def _generate_captcha_image_data_uri(code: str) -> str:
-    from PIL import Image, ImageDraw, ImageFont
-
-    width, height = 160, 60
-    image = Image.new("RGB", (width, height), color=(255, 255, 255))
-    draw = ImageDraw.Draw(image)
-    for _ in range(6):
-        x1 = random.randint(0, width)
-        y1 = random.randint(0, height)
-        x2 = random.randint(0, width)
-        y2 = random.randint(0, height)
-        draw.line(
-            [(x1, y1), (x2, y2)],
-            fill=(random.randint(0, 200), random.randint(0, 200), random.randint(0, 200)),
-            width=1,
-        )
-    for _ in range(80):
-        x = random.randint(0, width)
-        y = random.randint(0, height)
-        draw.point((x, y), fill=(random.randint(0, 200), random.randint(0, 200), random.randint(0, 200)))
-    font = _load_captcha_font(ImageFont)
-    for i, char in enumerate(code):
-        x = 12 + i * 35 + random.randint(-3, 3)
-        y = random.randint(5, 12)
-        color = (random.randint(0, 150), random.randint(0, 150), random.randint(0, 150))
-        draw.text((x, y), char, font=font, fill=color)
-    buffer = BytesIO()
-    image.save(buffer, format="PNG")
-    img_base64 = base64.b64encode(buffer.getvalue()).decode("utf-8")
-    return f"data:image/png;base64,{img_base64}"
-
-
-def _with_vip_suffix(message: str, auto_approve_enabled: bool, auto_approve_vip_days: int) -> str:
-    if auto_approve_enabled and auto_approve_vip_days > 0:
-        return f"{message},赠送{auto_approve_vip_days}天VIP"
-    return message
-
-
-def _verify_common_captcha(client_ip: str, captcha_session: str, captcha_code: str):
-    success, message = safe_verify_and_consume_captcha(captcha_session, captcha_code)
-    if success:
-        return True, None
-    is_locked = record_failed_captcha(client_ip)
-    if is_locked:
-        return False, (jsonify({"error": "验证码错误次数过多,IP已被锁定1小时"}), 429)
-    return False, (jsonify({"error": message}), 400)
-
-
-def _verify_login_captcha_if_needed(
-    *,
-    captcha_required: bool,
-    captcha_session: str,
-    captcha_code: str,
-    client_ip: str,
-    username_key: str,
-):
-    if not captcha_required:
-        return True, None
-    if not captcha_session or not captcha_code:
-        return False, (jsonify({"error": "请填写验证码", "need_captcha": True}), 400)
-    success, message = safe_verify_and_consume_captcha(captcha_session, captcha_code)
-    if success:
-        return True, None
-    record_login_failure(client_ip, username_key)
-    return False, (jsonify({"error": message, "need_captcha": True}), 400)
-
-
-def _send_password_reset_email_if_possible(email: str, username: str, user_id: int) -> None:
-    result = email_service.send_password_reset_email(email=email, username=username, user_id=user_id)
-    if not result["success"]:
-        logger.error(f"密码重置邮件发送失败: {result['error']}")
-
-
-def _send_login_security_alert_if_needed(user: dict, username: str, client_ip: str) -> None:
-    try:
-        user_agent = request.headers.get("User-Agent", "")
-        context = database.record_login_context(user["id"], client_ip, user_agent)
-        if not context or (not context.get("new_ip") and not context.get("new_device")):
-            return
-        if not config.LOGIN_ALERT_ENABLED:
-            return
-        if not should_send_login_alert(user["id"], client_ip):
-            return
-        if not email_service.get_email_settings().get("login_alert_enabled", True):
-            return
-        user_info = database.get_user_by_id(user["id"]) or {}
-        if (not user_info.get("email")) or (not user_info.get("email_verified")):
-            return
-        if not database.get_user_email_notify(user["id"]):
-            return
-        email_service.send_security_alert_email(
-            email=user_info.get("email"),
-            username=user_info.get("username") or username,
-            ip_address=client_ip,
-            user_agent=user_agent,
-            new_ip=context.get("new_ip", False),
-            new_device=context.get("new_device", False),
-            user_id=user["id"],
-        )
-    except Exception as e:
-        logger.warning(f"发送登录安全提醒失败: user_id={user.get('id')}, error={e}")
-
-
-def _parse_credential_payload(data: dict) -> dict | None:
-    credential = data.get("credential")
-    if isinstance(credential, dict):
-        return credential
-    if isinstance(credential, str):
-        try:
-            parsed = json.loads(credential)
-            return parsed if isinstance(parsed, dict) else None
-        except Exception:
-            return None
-    return None
-
-
 @api_auth_bp.route("/api/register", methods=["POST"])
 @require_ip_not_locked
 def register():
     """用户注册"""
-    data = _get_json_payload()
+    data = request.json or {}
     username = data.get("username", "").strip()
     password = data.get("password", "").strip()
     email = data.get("email", "").strip().lower()
@@ -245,9 +67,12 @@ def register():
     if not allowed:
         return jsonify({"error": error_msg}), 429

-    captcha_ok, captcha_error_response = _verify_common_captcha(client_ip, captcha_session, captcha_code)
-    if not captcha_ok:
-        return captcha_error_response
+    success, message = safe_verify_and_consume_captcha(captcha_session, captcha_code)
+    if not success:
+        is_locked = record_failed_captcha(client_ip)
+        if is_locked:
+            return jsonify({"error": "验证码错误次数过多,IP已被锁定1小时"}), 429
+        return jsonify({"error": message}), 400

     email_settings = email_service.get_email_settings()
     email_verify_enabled = email_settings.get("register_verify_enabled", False) and email_settings.get("enabled", False)
@@ -280,22 +105,20 @@ def register():
     if email_verify_enabled and email:
         result = email_service.send_register_verification_email(email=email, username=username, user_id=user_id)
         if result["success"]:
-            message = _with_vip_suffix(
-                "注册成功!验证邮件已发送(可直接登录,建议完成邮箱验证)",
-                auto_approve_enabled,
-                auto_approve_vip_days,
-            )
+            message = "注册成功!验证邮件已发送(可直接登录,建议完成邮箱验证)"
+            if auto_approve_enabled and auto_approve_vip_days > 0:
+                message += f",赠送{auto_approve_vip_days}天VIP"
             return jsonify({"success": True, "message": message, "need_verify": True})
         logger.error(f"注册验证邮件发送失败: {result['error']}")
-        message = _with_vip_suffix(
-            f"注册成功,但验证邮件发送失败({result['error']})。你仍可直接登录",
-            auto_approve_enabled,
-            auto_approve_vip_days,
-        )
+        message = f"注册成功,但验证邮件发送失败({result['error']})。你仍可直接登录"
+        if auto_approve_enabled and auto_approve_vip_days > 0:
+            message += f",赠送{auto_approve_vip_days}天VIP"
         return jsonify({"success": True, "message": message, "need_verify": True})
-    message = _with_vip_suffix("注册成功!可直接登录", auto_approve_enabled, auto_approve_vip_days)
+    message = "注册成功!可直接登录"
+    if auto_approve_enabled and auto_approve_vip_days > 0:
+        message += f",赠送{auto_approve_vip_days}天VIP"
     return jsonify({"success": True, "message": message})
     return jsonify({"error": "用户名已存在"}), 400
@@ -303,38 +126,20 @@ def register():
 @api_auth_bp.route("/api/verify-email/<token>")
 def verify_email(token):
     """验证邮箱 - 用户点击邮件中的链接"""
-    result = email_service.verify_email_token(token, email_service.EMAIL_TYPE_REGISTER, consume=False)
+    result = email_service.verify_email_token(token, email_service.EMAIL_TYPE_REGISTER)
     if result:
-        token_id = result["token_id"]
         user_id = result["user_id"]
+        email = result["email"]
-        if not database.approve_user(user_id):
-            logger.error(f"用户邮箱验证失败: 用户审核更新失败 user_id={user_id}")
-            error_message = "验证处理失败,请稍后重试"
-            spa_initial_state = {
-                "page": "verify_result",
-                "success": False,
-                "title": "验证失败",
-                "error_message": error_message,
-                "primary_label": "返回登录",
-                "primary_url": "/login",
-            }
-            return render_app_spa_or_legacy(
-                "verify_failed.html",
-                legacy_context={"error_message": error_message},
-                spa_initial_state=spa_initial_state,
-            )
+        database.approve_user(user_id)
         system_config = database.get_system_config()
         auto_approve_vip_days = system_config.get("auto_approve_vip_days", 7)
         if auto_approve_vip_days > 0:
             database.set_user_vip(user_id, auto_approve_vip_days)
-        if not email_service.consume_email_token(token_id):
-            logger.warning(f"用户邮箱验证后Token消费失败: user_id={user_id}")
-        logger.info(f"用户邮箱验证成功: user_id={user_id}")
+        logger.info(f"用户邮箱验证成功: user_id={user_id}, email={email}")
         spa_initial_state = {
             "page": "verify_result",
             "success": True,
@@ -347,7 +152,7 @@ def verify_email(token):
         }
         return render_app_spa_or_legacy("verify_success.html", spa_initial_state=spa_initial_state)
-    logger.warning("邮箱验证失败: token无效或已过期")
+    logger.warning(f"邮箱验证失败: token={token[:20]}...")
     error_message = "验证链接无效或已过期,请重新注册或申请重发验证邮件"
     spa_initial_state = {
         "page": "verify_result",
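The left side of the `verify_email` hunk verifies the token with `consume=False` and only burns it after `approve_user` succeeds, so a failed approval leaves the link usable for a retry; the right side consumes on verify. A hypothetical in-memory store sketching that two-phase idea (names do not match the real `email_service` API):

```python
import secrets
import time

_tokens = {}  # token -> {"token_id": int, "user_id": int, "expires": float, "used": bool}


def issue_token(user_id, ttl_seconds=3600):
    token = secrets.token_urlsafe(16)
    _tokens[token] = {"token_id": len(_tokens) + 1, "user_id": user_id,
                      "expires": time.time() + ttl_seconds, "used": False}
    return token


def verify_token(token, consume=True):
    # With consume=False the token stays valid, so the caller can run its
    # side effects first and only mark the token used once they succeed.
    entry = _tokens.get(token)
    if entry is None or entry["used"] or entry["expires"] < time.time():
        return None
    if consume:
        entry["used"] = True
    return dict(entry)


def consume_token(token_id):
    for entry in _tokens.values():
        if entry["token_id"] == token_id and not entry["used"]:
            entry["used"] = True
            return True
    return False
```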
@@ -370,7 +175,7 @@ def verify_email(token):
 @require_ip_not_locked
 def resend_verify_email():
     """重发验证邮件"""
-    data = _get_json_payload()
+    data = request.json or {}
     email = data.get("email", "").strip().lower()
     captcha_session = data.get("captcha_session", "")
     captcha_code = data.get("captcha", "").strip()
@@ -390,9 +195,12 @@ def resend_verify_email():
     if not allowed:
         return jsonify({"error": error_msg}), 429

-    captcha_ok, captcha_error_response = _verify_common_captcha(client_ip, captcha_session, captcha_code)
-    if not captcha_ok:
-        return captcha_error_response
+    success, message = safe_verify_and_consume_captcha(captcha_session, captcha_code)
+    if not success:
+        is_locked = record_failed_captcha(client_ip)
+        if is_locked:
+            return jsonify({"error": "验证码错误次数过多,IP已被锁定1小时"}), 429
+        return jsonify({"error": message}), 400

     user = database.get_user_by_email(email)
     if not user:
@@ -427,7 +235,7 @@ def get_email_verify_status():
 @require_ip_not_locked
 def forgot_password():
     """发送密码重置邮件"""
-    data = _get_json_payload()
+    data = request.json or {}
     email = data.get("email", "").strip().lower()
     username = data.get("username", "").strip()
     captcha_session = data.get("captcha_session", "")
@@ -455,9 +263,12 @@ def forgot_password():
     if not allowed:
         return jsonify({"error": error_msg}), 429

-    captcha_ok, captcha_error_response = _verify_common_captcha(client_ip, captcha_session, captcha_code)
-    if not captcha_ok:
-        return captcha_error_response
+    success, message = safe_verify_and_consume_captcha(captcha_session, captcha_code)
+    if not success:
+        is_locked = record_failed_captcha(client_ip)
+        if is_locked:
+            return jsonify({"error": "验证码错误次数过多,IP已被锁定1小时"}), 429
+        return jsonify({"error": message}), 400

     email_settings = email_service.get_email_settings()
     if not email_settings.get("enabled", False):
@@ -482,16 +293,20 @@ def forgot_password():
         if not allowed:
             return jsonify({"error": error_msg}), 429

-        _send_password_reset_email_if_possible(
+        result = email_service.send_password_reset_email(
             email=bound_email,
             username=user["username"],
             user_id=user["id"],
         )
+        if not result["success"]:
+            logger.error(f"密码重置邮件发送失败: {result['error']}")
         return jsonify({"success": True, "message": "如果该账号已绑定邮箱,您将收到密码重置邮件"})

     user = database.get_user_by_email(email)
     if user and user.get("status") == "approved":
-        _send_password_reset_email_if_possible(email=email, username=user["username"], user_id=user["id"])
+        result = email_service.send_password_reset_email(email=email, username=user["username"], user_id=user["id"])
+        if not result["success"]:
+            logger.error(f"密码重置邮件发送失败: {result['error']}")
         return jsonify({"success": True, "message": "如果该邮箱已注册,您将收到密码重置邮件"})
@@ -516,7 +331,7 @@ def reset_password_page(token):
 @api_auth_bp.route("/api/reset-password-confirm", methods=["POST"])
 def reset_password_confirm():
     """确认密码重置"""
-    data = _get_json_payload()
+    data = request.json or {}
     token = data.get("token", "").strip()
     new_password = data.get("new_password", "").strip()
@@ -541,191 +356,78 @@ def reset_password_confirm():
 @api_auth_bp.route("/api/generate_captcha", methods=["POST"])
 def generate_captcha():
     """生成4位数字验证码图片"""
-    client_ip = get_rate_limit_ip()
-    allowed, error_msg = check_ip_request_rate(client_ip, "login")
-    if not allowed:
-        return jsonify({"error": error_msg}), 429
+    import base64
+    import uuid
+    from io import BytesIO

     session_id = str(uuid.uuid4())
-    code = "".join(str(secrets.randbelow(10)) for _ in range(4))
+    code = "".join([str(secrets.randbelow(10)) for _ in range(4)])
     safe_set_captcha(session_id, {"code": code, "expire_time": time.time() + 300, "failed_attempts": 0})
     safe_cleanup_expired_captcha()
     try:
-        captcha_image = _generate_captcha_image_data_uri(code)
-        return jsonify({"session_id": session_id, "captcha_image": captcha_image})
+        from PIL import Image, ImageDraw, ImageFont
+        import io
+
+        width, height = 160, 60
+        image = Image.new("RGB", (width, height), color=(255, 255, 255))
+        draw = ImageDraw.Draw(image)
+        for _ in range(6):
+            x1 = random.randint(0, width)
+            y1 = random.randint(0, height)
+            x2 = random.randint(0, width)
+            y2 = random.randint(0, height)
+            draw.line(
+                [(x1, y1), (x2, y2)],
+                fill=(random.randint(0, 200), random.randint(0, 200), random.randint(0, 200)),
+                width=1,
+            )
+        for _ in range(80):
+            x = random.randint(0, width)
+            y = random.randint(0, height)
+            draw.point((x, y), fill=(random.randint(0, 200), random.randint(0, 200), random.randint(0, 200)))
+        font = None
+        font_paths = [
+            "/usr/share/fonts/truetype/liberation/LiberationSans-Bold.ttf",
+            "/usr/share/fonts/truetype/dejavu/DejaVuSans-Bold.ttf",
+            "/usr/share/fonts/truetype/freefont/FreeSansBold.ttf",
+        ]
+        for font_path in font_paths:
+            try:
+                font = ImageFont.truetype(font_path, 42)
+                break
+            except Exception:
+                continue
+        if font is None:
+            font = ImageFont.load_default()
+        for i, char in enumerate(code):
+            x = 12 + i * 35 + random.randint(-3, 3)
+            y = random.randint(5, 12)
+            color = (random.randint(0, 150), random.randint(0, 150), random.randint(0, 150))
+            draw.text((x, y), char, font=font, fill=color)
+        buffer = io.BytesIO()
+        image.save(buffer, format="PNG")
+        img_base64 = base64.b64encode(buffer.getvalue()).decode("utf-8")
+        return jsonify({"session_id": session_id, "captcha_image": f"data:image/png;base64,{img_base64}"})
     except ImportError as e:
         logger.error(f"PIL库未安装验证码功能不可用: {e}")
         safe_delete_captcha(session_id)
         return jsonify({"error": "验证码服务暂不可用请联系管理员安装PIL库"}), 503
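The removed `_load_captcha_font` helper existed so the font-path probing ran once per process under a lock, while the inlined version on the right re-probes the font paths on every request. The caching pattern in isolation (a generic `factory` callable stands in for the PIL font loading):

```python
import threading

_cached_resource = None
_cached_resource_lock = threading.Lock()


def get_cached(factory):
    # Double-checked locking: `factory` (e.g. probing font paths and calling
    # ImageFont.truetype) runs at most once per process, not once per request.
    global _cached_resource
    if _cached_resource is not None:
        return _cached_resource
    with _cached_resource_lock:
        if _cached_resource is None:
            _cached_resource = factory()
    return _cached_resource
```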
-@api_auth_bp.route("/api/passkeys/login/options", methods=["POST"])
-@require_ip_not_locked
-def user_passkey_login_options():
-    """用户 Passkey 登录:获取 assertion challenge。"""
-    data = _get_json_payload()
-    username = str(data.get("username", "") or "").strip()
-    client_ip = get_rate_limit_ip()
-    mode = "named" if username else "discoverable"
-    username_key = f"passkey:{username}" if username else "passkey:discoverable"
-    is_locked, remaining = check_login_ip_user_locked(client_ip, username_key)
-    if is_locked:
-        wait_hint = f"{remaining // 60 + 1}分钟" if remaining >= 60 else f"{remaining}秒"
-        return jsonify({"error": f"账号短时锁定,请{wait_hint}后再试"}), 429
-    allowed, error_msg = check_ip_request_rate(client_ip, "login")
-    if not allowed:
-        return jsonify({"error": error_msg}), 429
-    allowed, error_msg = check_login_rate_limits(client_ip, username_key)
-    if not allowed:
-        return jsonify({"error": error_msg}), 429
-    user_id = 0
-    allow_credential_ids = []
-    if mode == "named":
-        user = database.get_user_by_username(username)
-        if not user or user.get("status") != "approved":
-            record_login_failure(client_ip, username_key)
-            return jsonify({"error": "账号或Passkey不可用"}), 400
-        user_id = int(user["id"])
-        passkeys = database.list_passkeys("user", user_id)
-        if not passkeys:
-            record_login_failure(client_ip, username_key)
-            return jsonify({"error": "该账号尚未绑定Passkey"}), 400
-        allow_credential_ids = [str(item.get("credential_id") or "").strip() for item in passkeys if item.get("credential_id")]
-    try:
-        rp_id = get_rp_id(request)
-        expected_origins = get_expected_origins(request)
-    except Exception as e:
-        logger.warning(f"[passkey] 生成登录 challenge 失败(mode={mode}, username={username or '-'}) : {e}")
-        return jsonify({"error": "Passkey配置异常请联系管理员"}), 500
-    options = make_authentication_options(rp_id=rp_id, allow_credential_ids=allow_credential_ids)
-    challenge = str(options.get("challenge") or "").strip()
-    if not challenge:
-        return jsonify({"error": "生成Passkey挑战失败"}), 500
-    session[_USER_PASSKEY_LOGIN_SESSION_KEY] = {
-        "mode": mode,
-        "username": username,
-        "user_id": int(user_id),
-        "challenge": challenge,
-        "rp_id": rp_id,
-        "expected_origins": expected_origins,
-        "username_key": username_key,
-        "created_at": time.time(),
-    }
-    session.modified = True
-    return jsonify({"publicKey": options})
-
-
-@api_auth_bp.route("/api/passkeys/login/verify", methods=["POST"])
-@require_ip_not_locked
-def user_passkey_login_verify():
-    """用户 Passkey 登录:校验 assertion 并登录。"""
-    data = _get_json_payload()
-    request_username = str(data.get("username", "") or "").strip()
-    credential = _parse_credential_payload(data)
-    if not credential:
-        return jsonify({"error": "Passkey参数缺失"}), 400
-    state = session.get(_USER_PASSKEY_LOGIN_SESSION_KEY) or {}
-    if not state:
-        return jsonify({"error": "Passkey挑战不存在或已过期请重试"}), 400
-    if not is_challenge_valid(state.get("created_at")):
-        session.pop(_USER_PASSKEY_LOGIN_SESSION_KEY, None)
-        return jsonify({"error": "Passkey挑战已过期请重试"}), 400
-    mode = str(state.get("mode") or "named")
-    if mode not in {"named", "discoverable"}:
-        session.pop(_USER_PASSKEY_LOGIN_SESSION_KEY, None)
-        return jsonify({"error": "Passkey状态异常请重试"}), 400
-    expected_username = str(state.get("username") or "").strip()
-    username = expected_username
-    if mode == "named":
-        if not expected_username:
-            session.pop(_USER_PASSKEY_LOGIN_SESSION_KEY, None)
-            return jsonify({"error": "Passkey状态异常请重试"}), 400
-        if request_username and request_username != expected_username:
-            return jsonify({"error": "用户名与挑战不匹配,请重试"}), 400
-    else:
-        username = request_username
-    client_ip = get_rate_limit_ip()
-    username_key = str(state.get("username_key") or "").strip() or (
-        f"passkey:{expected_username}" if mode == "named" else "passkey:discoverable"
-    )
-    is_locked, remaining = check_login_ip_user_locked(client_ip, username_key)
-    if is_locked:
-        wait_hint = f"{remaining // 60 + 1}分钟" if remaining >= 60 else f"{remaining}秒"
-        return jsonify({"error": f"账号短时锁定,请{wait_hint}后再试"}), 429
-    credential_id = str(credential.get("id") or credential.get("rawId") or "").strip()
-    if not credential_id:
-        return jsonify({"error": "Passkey参数无效"}), 400
-    passkey = database.get_passkey_by_credential_id(credential_id)
-    if not passkey:
-        record_login_failure(client_ip, username_key)
-        return jsonify({"error": "Passkey不存在或已删除"}), 401
-    if str(passkey.get("owner_type") or "") != "user":
-        record_login_failure(client_ip, username_key)
-        return jsonify({"error": "Passkey不属于用户账号"}), 401
-    if mode == "named" and int(passkey.get("owner_id") or 0) != int(state.get("user_id") or 0):
-        record_login_failure(client_ip, username_key)
-        return jsonify({"error": "Passkey与账号不匹配"}), 401
-    try:
-        parsed_credential, verified = verify_authentication(
-            credential=credential,
-            expected_challenge=str(state.get("challenge") or ""),
-            expected_rp_id=str(state.get("rp_id") or ""),
-            expected_origins=list(state.get("expected_origins") or []),
-            credential_public_key=str(passkey.get("public_key") or ""),
-            credential_current_sign_count=int(passkey.get("sign_count") or 0),
-        )
-        verified_credential_id = encode_credential_id(verified.credential_id)
-        if verified_credential_id != str(passkey.get("credential_id") or ""):
-            raise ValueError("credential_id mismatch")
-    except Exception as e:
-        logger.warning(f"[passkey] 用户登录验签失败(mode={mode}, username={expected_username or request_username or '-'}) : {e}")
-        record_login_failure(client_ip, username_key)
-        return jsonify({"error": "Passkey验证失败"}), 401
-    user_id = int(passkey.get("owner_id") or 0)
-    user = database.get_user_by_id(user_id)
-    if not user or user.get("status") != "approved":
-        return jsonify({"error": "账号不可用"}), 401
-    database.update_passkey_usage(int(passkey["id"]), int(verified.new_sign_count))
-    clear_login_failures(client_ip, username_key)
-    user_login_key = f"passkey:{str(user.get('username') or '').strip()}"
-    if user_login_key and user_login_key != username_key:
-        clear_login_failures(client_ip, user_login_key)
-    session.pop(_USER_PASSKEY_LOGIN_SESSION_KEY, None)
-    user_obj = User(user_id)
-    login_user(user_obj)
-    load_user_accounts(user_id)
-    resolved_username = str(user.get("username") or "").strip() or username or f"user-{user_id}"
-    _send_login_security_alert_if_needed(user=user, username=resolved_username, client_ip=client_ip)
-    return jsonify({"success": True, "credential_id": parsed_credential.id, "username": resolved_username})
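The removed passkey login is a two-step WebAuthn ceremony: `/options` stores a one-time challenge plus its creation time in the session, and `/verify` accepts that challenge at most once, inside a validity window. The challenge-state lifecycle can be sketched as follows (the 300-second TTL is an assumption; the project's check lives in `services.passkeys.is_challenge_valid`):

```python
import secrets
import time

CHALLENGE_TTL_SECONDS = 300  # assumed TTL


def make_challenge_state(username=""):
    # Mirrors the session payload stored by the /options endpoint.
    return {
        "mode": "named" if username else "discoverable",
        "username": username,
        "challenge": secrets.token_urlsafe(32),
        "created_at": time.time(),
    }


def is_challenge_valid(created_at, ttl_seconds=CHALLENGE_TTL_SECONDS):
    # A missing or malformed timestamp counts as expired.
    try:
        return (time.time() - float(created_at)) <= ttl_seconds
    except (TypeError, ValueError):
        return False
```

The verify endpoint pops the state from the session on every terminal path, so a challenge can never be replayed.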
 @api_auth_bp.route("/api/login", methods=["POST"])
 @require_ip_not_locked
 def login():
     """用户登录"""
-    data = _get_json_payload()
+    data = request.json or {}
     username = data.get("username", "").strip()
     password = data.get("password", "").strip()
     captcha_session = data.get("captcha_session", "")
@@ -750,15 +452,13 @@ def login():
return jsonify({"error": error_msg, "need_captcha": True}), 429 return jsonify({"error": error_msg, "need_captcha": True}), 429
captcha_required = check_login_captcha_required(client_ip, username_key) or scan_locked or bool(need_captcha) captcha_required = check_login_captcha_required(client_ip, username_key) or scan_locked or bool(need_captcha)
captcha_ok, captcha_error_response = _verify_login_captcha_if_needed( if captcha_required:
captcha_required=captcha_required, if not captcha_session or not captcha_code:
captcha_session=captcha_session, return jsonify({"error": "请填写验证码", "need_captcha": True}), 400
captcha_code=captcha_code, success, message = safe_verify_and_consume_captcha(captcha_session, captcha_code)
client_ip=client_ip, if not success:
username_key=username_key, record_login_failure(client_ip, username_key)
) return jsonify({"error": message, "need_captcha": True}), 400
if not captcha_ok:
return captcha_error_response
user = database.verify_user(username, password) user = database.verify_user(username, password)
if not user: if not user:
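Both versions of the login hunk apply the same escalation policy: a captcha is only demanded after the (IP, username) pair has accumulated failures, and a wrong captcha itself counts as a failure. Sketch with an assumed threshold (the real counters live in `services.state`):

```python
_login_failures = {}   # (client_ip, username_key) -> failure count
_CAPTCHA_AFTER = 3     # assumed threshold


def record_login_failure(client_ip, username_key):
    key = (client_ip, username_key)
    _login_failures[key] = _login_failures.get(key, 0) + 1


def check_login_captcha_required(client_ip, username_key):
    # Captcha becomes mandatory once this (ip, username) pair has failed enough.
    return _login_failures.get((client_ip, username_key), 0) >= _CAPTCHA_AFTER
```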
@@ -776,7 +476,29 @@ def login():
     login_user(user_obj)
     load_user_accounts(user["id"])
-    _send_login_security_alert_if_needed(user=user, username=username, client_ip=client_ip)
+    try:
+        user_agent = request.headers.get("User-Agent", "")
+        context = database.record_login_context(user["id"], client_ip, user_agent)
+        if context and (context.get("new_ip") or context.get("new_device")):
+            if (
+                config.LOGIN_ALERT_ENABLED
+                and should_send_login_alert(user["id"], client_ip)
+                and email_service.get_email_settings().get("login_alert_enabled", True)
+            ):
+                user_info = database.get_user_by_id(user["id"]) or {}
+                if user_info.get("email") and user_info.get("email_verified"):
+                    if database.get_user_email_notify(user["id"]):
+                        email_service.send_security_alert_email(
+                            email=user_info.get("email"),
+                            username=user_info.get("username") or username,
+                            ip_address=client_ip,
+                            user_agent=user_agent,
+                            new_ip=context.get("new_ip", False),
+                            new_device=context.get("new_device", False),
+                            user_id=user["id"],
+                        )
+    except Exception:
+        pass

     return jsonify({"success": True})
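The inlined block above reimplements the old `_send_login_security_alert_if_needed` helper: record the login context, then alert only when the IP or device is new and every notification toggle is enabled. The new-IP/new-device detection can be sketched as follows (an in-memory stand-in; the real `record_login_context` persists per-user history in the database):

```python
def record_login_context(history, user_id, client_ip, user_agent):
    """Report whether a login uses an unseen IP or device, then remember both."""
    seen = history.setdefault(user_id, {"ips": set(), "agents": set()})
    context = {
        "new_ip": client_ip not in seen["ips"],
        "new_device": user_agent not in seen["agents"],
    }
    # Record after checking, so the first login from a source flags as new
    # exactly once.
    seen["ips"].add(client_ip)
    seen["agents"].add(user_agent)
    return context
```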
@@ -785,7 +507,4 @@ def login():
 def logout():
     """用户登出"""
     logout_user()
-    session.pop("admin_id", None)
-    session.pop("admin_username", None)
-    session.pop("admin_reauth_until", None)
     return jsonify({"success": True})

View File

@@ -2,14 +2,9 @@
 # -*- coding: utf-8 -*-
 from __future__ import annotations

-import json
 import re
-import threading
-import time as time_mod
-import uuid

 import database
-from app_logger import get_logger
 from flask import Blueprint, jsonify, request
 from flask_login import current_user, login_required
 from services.accounts_service import load_user_accounts
@@ -18,18 +13,10 @@ from services.state import safe_get_account, safe_get_user_accounts_snapshot
 from services.tasks import submit_account_task

 api_schedules_bp = Blueprint("api_schedules", __name__)
-logger = get_logger("app")

 _HHMM_RE = re.compile(r"^(\d{1,2}):(\d{2})$")


-def _request_json(default=None):
-    if default is None:
-        default = {}
-    data = request.get_json(silent=True)
-    return data if isinstance(data, dict) else default
-
-
 def _normalize_hhmm(value: object) -> str | None:
     match = _HHMM_RE.match(str(value or "").strip())
     if not match:
@@ -41,81 +28,18 @@ def _normalize_hhmm(value: object) -> str | None:
     return f"{hour:02d}:{minute:02d}"


-def _normalize_random_delay(value) -> tuple[int | None, str | None]:
-    try:
-        normalized = int(value or 0)
-    except Exception:
-        return None, "random_delay必须是0或1"
-    if normalized not in (0, 1):
-        return None, "random_delay必须是0或1"
-    return normalized, None
-
-
-def _parse_schedule_account_ids(raw_value) -> list:
-    try:
-        parsed = json.loads(raw_value or "[]")
-    except (json.JSONDecodeError, TypeError):
-        return []
-    return parsed if isinstance(parsed, list) else []
-
-
-def _get_owned_schedule_or_error(schedule_id: int):
-    schedule = database.get_schedule_by_id(schedule_id)
-    if not schedule:
-        return None, (jsonify({"error": "定时任务不存在"}), 404)
-    if schedule.get("user_id") != current_user.id:
-        return None, (jsonify({"error": "无权访问"}), 403)
-    return schedule, None
-
-
-def _ensure_user_accounts_loaded(user_id: int) -> None:
-    if safe_get_user_accounts_snapshot(user_id):
-        return
-    load_user_accounts(user_id)
-
-
-def _parse_browse_type_or_error(raw_value, *, default=BROWSE_TYPE_SHOULD_READ):
-    browse_type = validate_browse_type(raw_value, default=default)
-    if not browse_type:
-        return None, (jsonify({"error": "浏览类型无效"}), 400)
-    return browse_type, None
-
-
-def _parse_optional_pagination(default_limit: int = 20, *, max_limit: int = 200) -> tuple[int | None, int | None, bool]:
-    limit_raw = request.args.get("limit")
-    offset_raw = request.args.get("offset")
-    if (limit_raw is None) and (offset_raw is None):
-        return None, None, False
-    try:
-        limit = int(limit_raw if limit_raw is not None else default_limit)
-    except (ValueError, TypeError):
-        limit = default_limit
-    limit = max(1, min(limit, max_limit))
-    try:
-        offset = int(offset_raw if offset_raw is not None else 0)
-    except (ValueError, TypeError):
-        offset = 0
-    offset = max(0, offset)
-    return limit, offset, True
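The removed `_parse_optional_pagination` helper clamps client-supplied paging values: bad input falls back to the default, `limit` is forced into `[1, max_limit]`, and `offset` is forced to be non-negative. The same clamping can be exercised standalone (Flask's `request.args` replaced by plain arguments for illustration):

```python
def clamp_pagination(limit_raw, offset_raw, default_limit=24, max_limit=100):
    # No paging parameters at all means "not paged".
    if limit_raw is None and offset_raw is None:
        return None, None, False
    try:
        limit = int(limit_raw if limit_raw is not None else default_limit)
    except (ValueError, TypeError):
        limit = default_limit
    # Clamp limit into [1, max_limit] and offset into [0, inf).
    limit = max(1, min(limit, max_limit))
    try:
        offset = int(offset_raw if offset_raw is not None else 0)
    except (ValueError, TypeError):
        offset = 0
    return limit, max(0, offset), True
```

An oversized `limit=500` is capped at 100, a negative offset becomes 0, and garbage input silently falls back to the default rather than raising.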
 @api_schedules_bp.route("/api/schedules", methods=["GET"])
 @login_required
 def get_user_schedules_api():
     """获取当前用户的所有定时任务"""
     schedules = database.get_user_schedules(current_user.id)
-    for schedule in schedules:
-        schedule["account_ids"] = _parse_schedule_account_ids(schedule.get("account_ids"))
-    limit, offset, paged = _parse_optional_pagination(default_limit=12, max_limit=100)
-    if paged:
-        total = len(schedules)
-        items = schedules[offset : offset + limit]
-        return jsonify({"items": items, "total": total, "limit": limit, "offset": offset})
+    import json
+
+    for s in schedules:
+        try:
+            s["account_ids"] = json.loads(s.get("account_ids", "[]") or "[]")
+        except (json.JSONDecodeError, TypeError):
+            s["account_ids"] = []
     return jsonify(schedules)
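Both sides of the schedule routes rely on `_normalize_hhmm`, which matches `^(\d{1,2}):(\d{2})$` and re-emits a zero-padded `HH:MM` string. A self-contained sketch of that normalization (the 0-23 hour and 0-59 minute range check is assumed, since the middle of the function is elided from this diff):

```python
import re

HHMM_RE = re.compile(r"^(\d{1,2}):(\d{2})$")  # same pattern as _HHMM_RE

def normalize_hhmm(value):
    # Accept "8:05" or "08:05"; reject anything else, including out-of-range
    # values (range check assumed from the elided helper body).
    match = HHMM_RE.match(str(value or "").strip())
    if not match:
        return None
    hour, minute = int(match.group(1)), int(match.group(2))
    if hour > 23 or minute > 59:
        return None
    return f"{hour:02d}:{minute:02d}"
```

This is why both create and update can compare the normalized value against `None` and answer "时间格式不正确,应为 HH:MM" in one branch.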
@@ -123,26 +47,23 @@ def get_user_schedules_api():
 @login_required
 def create_user_schedule_api():
     """创建用户定时任务"""
-    data = _request_json()
+    data = request.json or {}
     name = data.get("name", "我的定时任务")
     schedule_time = data.get("schedule_time", "08:00")
     weekdays = data.get("weekdays", "1,2,3,4,5")
-    browse_type, browse_error = _parse_browse_type_or_error(data.get("browse_type", BROWSE_TYPE_SHOULD_READ))
-    if browse_error:
-        return browse_error
+    browse_type = validate_browse_type(data.get("browse_type", BROWSE_TYPE_SHOULD_READ), default=BROWSE_TYPE_SHOULD_READ)
+    if not browse_type:
+        return jsonify({"error": "浏览类型无效"}), 400
     enable_screenshot = data.get("enable_screenshot", 1)
-    random_delay, delay_error = _normalize_random_delay(data.get("random_delay", 0))
-    if delay_error:
-        return jsonify({"error": delay_error}), 400
+    random_delay = int(data.get("random_delay", 0) or 0)
     account_ids = data.get("account_ids", [])

     normalized_time = _normalize_hhmm(schedule_time)
     if not normalized_time:
         return jsonify({"error": "时间格式不正确,应为 HH:MM"}), 400
+    if random_delay not in (0, 1):
+        return jsonify({"error": "random_delay必须是0或1"}), 400

     schedule_id = database.create_user_schedule(
         user_id=current_user.id,
@@ -164,11 +85,18 @@ def create_user_schedule_api():
 @login_required
 def get_schedule_detail_api(schedule_id):
     """获取定时任务详情"""
-    schedule, error_response = _get_owned_schedule_or_error(schedule_id)
-    if error_response:
-        return error_response
+    schedule = database.get_schedule_by_id(schedule_id)
+    if not schedule:
+        return jsonify({"error": "定时任务不存在"}), 404
+    if schedule["user_id"] != current_user.id:
+        return jsonify({"error": "无权访问"}), 403

-    schedule["account_ids"] = _parse_schedule_account_ids(schedule.get("account_ids"))
+    import json
+
+    try:
+        schedule["account_ids"] = json.loads(schedule.get("account_ids", "[]") or "[]")
+    except (json.JSONDecodeError, TypeError):
+        schedule["account_ids"] = []
     return jsonify(schedule)
@@ -176,12 +104,14 @@ def get_schedule_detail_api(schedule_id):
 @login_required
 def update_schedule_api(schedule_id):
     """更新定时任务"""
-    _, error_response = _get_owned_schedule_or_error(schedule_id)
-    if error_response:
-        return error_response
+    schedule = database.get_schedule_by_id(schedule_id)
+    if not schedule:
+        return jsonify({"error": "定时任务不存在"}), 404
+    if schedule["user_id"] != current_user.id:
+        return jsonify({"error": "无权访问"}), 403

-    data = _request_json()
-    allowed_fields = {
+    data = request.json or {}
+    allowed_fields = [
         "name",
         "schedule_time",
         "weekdays",
@@ -190,26 +120,27 @@ def update_schedule_api(schedule_id):
         "random_delay",
         "account_ids",
         "enabled",
-    }
-    update_data = {key: value for key, value in data.items() if key in allowed_fields}
+    ]
+
+    update_data = {k: v for k, v in data.items() if k in allowed_fields}

     if "schedule_time" in update_data:
         normalized_time = _normalize_hhmm(update_data["schedule_time"])
         if not normalized_time:
             return jsonify({"error": "时间格式不正确,应为 HH:MM"}), 400
         update_data["schedule_time"] = normalized_time

     if "random_delay" in update_data:
-        random_delay, delay_error = _normalize_random_delay(update_data.get("random_delay"))
-        if delay_error:
-            return jsonify({"error": delay_error}), 400
-        update_data["random_delay"] = random_delay
+        try:
+            update_data["random_delay"] = int(update_data.get("random_delay") or 0)
+        except Exception:
+            return jsonify({"error": "random_delay必须是0或1"}), 400
+        if update_data["random_delay"] not in (0, 1):
+            return jsonify({"error": "random_delay必须是0或1"}), 400

     if "browse_type" in update_data:
-        normalized_browse_type, browse_error = _parse_browse_type_or_error(update_data.get("browse_type"))
-        if browse_error:
-            return browse_error
-        update_data["browse_type"] = normalized_browse_type
+        normalized = validate_browse_type(update_data.get("browse_type"), default=BROWSE_TYPE_SHOULD_READ)
+        if not normalized:
+            return jsonify({"error": "浏览类型无效"}), 400
+        update_data["browse_type"] = normalized

     success = database.update_user_schedule(schedule_id, **update_data)
     if success:
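Both versions of the update route filter the JSON body through an `allowed_fields` whitelist before it reaches `database.update_user_schedule(schedule_id, **update_data)`; only the container type changes (set vs list). The filtering itself, in isolation:

```python
# Same field names as the route; the container could be a set or a list,
# membership tests work either way.
ALLOWED_FIELDS = {
    "name", "schedule_time", "weekdays", "browse_type",
    "enable_screenshot", "random_delay", "account_ids", "enabled",
}

def filter_update_payload(data):
    # Drop any client-supplied key outside the whitelist, so a crafted request
    # cannot overwrite columns such as id or user_id via the **kwargs call.
    return {k: v for k, v in data.items() if k in ALLOWED_FIELDS}
```

This is the standard defense when a dict from `request.json` is splatted into a database-layer function.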
@@ -221,9 +152,11 @@ def update_schedule_api(schedule_id):
 @login_required
 def delete_schedule_api(schedule_id):
     """删除定时任务"""
-    _, error_response = _get_owned_schedule_or_error(schedule_id)
-    if error_response:
-        return error_response
+    schedule = database.get_schedule_by_id(schedule_id)
+    if not schedule:
+        return jsonify({"error": "定时任务不存在"}), 404
+    if schedule["user_id"] != current_user.id:
+        return jsonify({"error": "无权访问"}), 403

     success = database.delete_user_schedule(schedule_id)
     if success:
@@ -235,11 +168,13 @@ def delete_schedule_api(schedule_id):
 @login_required
 def toggle_schedule_api(schedule_id):
     """启用/禁用定时任务"""
-    schedule, error_response = _get_owned_schedule_or_error(schedule_id)
-    if error_response:
-        return error_response
+    schedule = database.get_schedule_by_id(schedule_id)
+    if not schedule:
+        return jsonify({"error": "定时任务不存在"}), 404
+    if schedule["user_id"] != current_user.id:
+        return jsonify({"error": "无权访问"}), 403

-    data = _request_json()
+    data = request.json
     enabled = data.get("enabled", not schedule["enabled"])

     success = database.toggle_user_schedule(schedule_id, enabled)
@@ -252,11 +187,22 @@ def toggle_schedule_api(schedule_id):
 @login_required
 def run_schedule_now_api(schedule_id):
     """立即执行定时任务"""
-    schedule, error_response = _get_owned_schedule_or_error(schedule_id)
-    if error_response:
-        return error_response
+    import json
+    import threading
+    import time as time_mod
+    import uuid
+
+    schedule = database.get_schedule_by_id(schedule_id)
+    if not schedule:
+        return jsonify({"error": "定时任务不存在"}), 404
+    if schedule["user_id"] != current_user.id:
+        return jsonify({"error": "无权访问"}), 403
+
+    try:
+        account_ids = json.loads(schedule.get("account_ids", "[]") or "[]")
+    except (json.JSONDecodeError, TypeError):
+        account_ids = []

-    account_ids = _parse_schedule_account_ids(schedule.get("account_ids"))
     if not account_ids:
         return jsonify({"error": "没有配置账号"}), 400
@@ -264,7 +210,8 @@ def run_schedule_now_api(schedule_id):
     browse_type = normalize_browse_type(schedule.get("browse_type", BROWSE_TYPE_SHOULD_READ))
     enable_screenshot = schedule["enable_screenshot"]

-    _ensure_user_accounts_loaded(user_id)
+    if not safe_get_user_accounts_snapshot(user_id):
+        load_user_accounts(user_id)

     from services.state import safe_create_batch, safe_finalize_batch_after_dispatch
     from services.task_batches import _send_batch_task_email_if_configured
@@ -303,7 +250,6 @@ def run_schedule_now_api(schedule_id):
         if remaining["done"] or remaining["count"] > 0:
             return
         remaining["done"] = True
-
         execution_duration = int(time_mod.time() - execution_start_time)
         database.update_schedule_execution_log(
             log_id,
@@ -314,17 +260,19 @@ def run_schedule_now_api(schedule_id):
             status="completed",
         )

-    task_source = f"user_scheduled:{batch_id}"
     for account_id in account_ids:
         account = safe_get_account(user_id, account_id)
-        if (not account) or account.is_running:
+        if not account:
+            skipped_count += 1
+            continue
+        if account.is_running:
             skipped_count += 1
             continue

+        task_source = f"user_scheduled:{batch_id}"
         with completion_lock:
             remaining["count"] += 1
-        ok, _ = submit_account_task(
+
+        ok, msg = submit_account_task(
             user_id=user_id,
             account_id=account_id,
             browse_type=browse_type,
@@ -393,5 +341,4 @@ def delete_schedule_logs_api(schedule_id):
         deleted = database.delete_schedule_logs(schedule_id, current_user.id)
         return jsonify({"success": True, "deleted": deleted})
     except Exception as e:
-        logger.warning(f"[schedules] 清空定时任务日志失败(schedule_id={schedule_id}): {e}")
-        return jsonify({"error": "清空日志失败,请稍后重试"}), 500
+        return jsonify({"error": str(e)}), 500

View File

@@ -4,178 +4,19 @@ from __future__ import annotations
 import os
 from datetime import datetime
-from typing import Iterator

 import database
 from app_config import get_config
-from app_logger import get_logger
 from app_security import is_safe_path
-from flask import Blueprint, jsonify, request, send_from_directory
+from flask import Blueprint, jsonify, send_from_directory
 from flask_login import current_user, login_required
-from PIL import Image, ImageOps
 from services.client_log import log_to_client
 from services.time_utils import BEIJING_TZ

 config = get_config()
 SCREENSHOTS_DIR = config.SCREENSHOTS_DIR
-_IMAGE_EXTENSIONS = (".png", ".jpg", ".jpeg")
-_THUMBNAIL_DIR = os.path.join(SCREENSHOTS_DIR, ".thumbs")
-_THUMBNAIL_MAX_SIZE = (480, 270)
-_THUMBNAIL_QUALITY = 80
-try:
-    _RESAMPLE_FILTER = Image.Resampling.LANCZOS
-except AttributeError:  # Pillow<9 fallback
-    _RESAMPLE_FILTER = Image.LANCZOS

 api_screenshots_bp = Blueprint("api_screenshots", __name__)
-logger = get_logger("app")
-
-
-def _get_user_prefix(user_id: int) -> str:
-    return f"u{int(user_id)}"
-
-
-def _get_username(user_id: int) -> str:
-    user_info = database.get_user_by_id(user_id)
-    return str(user_info.get("username") or "") if user_info else ""
-
-
-def _list_all_usernames() -> list[str]:
-    users = database.get_all_users()
-    result = []
-    for row in users:
-        username = str(row.get("username") or "").strip()
-        if username:
-            result.append(username)
-    return result
-
-
-def _resolve_user_owned_prefix(
-    filename: str,
-    *,
-    user_id: int,
-    username: str,
-    all_usernames: list[str] | None = None,
-) -> str | None:
-    lower_name = filename.lower()
-    if not lower_name.endswith(_IMAGE_EXTENSIONS):
-        return None
-    # 新版命名:u{user_id}_...
-    id_prefix = _get_user_prefix(user_id)
-    if filename.startswith(id_prefix + "_"):
-        return id_prefix
-    # 兼容旧版命名:{username}_...
-    username = str(username or "").strip()
-    if not username:
-        return None
-    if all_usernames is None:
-        all_usernames = _list_all_usernames()
-    matched_usernames = [item for item in all_usernames if filename.startswith(item + "_")]
-    if not matched_usernames:
-        return None
-    # 取“最长匹配用户名”,避免 foo 越权读取 foo_bar 的截图。
-    max_len = max(len(item) for item in matched_usernames)
-    winners = [item for item in matched_usernames if len(item) == max_len]
-    if len(winners) != 1:
-        return None
-    if winners[0] != username:
-        return None
-    return winners[0]
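The removed `_resolve_user_owned_prefix` keeps backward compatibility with legacy `{username}_` file names by accepting only the unique longest matching username, so that user `foo` cannot claim `foo_bar`'s screenshots via the shared prefix. The core rule in isolation:

```python
def resolve_owner(filename, usernames):
    # Among all usernames that prefix the file as "<name>_", pick the unique
    # longest one; ambiguity (two winners of equal length) yields no owner.
    matched = [u for u in usernames if filename.startswith(u + "_")]
    if not matched:
        return None
    max_len = max(len(u) for u in matched)
    winners = [u for u in matched if len(u) == max_len]
    return winners[0] if len(winners) == 1 else None
```

With users `foo` and `foo_bar` registered, `foo_bar_shot.png` resolves to `foo_bar`, while `foo_shot.png` still resolves to `foo`; this is the check the right-hand side replaces with a plain `startswith(username_prefix + "_")` test.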
-
-
-def _iter_user_screenshot_entries(user_id: int, username: str, all_usernames: list[str]) -> Iterator[tuple[os.DirEntry, str]]:
-    if not os.path.exists(SCREENSHOTS_DIR):
-        return
-    with os.scandir(SCREENSHOTS_DIR) as entries:
-        for entry in entries:
-            if not entry.is_file():
-                continue
-            matched_prefix = _resolve_user_owned_prefix(
-                entry.name,
-                user_id=user_id,
-                username=username,
-                all_usernames=all_usernames,
-            )
-            if not matched_prefix:
-                continue
-            yield entry, matched_prefix
-
-
-def _build_display_name(filename: str, owner_prefix: str) -> str:
-    prefix = f"{owner_prefix}_"
-    if filename.startswith(prefix):
-        return filename[len(prefix) :]
-    return filename
-
-
-def _thumbnail_name(filename: str) -> str:
-    stem, _ = os.path.splitext(filename)
-    return f"{stem}.thumb.jpg"
-
-
-def _thumbnail_path(filename: str) -> str:
-    return os.path.join(_THUMBNAIL_DIR, _thumbnail_name(filename))
-
-
-def _ensure_thumbnail(source_path: str, thumb_path: str) -> bool:
-    if not os.path.exists(source_path):
-        return False
-    source_mtime = os.path.getmtime(source_path)
-    if os.path.exists(thumb_path) and os.path.getmtime(thumb_path) >= source_mtime:
-        return True
-    os.makedirs(_THUMBNAIL_DIR, exist_ok=True)
-    with Image.open(source_path) as image:
-        image = ImageOps.exif_transpose(image)
-        if image.mode != "RGB":
-            image = image.convert("RGB")
-        image.thumbnail(_THUMBNAIL_MAX_SIZE, _RESAMPLE_FILTER)
-        image.save(
-            thumb_path,
-            format="JPEG",
-            quality=_THUMBNAIL_QUALITY,
-            optimize=True,
-            progressive=True,
-        )
-    os.utime(thumb_path, (source_mtime, source_mtime))
-    return True
-
-
-def _remove_thumbnail(filename: str) -> None:
-    thumb_path = _thumbnail_path(filename)
-    if os.path.exists(thumb_path):
-        os.remove(thumb_path)
-
-
-def _parse_optional_pagination(default_limit: int = 24, *, max_limit: int = 100) -> tuple[int | None, int | None, bool]:
-    limit_raw = request.args.get("limit")
-    offset_raw = request.args.get("offset")
-    if (limit_raw is None) and (offset_raw is None):
-        return None, None, False
-    try:
-        limit = int(limit_raw if limit_raw is not None else default_limit)
-    except (ValueError, TypeError):
-        limit = default_limit
-    limit = max(1, min(limit, max_limit))
-    try:
-        offset = int(offset_raw if offset_raw is not None else 0)
-    except (ValueError, TypeError):
-        offset = 0
-    offset = max(0, offset)
-    return limit, offset, True
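`_ensure_thumbnail` above regenerates a thumbnail only when the cached copy is missing or older than the source, and stamps the finished thumbnail with the source's mtime via `os.utime` so the comparison stays stable. The freshness test on its own (file contents are irrelevant, only mtimes matter):

```python
import os
import tempfile

def thumbnail_is_fresh(source_path, thumb_path):
    # Reuse the cached thumbnail only if it exists and is at least as new
    # as the source image, mirroring the mtime check in _ensure_thumbnail.
    if not os.path.exists(thumb_path):
        return False
    return os.path.getmtime(thumb_path) >= os.path.getmtime(source_path)

# Demo: two empty files with explicitly set mtimes.
d = tempfile.mkdtemp()
src = os.path.join(d, "a.png")
thumb = os.path.join(d, "a.thumb.jpg")
open(src, "w").close()
open(thumb, "w").close()
os.utime(src, (100, 100))
os.utime(thumb, (200, 200))   # thumbnail newer -> fresh
fresh_before = thumbnail_is_fresh(src, thumb)
os.utime(src, (300, 300))     # source updated -> stale
fresh_after = thumbnail_is_fresh(src, thumb)
```

Setting the thumbnail's mtime to the source's mtime (as the removed helper does) makes `>=` hold exactly until the source changes again.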
@@ -183,49 +24,46 @@ def _parse_optional_pagination(default_limit: int = 24, *, max_limit: int = 100)
 @api_screenshots_bp.route("/api/screenshots", methods=["GET"])
 @login_required
 def get_screenshots():
     """获取当前用户的截图列表"""
     user_id = current_user.id
-    username = _get_username(user_id)
+    user_info = database.get_user_by_id(user_id)
+    username_prefix = user_info["username"] if user_info else f"user{user_id}"

     try:
         screenshots = []
-        all_usernames = _list_all_usernames()
-        for entry, matched_prefix in _iter_user_screenshot_entries(user_id, username, all_usernames):
-            filename = entry.name
-            stat = entry.stat()
-            created_time = datetime.fromtimestamp(stat.st_mtime, tz=BEIJING_TZ)
-            screenshots.append(
-                {
-                    "filename": filename,
-                    "display_name": _build_display_name(filename, matched_prefix),
-                    "size": stat.st_size,
-                    "created": created_time.strftime("%Y-%m-%d %H:%M:%S"),
-                    "_created_ts": stat.st_mtime,
-                }
-            )
-
-        screenshots.sort(key=lambda item: item.get("_created_ts", 0), reverse=True)
-        for item in screenshots:
-            item.pop("_created_ts", None)
-
-        limit, offset, paged = _parse_optional_pagination(default_limit=24, max_limit=100)
-        if paged:
-            total = len(screenshots)
-            items = screenshots[offset : offset + limit]
-            return jsonify({"items": items, "total": total, "limit": limit, "offset": offset})
-
+        if os.path.exists(SCREENSHOTS_DIR):
+            for filename in os.listdir(SCREENSHOTS_DIR):
+                if filename.lower().endswith((".png", ".jpg", ".jpeg")) and filename.startswith(username_prefix + "_"):
+                    filepath = os.path.join(SCREENSHOTS_DIR, filename)
+                    stat = os.stat(filepath)
+                    created_time = datetime.fromtimestamp(stat.st_mtime, tz=BEIJING_TZ)
+
+                    parts = filename.rsplit(".", 1)[0].split("_", 1)
+                    if len(parts) > 1:
+                        display_name = parts[1] + "." + filename.rsplit(".", 1)[1]
+                    else:
+                        display_name = filename
+
+                    screenshots.append(
+                        {
+                            "filename": filename,
+                            "display_name": display_name,
+                            "size": stat.st_size,
+                            "created": created_time.strftime("%Y-%m-%d %H:%M:%S"),
+                        }
+                    )
+
+        screenshots.sort(key=lambda x: x["created"], reverse=True)
         return jsonify(screenshots)
     except Exception as e:
-        logger.warning(f"[screenshots] 获取截图列表失败(user_id={user_id}): {e}")
-        return jsonify({"error": "获取截图列表失败"}), 500
+        return jsonify({"error": str(e)}), 500


 @api_screenshots_bp.route("/screenshots/<filename>")
 @login_required
 def serve_screenshot(filename):
     """提供截图文件访问"""
     user_id = current_user.id
-    username = _get_username(user_id)
-    if not _resolve_user_owned_prefix(filename, user_id=user_id, username=username):
+    user_info = database.get_user_by_id(user_id)
+    username_prefix = user_info["username"] if user_info else f"user{user_id}"
+
+    if not filename.startswith(username_prefix + "_"):
         return jsonify({"error": "无权访问"}), 403

     if not is_safe_path(SCREENSHOTS_DIR, filename):
@@ -234,56 +72,26 @@ def serve_screenshot(filename):
     return send_from_directory(SCREENSHOTS_DIR, filename)


-@api_screenshots_bp.route("/screenshots/thumb/<filename>")
-@login_required
-def serve_screenshot_thumbnail(filename):
-    """提供缩略图访问(失败时自动回退原图)"""
-    user_id = current_user.id
-    username = _get_username(user_id)
-    if not _resolve_user_owned_prefix(filename, user_id=user_id, username=username):
-        return jsonify({"error": "无权访问"}), 403
-    if not is_safe_path(SCREENSHOTS_DIR, filename):
-        return jsonify({"error": "非法路径"}), 403
-
-    source_path = os.path.join(SCREENSHOTS_DIR, filename)
-    if not os.path.exists(source_path):
-        return jsonify({"error": "文件不存在"}), 404
-
-    thumb_path = _thumbnail_path(filename)
-    try:
-        if _ensure_thumbnail(source_path, thumb_path) and os.path.exists(thumb_path):
-            return send_from_directory(_THUMBNAIL_DIR, os.path.basename(thumb_path), max_age=86400, conditional=True)
-    except Exception:
-        pass
-    return send_from_directory(SCREENSHOTS_DIR, filename, max_age=3600, conditional=True)
-
-
 @api_screenshots_bp.route("/api/screenshots/<filename>", methods=["DELETE"])
 @login_required
 def delete_screenshot(filename):
     """删除指定截图"""
     user_id = current_user.id
-    username = _get_username(user_id)
-    if not _resolve_user_owned_prefix(filename, user_id=user_id, username=username):
-        return jsonify({"error": "无权删除"}), 403
-    if not is_safe_path(SCREENSHOTS_DIR, filename):
-        return jsonify({"error": "非法路径"}), 403
+    user_info = database.get_user_by_id(user_id)
+    username_prefix = user_info["username"] if user_info else f"user{user_id}"
+
+    if not filename.startswith(username_prefix + "_"):
+        return jsonify({"error": "无权删除"}), 403

     try:
         filepath = os.path.join(SCREENSHOTS_DIR, filename)
         if os.path.exists(filepath):
             os.remove(filepath)
-            _remove_thumbnail(filename)
             log_to_client(f"删除截图: (unknown)", user_id)
             return jsonify({"success": True})
         return jsonify({"error": "文件不存在"}), 404
     except Exception as e:
-        logger.warning(f"[screenshots] 删除截图失败(user_id={user_id}, filename=(unknown)): {e}")
-        return jsonify({"error": "删除截图失败"}), 500
+        return jsonify({"error": str(e)}), 500
@@ -291,18 +99,19 @@ def delete_screenshot(filename):
 @api_screenshots_bp.route("/api/screenshots/clear", methods=["POST"])
 @login_required
 def clear_all_screenshots():
     """清空当前用户的所有截图"""
     user_id = current_user.id
-    username = _get_username(user_id)
+    user_info = database.get_user_by_id(user_id)
+    username_prefix = user_info["username"] if user_info else f"user{user_id}"

     try:
         deleted_count = 0
-        all_usernames = _list_all_usernames()
-        for entry, _ in _iter_user_screenshot_entries(user_id, username, all_usernames):
-            os.remove(entry.path)
-            _remove_thumbnail(entry.name)
-            deleted_count += 1
+        if os.path.exists(SCREENSHOTS_DIR):
+            for filename in os.listdir(SCREENSHOTS_DIR):
+                if filename.lower().endswith((".png", ".jpg", ".jpeg")) and filename.startswith(username_prefix + "_"):
+                    filepath = os.path.join(SCREENSHOTS_DIR, filename)
+                    os.remove(filepath)
+                    deleted_count += 1

         log_to_client(f"清理了 {deleted_count} 个截图文件", user_id)
         return jsonify({"success": True, "deleted": deleted_count})
     except Exception as e:
-        logger.warning(f"[screenshots] 清空截图失败(user_id={user_id}): {e}")
-        return jsonify({"error": "清空截图失败"}), 500
+        return jsonify({"error": str(e)}), 500

View File

@@ -2,137 +2,18 @@
 # -*- coding: utf-8 -*-
 from __future__ import annotations

-import json
-import time
-
 import database
 import email_service
 from app_logger import get_logger
 from app_security import get_rate_limit_ip, require_ip_not_locked, validate_email, validate_password
-from flask import Blueprint, jsonify, request, session
+from flask import Blueprint, jsonify, request
 from flask_login import current_user, login_required
 from routes.pages import render_app_spa_or_legacy
-from services.passkeys import (
-    MAX_PASSKEYS_PER_OWNER,
-    encode_credential_id,
-    get_credential_transports,
-    get_expected_origins,
-    get_rp_id,
-    is_challenge_valid,
-    make_registration_options,
-    normalize_device_name,
-    verify_registration,
-)
 from services.state import check_email_rate_limit, check_ip_request_rate, safe_iter_task_status_items
-from services.tasks import get_task_scheduler

 logger = get_logger("app")
 api_user_bp = Blueprint("api_user", __name__)
-
-_USER_PASSKEY_REGISTER_SESSION_KEY = "user_passkey_register_state"
-
-
-def _get_current_user_record():
-    return database.get_user_by_id(current_user.id)
-
-
-def _get_current_user_or_404():
-    user = _get_current_user_record()
-    if user:
-        return user, None
-    return None, (jsonify({"error": "用户不存在"}), 404)
-
-
-def _get_current_username(*, fallback: str) -> str:
-    user = _get_current_user_record()
-    username = (user or {}).get("username", "")
-    return username if username else fallback
-
-
-def _coerce_binary_flag(value, *, field_label: str):
-    if isinstance(value, bool):
-        value = 1 if value else 0
-    try:
-        value = int(value)
-    except Exception:
-        return None, f"{field_label}必须是0或1"
-    if value not in (0, 1):
-        return None, f"{field_label}必须是0或1"
-    return value, None
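`_coerce_binary_flag` normalizes booleans to 0/1 before the `int()` cast and rejects anything outside {0, 1} with an error message; the inline replacement at the bottom of this diff repeats the same steps for the kdocs auto-upload switch. A standalone copy of the same contract:

```python
def coerce_binary_flag(value, field_label="开关"):
    # Booleans are normalized first so True/False map cleanly to 1/0;
    # everything else must cast to exactly 0 or 1, or we return an error
    # message in the (value, error) tuple style used by the route.
    if isinstance(value, bool):
        value = 1 if value else 0
    try:
        value = int(value)
    except Exception:
        return None, f"{field_label}必须是0或1"
    if value not in (0, 1):
        return None, f"{field_label}必须是0或1"
    return value, None
```

String inputs like `"1"` are accepted (the cast succeeds), while `2`, `"x"`, or `None` are rejected, which keeps the database column strictly binary.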
-
-
-def _parse_credential_payload(data: dict) -> dict | None:
-    credential = data.get("credential")
-    if isinstance(credential, dict):
-        return credential
-    if isinstance(credential, str):
-        try:
-            parsed = json.loads(credential)
-            return parsed if isinstance(parsed, dict) else None
-        except Exception:
-            return None
-    return None
-
-
-def _truncate_text(value, max_len: int = 300) -> str:
-    text = str(value or "").strip()
-    if len(text) > max_len:
-        return f"{text[:max_len]}..."
-    return text
-
-
-def _check_bind_email_rate_limits(email: str):
-    client_ip = get_rate_limit_ip()
-    allowed, error_msg = check_ip_request_rate(client_ip, "email")
-    if not allowed:
-        return False, error_msg, 429
-    allowed, error_msg = check_email_rate_limit(email, "bind_email")
-    if not allowed:
-        return False, error_msg, 429
-    return True, "", 200
-
-
-def _render_verify_bind_failed(*, title: str, error_message: str):
-    spa_initial_state = {
-        "page": "verify_result",
-        "success": False,
-        "title": title,
-        "error_message": error_message,
-        "primary_label": "返回登录",
-        "primary_url": "/login",
-    }
-    return render_app_spa_or_legacy(
-        "verify_failed.html",
-        legacy_context={"error_message": error_message},
-        spa_initial_state=spa_initial_state,
-    )
-
-
-def _render_verify_bind_success(email: str):
-    spa_initial_state = {
-        "page": "verify_result",
-        "success": True,
-        "title": "邮箱绑定成功",
-        "message": f"邮箱 {email} 已成功绑定到您的账号!",
-        "primary_label": "返回登录",
-        "primary_url": "/login",
-        "redirect_url": "/login",
-        "redirect_seconds": 5,
-    }
-    return render_app_spa_or_legacy("verify_success.html", spa_initial_state=spa_initial_state)
-
-
-def _get_current_running_count(user_id: int) -> int:
-    try:
-        queue_snapshot = get_task_scheduler().get_queue_state_snapshot() or {}
-        running_by_user = queue_snapshot.get("running_by_user") or {}
-        return int(running_by_user.get(int(user_id), running_by_user.get(str(user_id), 0)) or 0)
-    except Exception:
-        current_running = 0
-        for _, info in safe_iter_task_status_items():
-            if info.get("user_id") == user_id and info.get("status") == "运行中":
-                current_running += 1
-        return current_running
-
-
 @api_user_bp.route("/api/announcements/active", methods=["GET"])
@@ -196,7 +77,8 @@ def submit_feedback():
if len(description) > 2000: if len(description) > 2000:
return jsonify({"error": "描述不能超过2000个字符"}), 400 return jsonify({"error": "描述不能超过2000个字符"}), 400
username = _get_current_username(fallback=f"用户{current_user.id}") user_info = database.get_user_by_id(current_user.id)
username = user_info["username"] if user_info else f"用户{current_user.id}"
feedback_id = database.create_bug_feedback( feedback_id = database.create_bug_feedback(
user_id=current_user.id, user_id=current_user.id,
@@ -222,7 +104,8 @@ def get_my_feedbacks():
def get_current_user_vip(): def get_current_user_vip():
"""获取当前用户VIP信息""" """获取当前用户VIP信息"""
vip_info = database.get_user_vip_info(current_user.id) vip_info = database.get_user_vip_info(current_user.id)
vip_info["username"] = _get_current_username(fallback="Unknown") user_info = database.get_user_by_id(current_user.id)
vip_info["username"] = user_info["username"] if user_info else "Unknown"
return jsonify(vip_info) return jsonify(vip_info)
@@ -241,9 +124,9 @@ def change_user_password():
if not is_valid: if not is_valid:
return jsonify({"error": error_msg}), 400 return jsonify({"error": error_msg}), 400
user, error_response = _get_current_user_or_404() user = database.get_user_by_id(current_user.id)
if error_response: if not user:
return error_response return jsonify({"error": "用户不存在"}), 404
username = user.get("username", "") username = user.get("username", "")
if not username or not database.verify_user(username, current_password): if not username or not database.verify_user(username, current_password):
@@ -258,9 +141,9 @@ def change_user_password():
 @login_required
 def get_user_email():
     """获取当前用户的邮箱信息"""
-    user, error_response = _get_current_user_or_404()
-    if error_response:
-        return error_response
+    user = database.get_user_by_id(current_user.id)
+    if not user:
+        return jsonify({"error": "用户不存在"}), 404
     return jsonify({"email": user.get("email", ""), "email_verified": user.get("email_verified", False)})
@@ -269,12 +152,10 @@ def get_user_email():
 @login_required
 def get_user_kdocs_settings():
     """获取当前用户的金山文档设置"""
-    settings = database.get_user_kdocs_settings(current_user.id) or {}
-    cfg = database.get_system_config() or {}
-    default_unit = (cfg.get("kdocs_default_unit") or "").strip() or "道县"
-    kdocs_unit = (settings.get("kdocs_unit") or "").strip() or default_unit
-    kdocs_auto_upload = 1 if int(settings.get("kdocs_auto_upload", 0) or 0) == 1 else 0
-    return jsonify({"kdocs_unit": kdocs_unit, "kdocs_auto_upload": kdocs_auto_upload})
+    settings = database.get_user_kdocs_settings(current_user.id)
+    if not settings:
+        return jsonify({"kdocs_unit": "", "kdocs_auto_upload": 0})
+    return jsonify(settings)
 
 
 @api_user_bp.route("/api/user/kdocs", methods=["POST"])
@@ -291,9 +172,14 @@ def update_user_kdocs_settings():
         return jsonify({"error": "县区长度不能超过50"}), 400
     if kdocs_auto_upload is not None:
-        kdocs_auto_upload, parse_error = _coerce_binary_flag(kdocs_auto_upload, field_label="自动上传开关")
-        if parse_error:
-            return jsonify({"error": parse_error}), 400
+        if isinstance(kdocs_auto_upload, bool):
+            kdocs_auto_upload = 1 if kdocs_auto_upload else 0
+        try:
+            kdocs_auto_upload = int(kdocs_auto_upload)
+        except Exception:
+            return jsonify({"error": "自动上传开关必须是0或1"}), 400
+        if kdocs_auto_upload not in (0, 1):
+            return jsonify({"error": "自动上传开关必须是0或1"}), 400
 
     if not database.update_user_kdocs_settings(
         current_user.id,
@@ -302,14 +188,8 @@ def update_user_kdocs_settings():
     ):
         return jsonify({"error": "更新失败"}), 400
 
-    settings = database.get_user_kdocs_settings(current_user.id) or {}
-    cfg = database.get_system_config() or {}
-    default_unit = (cfg.get("kdocs_default_unit") or "").strip() or "道县"
-    response_settings = {
-        "kdocs_unit": (settings.get("kdocs_unit") or "").strip() or default_unit,
-        "kdocs_auto_upload": 1 if int(settings.get("kdocs_auto_upload", 0) or 0) == 1 else 0,
-    }
-    return jsonify({"success": True, "settings": response_settings})
+    settings = database.get_user_kdocs_settings(current_user.id) or {"kdocs_unit": "", "kdocs_auto_upload": 0}
+    return jsonify({"success": True, "settings": settings})
 
 
 @api_user_bp.route("/api/user/bind-email", methods=["POST"])
@@ -327,9 +207,13 @@ def bind_user_email():
     if not is_valid:
         return jsonify({"error": error_msg}), 400
 
-    allowed, error_msg, status_code = _check_bind_email_rate_limits(email)
+    client_ip = get_rate_limit_ip()
+    allowed, error_msg = check_ip_request_rate(client_ip, "email")
     if not allowed:
-        return jsonify({"error": error_msg}), status_code
+        return jsonify({"error": error_msg}), 429
+
+    allowed, error_msg = check_email_rate_limit(email, "bind_email")
+    if not allowed:
+        return jsonify({"error": error_msg}), 429
 
     settings = email_service.get_email_settings()
     if not settings.get("enabled", False):
@@ -339,9 +223,9 @@ def bind_user_email():
     if existing_user and existing_user["id"] != current_user.id:
         return jsonify({"error": "该邮箱已被其他用户绑定"}), 400
 
-    user, error_response = _get_current_user_or_404()
-    if error_response:
-        return error_response
+    user = database.get_user_by_id(current_user.id)
+    if not user:
+        return jsonify({"error": "用户不存在"}), 404
 
     if user.get("email") == email and user.get("email_verified"):
         return jsonify({"error": "该邮箱已绑定并验证"}), 400
@@ -356,30 +240,63 @@ def bind_user_email():
 @api_user_bp.route("/api/verify-bind-email/<token>")
 def verify_bind_email(token):
     """验证邮箱绑定Token"""
-    result = email_service.verify_bind_email_token(token, consume=False)
+    result = email_service.verify_bind_email_token(token)
     if result:
-        token_id = result["token_id"]
         user_id = result["user_id"]
         email = result["email"]
         if database.update_user_email(user_id, email, verified=True):
-            if not email_service.consume_email_token(token_id):
-                logger.warning(f"邮箱绑定成功但Token消费失败: user_id={user_id}")
-            return _render_verify_bind_success(email)
-        return _render_verify_bind_failed(title="绑定失败", error_message="邮箱绑定失败,请重试")
-    return _render_verify_bind_failed(title="链接无效", error_message="验证链接已过期或无效,请重新发送验证邮件")
+            spa_initial_state = {
+                "page": "verify_result",
+                "success": True,
+                "title": "邮箱绑定成功",
+                "message": f"邮箱 {email} 已成功绑定到您的账号!",
+                "primary_label": "返回登录",
+                "primary_url": "/login",
+                "redirect_url": "/login",
+                "redirect_seconds": 5,
+            }
+            return render_app_spa_or_legacy("verify_success.html", spa_initial_state=spa_initial_state)
+        error_message = "邮箱绑定失败,请重试"
+        spa_initial_state = {
+            "page": "verify_result",
+            "success": False,
+            "title": "绑定失败",
+            "error_message": error_message,
+            "primary_label": "返回登录",
+            "primary_url": "/login",
+        }
+        return render_app_spa_or_legacy(
+            "verify_failed.html",
+            legacy_context={"error_message": error_message},
+            spa_initial_state=spa_initial_state,
+        )
+    error_message = "验证链接已过期或无效,请重新发送验证邮件"
+    spa_initial_state = {
+        "page": "verify_result",
+        "success": False,
+        "title": "链接无效",
+        "error_message": error_message,
+        "primary_label": "返回登录",
+        "primary_url": "/login",
+    }
+    return render_app_spa_or_legacy(
+        "verify_failed.html",
+        legacy_context={"error_message": error_message},
+        spa_initial_state=spa_initial_state,
+    )
 
 
 @api_user_bp.route("/api/user/unbind-email", methods=["POST"])
 @login_required
 def unbind_user_email():
     """解绑用户邮箱"""
-    user, error_response = _get_current_user_or_404()
-    if error_response:
-        return error_response
+    user = database.get_user_by_id(current_user.id)
+    if not user:
+        return jsonify({"error": "用户不存在"}), 404
 
     if not user.get("email"):
         return jsonify({"error": "当前未绑定邮箱"}), 400
@@ -409,176 +326,6 @@ def update_user_email_notify():
     return jsonify({"error": "更新失败"}), 500
 
 
-@api_user_bp.route("/api/user/passkeys", methods=["GET"])
-@login_required
-def list_user_passkeys():
-    """获取当前用户绑定的 Passkey 设备列表。"""
-    rows = database.list_passkeys("user", int(current_user.id))
-    items = []
-    for row in rows:
-        credential_id = str(row.get("credential_id") or "")
-        preview = ""
-        if credential_id:
-            preview = f"{credential_id[:8]}...{credential_id[-6:]}" if len(credential_id) > 16 else credential_id
-        items.append(
-            {
-                "id": int(row.get("id")),
-                "device_name": str(row.get("device_name") or ""),
-                "credential_id_preview": preview,
-                "created_at": row.get("created_at"),
-                "last_used_at": row.get("last_used_at"),
-                "transports": str(row.get("transports") or ""),
-            }
-        )
-    return jsonify({"items": items, "limit": MAX_PASSKEYS_PER_OWNER})
-
-
-@api_user_bp.route("/api/user/passkeys/register/options", methods=["POST"])
-@login_required
-def user_passkey_register_options():
-    """当前登录用户创建 Passkey:下发 registration challenge。"""
-    user, error_response = _get_current_user_or_404()
-    if error_response:
-        return error_response
-    count = database.count_passkeys("user", int(current_user.id))
-    if count >= MAX_PASSKEYS_PER_OWNER:
-        return jsonify({"error": f"最多可绑定{MAX_PASSKEYS_PER_OWNER}台设备"}), 400
-    data = request.get_json(silent=True) or {}
-    device_name = normalize_device_name(data.get("device_name"))
-    existing = database.list_passkeys("user", int(current_user.id))
-    exclude_credential_ids = [str(item.get("credential_id") or "").strip() for item in existing if item.get("credential_id")]
-    try:
-        rp_id = get_rp_id(request)
-        expected_origins = get_expected_origins(request)
-    except Exception as e:
-        logger.warning(f"[passkey] 用户注册 options 失败(user_id={current_user.id}): {e}")
-        return jsonify({"error": "Passkey配置异常,请联系管理员"}), 500
-    try:
-        options = make_registration_options(
-            rp_id=rp_id,
-            rp_name="知识管理平台",
-            user_name=str(user.get("username") or f"user-{current_user.id}"),
-            user_display_name=str(user.get("username") or f"user-{current_user.id}"),
-            user_id_bytes=f"user:{int(current_user.id)}".encode("utf-8"),
-            exclude_credential_ids=exclude_credential_ids,
-        )
-    except Exception as e:
-        logger.warning(f"[passkey] 用户注册 options 构建失败(user_id={current_user.id}): {e}")
-        return jsonify({"error": "生成Passkey挑战失败"}), 500
-    challenge = str(options.get("challenge") or "").strip()
-    if not challenge:
-        return jsonify({"error": "生成Passkey挑战失败"}), 500
-    session[_USER_PASSKEY_REGISTER_SESSION_KEY] = {
-        "user_id": int(current_user.id),
-        "challenge": challenge,
-        "rp_id": rp_id,
-        "expected_origins": expected_origins,
-        "device_name": device_name,
-        "created_at": time.time(),
-    }
-    session.modified = True
-    return jsonify({"publicKey": options, "limit": MAX_PASSKEYS_PER_OWNER})
-
-
-@api_user_bp.route("/api/user/passkeys/register/verify", methods=["POST"])
-@login_required
-def user_passkey_register_verify():
-    """当前登录用户创建 Passkey:校验 attestation 并落库。"""
-    state = session.get(_USER_PASSKEY_REGISTER_SESSION_KEY) or {}
-    if not state:
-        return jsonify({"error": "Passkey挑战不存在或已过期,请重试"}), 400
-    if int(state.get("user_id") or 0) != int(current_user.id):
-        return jsonify({"error": "Passkey挑战与当前用户不匹配"}), 400
-    if not is_challenge_valid(state.get("created_at")):
-        session.pop(_USER_PASSKEY_REGISTER_SESSION_KEY, None)
-        return jsonify({"error": "Passkey挑战已过期,请重试"}), 400
-    data = request.get_json(silent=True) or {}
-    credential = _parse_credential_payload(data)
-    if not credential:
-        return jsonify({"error": "Passkey参数缺失"}), 400
-    count = database.count_passkeys("user", int(current_user.id))
-    if count >= MAX_PASSKEYS_PER_OWNER:
-        session.pop(_USER_PASSKEY_REGISTER_SESSION_KEY, None)
-        return jsonify({"error": f"最多可绑定{MAX_PASSKEYS_PER_OWNER}台设备"}), 400
-    try:
-        verified = verify_registration(
-            credential=credential,
-            expected_challenge=str(state.get("challenge") or ""),
-            expected_rp_id=str(state.get("rp_id") or ""),
-            expected_origins=list(state.get("expected_origins") or []),
-        )
-    except Exception as e:
-        logger.warning(f"[passkey] 用户注册验签失败(user_id={current_user.id}): {e}")
-        return jsonify({"error": "Passkey验证失败,请重试"}), 400
-    credential_id = encode_credential_id(verified.credential_id)
-    public_key = encode_credential_id(verified.credential_public_key)
-    transports = get_credential_transports(credential)
-    device_name = normalize_device_name(data.get("device_name") if "device_name" in data else state.get("device_name"))
-    aaguid = str(verified.aaguid or "")
-    created_id = database.create_passkey(
-        "user",
-        int(current_user.id),
-        credential_id=credential_id,
-        public_key=public_key,
-        sign_count=int(verified.sign_count or 0),
-        device_name=device_name,
-        transports=transports,
-        aaguid=aaguid,
-    )
-    if not created_id:
-        return jsonify({"error": "该Passkey已绑定或保存失败"}), 400
-    session.pop(_USER_PASSKEY_REGISTER_SESSION_KEY, None)
-    return jsonify({"success": True, "id": int(created_id), "device_name": device_name})
-
-
-@api_user_bp.route("/api/user/passkeys/<int:passkey_id>", methods=["DELETE"])
-@login_required
-def delete_user_passkey(passkey_id):
-    """删除当前用户绑定的 Passkey 设备。"""
-    ok = database.delete_passkey("user", int(current_user.id), int(passkey_id))
-    if ok:
-        return jsonify({"success": True})
-    return jsonify({"error": "设备不存在或已删除"}), 404
-
-
-@api_user_bp.route("/api/user/passkeys/client-error", methods=["POST"])
-@login_required
-def report_user_passkey_client_error():
-    """上报浏览器端 Passkey 失败详情,便于排查兼容性问题。"""
-    data = request.get_json(silent=True) or {}
-    error_name = _truncate_text(data.get("name"), 120)
-    error_message = _truncate_text(data.get("message"), 400)
-    error_code = _truncate_text(data.get("code"), 120)
-    ua = _truncate_text(data.get("user_agent") or request.headers.get("User-Agent", ""), 300)
-    stage = _truncate_text(data.get("stage"), 80)
-    source = _truncate_text(data.get("source"), 80)
-    logger.warning(
-        "[passkey][client-error][user] user_id=%s stage=%s source=%s name=%s code=%s message=%s ua=%s",
-        current_user.id,
-        stage or "-",
-        source or "-",
-        error_name or "-",
-        error_code or "-",
-        error_message or "-",
-        ua or "-",
-    )
-    return jsonify({"success": True})
 
 
 @api_user_bp.route("/api/run_stats", methods=["GET"])
 @login_required
 def get_run_stats():
@@ -587,7 +334,10 @@ def get_run_stats():
     stats = database.get_user_run_stats(user_id)
 
-    current_running = _get_current_running_count(user_id)
+    current_running = 0
+    for _, info in safe_iter_task_status_items():
+        if info.get("user_id") == user_id and info.get("status") == "运行中":
+            current_running += 1
 
     return jsonify(
         {
@@ -621,14 +371,6 @@ def get_kdocs_status_for_user():
     login_required_flag = status.get("login_required", False)
     last_login_ok = status.get("last_login_ok")
 
-    # 重启后首次查询时,状态可能还是 None,这里做一次轻量实时校验
-    if last_login_ok is None:
-        live_status = kdocs.refresh_login_status()
-        if live_status.get("success"):
-            logged_in = bool(live_status.get("logged_in"))
-            login_required_flag = not logged_in
-            last_login_ok = logged_in
-
     # 判断是否在线
     is_online = not login_required_flag and last_login_ok is True
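Several hunks above replace calls to a `_get_current_user_or_404()` helper with inline `database.get_user_by_id()` lookups. The helper's `(user, error_response)` tuple convention, as inferred from those call sites, can be sketched as follows — the body here is hypothetical and parameterized so it runs outside a Flask request context (the real helper reads `current_user` and returns a `jsonify()`-ed response):

```python
# Hypothetical sketch of the (user, error_response) convention seen in the
# call sites above; lookup/user_id are explicit only for illustration.
def get_user_or_404(lookup, user_id):
    user = lookup(user_id)
    if not user:
        # Stand-in for (jsonify({"error": "用户不存在"}), 404) in the real code.
        return None, ({"error": "用户不存在"}, 404)
    return user, None


users = {1: {"username": "alice"}}
user, error_response = get_user_or_404(users.get, 1)
assert error_response is None and user["username"] == "alice"
```

The benefit of the tuple shape is that every route can early-return the prepared error response with the same two-line pattern the deleted code used.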

View File

@@ -31,7 +31,7 @@ def admin_required(f):
             if is_api:
                 return jsonify({"error": "需要管理员权限"}), 403
             return redirect(url_for("pages.admin_login_page"))
-        logger.debug(f"[admin_required] 管理员 {session.get('admin_username')} 访问 {request.path}")
+        logger.info(f"[admin_required] 管理员 {session.get('admin_username')} 访问 {request.path}")
         return f(*args, **kwargs)
     return decorated_function

View File

@@ -2,80 +2,12 @@
 # -*- coding: utf-8 -*-
 from __future__ import annotations
 
-import os
-import time
 from flask import Blueprint, jsonify
 import database
-import db_pool
-from services.request_metrics import get_request_metrics_snapshot
 from services.time_utils import get_beijing_now
 
 health_bp = Blueprint("health", __name__)
 
-_PROCESS_START_TS = time.time()
-_INCLUDE_HEALTH_METRICS = str(os.environ.get("HEALTH_INCLUDE_METRICS", "0")).strip().lower() in {
-    "1",
-    "true",
-    "yes",
-    "on",
-}
-_EXPOSE_HEALTH_ERRORS = str(os.environ.get("HEALTH_EXPOSE_ERRORS", "0")).strip().lower() in {
-    "1",
-    "true",
-    "yes",
-    "on",
-}
-
-
-def _build_runtime_metrics() -> dict:
-    metrics = {
-        "uptime_seconds": max(0, int(time.time() - _PROCESS_START_TS)),
-    }
-    try:
-        pool_stats = db_pool.get_pool_stats() or {}
-        metrics["db_pool"] = {
-            "pool_size": int(pool_stats.get("pool_size", 0) or 0),
-            "available": int(pool_stats.get("available", 0) or 0),
-            "in_use": int(pool_stats.get("in_use", 0) or 0),
-        }
-    except Exception:
-        pass
-    try:
-        import psutil
-
-        proc = psutil.Process(os.getpid())
-        with proc.oneshot():
-            mem_info = proc.memory_info()
-            metrics["process"] = {
-                "rss_mb": round(float(mem_info.rss) / 1024 / 1024, 2),
-                "cpu_percent": round(float(proc.cpu_percent(interval=None)), 2),
-                "threads": int(proc.num_threads()),
-            }
-    except Exception:
-        pass
-    try:
-        from services import tasks as tasks_module
-
-        scheduler = getattr(tasks_module, "_task_scheduler", None)
-        if scheduler is not None:
-            queue_snapshot = scheduler.get_queue_state_snapshot() or {}
-            metrics["task_queue"] = {
-                "pending_total": int(queue_snapshot.get("pending_total", 0) or 0),
-                "running_total": int(queue_snapshot.get("running_total", 0) or 0),
-            }
-    except Exception:
-        pass
-    try:
-        metrics["requests"] = get_request_metrics_snapshot()
-    except Exception:
-        pass
-    return metrics
-
-
 @health_bp.route("/health", methods=["GET"])
@@ -87,10 +19,7 @@ def health_check():
         database.get_system_config()
     except Exception as e:
         db_ok = False
-        if _EXPOSE_HEALTH_ERRORS:
-            db_error = f"{type(e).__name__}: {e}"
-        else:
-            db_error = "db_unavailable"
+        db_error = f"{type(e).__name__}: {e}"
 
     payload = {
         "ok": db_ok,
@@ -98,7 +27,5 @@ def health_check():
         "db_ok": db_ok,
         "db_error": db_error,
     }
-    if _INCLUDE_HEALTH_METRICS:
-        payload["metrics"] = _build_runtime_metrics()
     return jsonify(payload), (200 if db_ok else 500)
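The removed side of this file gates metrics and error detail behind `HEALTH_INCLUDE_METRICS` / `HEALTH_EXPOSE_ERRORS`, parsed with a truthy-string check. That check, reproduced standalone:

```python
import os


def env_flag(name: str, default: str = "0") -> bool:
    # Same truthy set as _INCLUDE_HEALTH_METRICS / _EXPOSE_HEALTH_ERRORS above:
    # any of "1", "true", "yes", "on" (case-insensitive, whitespace-trimmed).
    return str(os.environ.get(name, default)).strip().lower() in {"1", "true", "yes", "on"}


os.environ["HEALTH_INCLUDE_METRICS"] = " True "
assert env_flag("HEALTH_INCLUDE_METRICS") is True
assert env_flag("HEALTH_DEMO_UNSET_FLAG") is False  # unset -> default "0"
```

Keeping the errors off by default (as the removed code does) avoids leaking exception text from `/health` to unauthenticated probes such as the Docker `curl` health check mentioned in the commit message.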

View File

@@ -6,7 +6,7 @@ import json
 import os
 from typing import Optional
 
-from flask import Blueprint, current_app, redirect, render_template, session, url_for
+from flask import Blueprint, current_app, redirect, render_template, request, session, url_for
 from flask_login import current_user, login_required
 
 from routes.decorators import admin_required
@@ -15,45 +15,10 @@ from services.runtime import get_logger
 pages_bp = Blueprint("pages", __name__)
 
 
-def _collect_entry_css_files(manifest: dict, entry_name: str) -> list[str]:
-    css_files: list[str] = []
-    seen_css: set[str] = set()
-    visited: set[str] = set()
-
-    def _append_css(entry_obj: dict) -> None:
-        for css_file in entry_obj.get("css") or []:
-            css_path = str(css_file or "").strip()
-            if not css_path or css_path in seen_css:
-                continue
-            seen_css.add(css_path)
-            css_files.append(css_path)
-
-    def _walk_manifest_key(manifest_key: str) -> None:
-        key = str(manifest_key or "").strip()
-        if not key or key in visited:
-            return
-        visited.add(key)
-        entry_obj = manifest.get(key)
-        if not isinstance(entry_obj, dict):
-            return
-        _append_css(entry_obj)
-        for imported_key in entry_obj.get("imports") or []:
-            _walk_manifest_key(imported_key)
-
-    entry = manifest.get(entry_name) or {}
-    if isinstance(entry, dict):
-        _append_css(entry)
-        for imported_key in entry.get("imports") or []:
-            _walk_manifest_key(imported_key)
-    return css_files
-
-
 def render_app_spa_or_legacy(
     legacy_template_name: str,
     legacy_context: Optional[dict] = None,
     spa_initial_state: Optional[dict] = None,
-    spa_entry_name: str = "index.html",
 ):
     """渲染前台 Vue SPA(构建产物位于 static/app);失败则回退旧模板。"""
     logger = get_logger()
@@ -63,9 +28,9 @@ def render_app_spa_or_legacy(
         with open(manifest_path, "r", encoding="utf-8") as f:
             manifest = json.load(f)
 
-        entry = manifest.get(spa_entry_name) or {}
+        entry = manifest.get("index.html") or {}
         js_file = entry.get("file")
-        css_files = _collect_entry_css_files(manifest, spa_entry_name)
+        css_files = entry.get("css") or []
 
         if not js_file:
             logger.warning(f"[app_spa] manifest缺少入口文件: {manifest_path}")
@@ -107,6 +72,13 @@ def _get_asset_build_id(static_root: str, rel_paths: list[str]) -> Optional[str]
     return str(int(max(mtimes)))
 
 
+def _is_legacy_admin_user_agent(user_agent: str) -> bool:
+    if not user_agent:
+        return False
+    ua = user_agent.lower()
+    return "msie" in ua or "trident/" in ua
+
+
 @pages_bp.route("/")
 def index():
     """主页 - 重定向到登录或应用"""
@@ -118,7 +90,7 @@ def index():
 @pages_bp.route("/login")
 def login_page():
     """登录页面"""
-    return render_app_spa_or_legacy("login.html", spa_entry_name="login.html")
+    return render_app_spa_or_legacy("login.html")
 
 
 @pages_bp.route("/register")
@@ -153,6 +125,8 @@ def admin_login_page():
 @admin_required
 def admin_page():
     """后台管理页面"""
+    if request.args.get("legacy") == "1" or _is_legacy_admin_user_agent(request.headers.get("User-Agent", "")):
+        return render_template("admin_legacy.html")
     logger = get_logger()
     manifest_path = os.path.join(current_app.root_path, "static", "admin", ".vite", "manifest.json")
     try:
@@ -164,8 +138,8 @@ def admin_page():
         css_files = entry.get("css") or []
 
         if not js_file:
-            logger.error(f"[admin_spa] manifest缺少入口文件: {manifest_path}")
-            return "后台前端资源缺失,请重新构建管理端", 503
+            logger.warning(f"[admin_spa] manifest缺少入口文件: {manifest_path}")
+            return render_template("admin_legacy.html")
 
         admin_spa_js_file = f"admin/{js_file}"
         admin_spa_css_files = [f"admin/{p}" for p in css_files]
@@ -181,8 +155,8 @@ def admin_page():
             admin_spa_build_id=admin_spa_build_id,
         )
     except FileNotFoundError:
-        logger.error(f"[admin_spa] 未找到manifest: {manifest_path}")
-        return "后台前端资源未构建,请联系管理员", 503
+        logger.warning(f"[admin_spa] 未找到manifest: {manifest_path},回退旧版后台模板")
+        return render_template("admin_legacy.html")
     except Exception as e:
         logger.error(f"[admin_spa] 加载manifest失败: {e}")
-        return "后台页面加载失败,请稍后重试", 500
+        return render_template("admin_legacy.html")
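The removed `_collect_entry_css_files` helper above walks a Vite manifest's transitive `imports` so shared-chunk CSS is emitted once, in discovery order. A condensed standalone illustration of the same technique — the toy manifest content below is invented for the example:

```python
def collect_entry_css(manifest: dict, entry_name: str) -> list[str]:
    # Condensed form of the removed _collect_entry_css_files: depth-first walk
    # over each chunk's "imports", collecting every "css" path exactly once.
    css, seen, visited = [], set(), set()

    def walk(key: str) -> None:
        if not key or key in visited:
            return
        visited.add(key)
        entry = manifest.get(key)
        if not isinstance(entry, dict):
            return
        for path in entry.get("css") or []:
            if path and path not in seen:
                seen.add(path)
                css.append(path)
        for imported in entry.get("imports") or []:
            walk(imported)

    walk(entry_name)
    return css


# Toy manifest: the shared "_vendor" chunk is imported twice but its CSS
# appears only once in the result.
manifest = {
    "index.html": {"file": "index.js", "css": ["index.css"], "imports": ["_shared", "_vendor"]},
    "_shared": {"css": ["shared.css"], "imports": ["_vendor"]},
    "_vendor": {"css": ["vendor.css"]},
}
assert collect_entry_css(manifest, "index.html") == ["index.css", "shared.css", "vendor.css"]
```

Without the recursive walk (the new `entry.get("css") or []` path), CSS emitted for imported chunks would be missed whenever the entry splits shared styles into separate chunks.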

View File

@@ -1,60 +0,0 @@
-# 健康监控(邮件版)
-
-本目录提供 `health_email_monitor.py`,通过调用 `/health` 接口并使用**容器内已有邮件配置**发告警邮件。
-
-## 1) 快速试跑
-
-```bash
-cd /root/zsglpt
-python3 scripts/health_email_monitor.py \
-  --to 你的告警邮箱@example.com \
-  --container knowledge-automation-multiuser \
-  --url http://127.0.0.1:51232/health \
-  --dry-run
-```
-
-去掉 `--dry-run` 即会实际发邮件。
-
-## 2) 建议 cron(每分钟)
-
-```bash
-* * * * * cd /root/zsglpt && /usr/bin/python3 scripts/health_email_monitor.py \
-  --to 你的告警邮箱@example.com \
-  --container knowledge-automation-multiuser \
-  --url http://127.0.0.1:51232/health \
-  >> /root/zsglpt/logs/health_monitor.log 2>&1
-```
-
-## 3) 支持的规则
-
-- `service_down`:健康接口请求失败(立即告警)
-- `health_fail`:返回 `ok/db_ok` 异常或 HTTP 5xx(立即告警)
-- `db_pool_exhausted`:连接池耗尽(默认连续 3 次才告警)
-- `queue_backlog_high`:任务堆积过高(默认 `pending_total >= 50` 且连续 5 次)
-
-脚本支持恢复通知(规则恢复正常会发“恢复”邮件)。
-
-## 4) 常用参数
-
-- `--to`:收件人(必填)
-- `--container`:Docker 容器名(默认 `knowledge-automation-multiuser`)
-- `--url`:健康地址(默认 `http://127.0.0.1:51232/health`)
-- `--state-file`:状态文件路径(默认 `/tmp/zsglpt_health_monitor_state.json`)
-- `--remind-seconds`:重复告警间隔(默认 3600 秒)
-- `--queue-threshold`:队列告警阈值(默认 50)
-- `--queue-streak`:队列连续次数阈值(默认 5)
-- `--db-pool-streak`:连接池连续次数阈值(默认 3)
-
-## 5) 环境变量方式(可选)
-
-也可不用命令行参数,改用环境变量:
-
-- `MONITOR_EMAIL_TO`
-- `MONITOR_DOCKER_CONTAINER`
-- `HEALTH_URL`
-- `MONITOR_STATE_FILE`
-- `MONITOR_REMIND_SECONDS`
-- `MONITOR_QUEUE_THRESHOLD`
-- `MONITOR_QUEUE_STREAK`
-- `MONITOR_DB_POOL_STREAK`
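The deleted monitor's `db_pool_exhausted` and `queue_backlog_high` rules fire only after N consecutive bad checks (3 and 5 by default). A minimal sketch of that consecutive-streak logic — the function and state shape here are illustrative, not the script's actual implementation:

```python
def update_streak(state: dict, rule: str, bad: bool, required: int) -> bool:
    # Count consecutive bad checks per rule; a single good check resets the
    # streak. Fire once the streak reaches the threshold, e.g. required=3
    # for db_pool_exhausted, required=5 for queue_backlog_high.
    streak = state.get(rule, 0) + 1 if bad else 0
    state[rule] = streak
    return streak >= required


state = {}
# True = pending_total >= threshold on that check; one good check resets.
checks = [True, True, False, True, True, True]
fired = [update_streak(state, "queue_backlog_high", bad, 3) for bad in checks]
assert fired == [False, False, False, False, False, True]
```

Persisting `state` between cron runs (the script uses `--state-file` for this) is what makes per-minute invocations behave like one long-running monitor, and the same state can record when an alert was last sent to drive `--remind-seconds` and recovery mails.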

Some files were not shown because too many files have changed in this diff.