From Zero to One: Building a Face Recognition System with Facenet-PyTorch and a Custom Dataset

张开发
2026/4/17 22:50:26 · 15 min read


## 1. Environment Setup and Installation

The first time I worked on face recognition, the documentation for the various frameworks and libraries made my head spin. Then I discovered the open-source Facenet-PyTorch project, which is particularly beginner-friendly: it packages the complex face detection and feature extraction machinery, so we only need to focus on our own business logic.

Facenet-PyTorch has two core components: the MTCNN face detector and the InceptionResnetV1 feature extraction network. MTCNN acts like a smart camera that automatically locates faces in an image, while InceptionResnetV1 works like the brain's visual cortex, converting a face image into a 512-dimensional feature vector. Used together, the two components provide end-to-end face recognition.

There are three ways to install it. For newcomers, I recommend starting with pip:

```bash
pip install facenet-pytorch
```

If you run into network issues, try the Tsinghua mirror:

```bash
pip install facenet-pytorch -i https://pypi.tuna.tsinghua.edu.cn/simple
```

Advanced users can clone the GitHub repository directly, which lets you inspect and modify the source at any time:

```bash
git clone https://github.com/timesler/facenet-pytorch.git
cd facenet-pytorch
pip install -e .
```

While building an access-control system for my university lab recently, I found Colab especially convenient for prototyping. Remember to prefix the command with an exclamation mark when running it in a notebook:

```bash
!pip install facenet-pytorch
```

After installation, run a quick script to verify the environment:

```python
from facenet_pytorch import MTCNN

mtcnn = MTCNN()
print("MTCNN loaded successfully")
```

## 2. Data Preparation and Preprocessing

When I built a face-recognition door system for a residential community last year, data collection was the biggest headache. I eventually found that recording a video on a phone and sampling frames works well: have the subject turn their head slowly in front of the camera, then use OpenCV to extract one image every 10 frames. This yields facial data from a range of angles.

I recommend organizing the data directory like this:

```text
data/
├── train/
│   ├── person1/
│   │   ├── img1.jpg
│   │   └── img2.jpg
│   └── person2/
│       ├── img1.jpg
│       └── img2.jpg
└── val/
    ├── person1/
    └── person2/
```

When cropping the raw images with MTCNN, a few parameters have proven useful in practice:

- `image_size=160`: size of the output face image
- `margin=20`: keep 20 extra pixels of background around the face
- `keep_all=False`: keep only the face with the highest detection probability

```python
import os

import torch
from PIL import Image
from facenet_pytorch import MTCNN

mtcnn = MTCNN(
    image_size=160,
    margin=20,
    min_face_size=40,
    thresholds=[0.6, 0.7, 0.7],
    device="cuda" if torch.cuda.is_available() else "cpu",
)

def process_images(input_dir, output_dir):
    for person in os.listdir(input_dir):
        person_dir = os.path.join(input_dir, person)
        save_dir = os.path.join(output_dir, person)
        os.makedirs(save_dir, exist_ok=True)
        for img_name in os.listdir(person_dir):
            img_path = os.path.join(person_dir, img_name)
            img = Image.open(img_path)
            # Detect, crop, and save the aligned face in one call
            img_cropped = mtcnn(img, save_path=os.path.join(save_dir, img_name))
```

For data augmentation, I usually combine these transforms:

```python
from torchvision import transforms

train_transform = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.3, contrast=0.3),
    transforms.ToTensor(),
    transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]),
])
```
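A minimal sketch of the frame-sampling step described above, assuming OpenCV (`cv2`) is installed; `frame_indices` and `extract_frames` are illustrative helper names, not part of Facenet-PyTorch:

```python
import os


def frame_indices(total_frames, every_n=10):
    """Indices of the frames kept when sampling every `every_n`-th frame."""
    return list(range(0, total_frames, every_n))


def extract_frames(video_path, output_dir, every_n=10):
    """Save every `every_n`-th frame of a video as a JPEG; returns the count saved."""
    import cv2  # imported lazily so frame_indices works without OpenCV installed

    os.makedirs(output_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    idx = saved = 0
    while True:
        ret, frame = cap.read()
        if not ret:
            break
        if idx % every_n == 0:
            cv2.imwrite(os.path.join(output_dir, f"frame_{saved:04d}.jpg"), frame)
            saved += 1
        idx += 1
    cap.release()
    return saved
```

A 10-second clip at 30 fps yields roughly 30 images per person, which is usually enough for a first experiment.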
## 3. Model Training and Tuning

The first time I trained the model I used the default hyperparameters and got only about 70% accuracy. It turned out that the learning rate and batch size matter a great deal; after many experiments I settled on this combination:

```python
optimizer = optim.Adam(model.parameters(), lr=0.0005, weight_decay=1e-4)
scheduler = MultiStepLR(optimizer, milestones=[10, 20], gamma=0.1)
```

Several techniques proved useful during training.

Monitor the run with TensorBoard:

```python
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter()
for epoch in range(epochs):
    writer.add_scalar("Loss/train", train_loss, epoch)
    writer.add_scalar("Accuracy/val", val_acc, epoch)
```

Early stopping to prevent overfitting:

```python
best_acc = 0
patience = 3
for epoch in range(30):
    train(...)
    val_acc = validate(...)
    if val_acc > best_acc:
        best_acc = val_acc
        torch.save(model.state_dict(), "best_model.pth")
        patience = 3  # reset the counter on every improvement
    else:
        patience -= 1
        if patience == 0:
            break
```

Mixed-precision training for speed:

```python
scaler = torch.cuda.amp.GradScaler()

with torch.cuda.amp.autocast():
    outputs = model(inputs)
    loss = criterion(outputs, labels)
scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()
```

A complete model definition for training:

```python
import torch.nn as nn
from facenet_pytorch import InceptionResnetV1

class FaceNet(nn.Module):
    def __init__(self, num_classes):
        super().__init__()
        self.backbone = InceptionResnetV1(pretrained="vggface2")
        self.classifier = nn.Linear(512, num_classes)

    def forward(self, x):
        features = self.backbone(x)
        return self.classifier(features)

model = FaceNet(num_classes=10).to(device)
criterion = nn.CrossEntropyLoss()
```
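Putting the model, loss, and optimizer together, one epoch of training can be sketched as follows; `run_training_epoch` is a hypothetical helper (call `scheduler.step()` once per epoch outside it):

```python
import torch


def run_training_epoch(model, loader, criterion, optimizer, device="cpu"):
    """Train for one epoch and return the average loss per sample."""
    model.train()
    total_loss = 0.0
    for inputs, labels in loader:
        inputs, labels = inputs.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(inputs), labels)
        loss.backward()
        optimizer.step()
        # weight by batch size so the average is exact for a ragged last batch
        total_loss += loss.item() * inputs.size(0)
    return total_loss / len(loader.dataset)
```

The early-stopping and TensorBoard snippets above slot naturally around this function: log the returned loss each epoch, then validate and update the patience counter.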
## 4. Model Deployment and Optimization

When deploying on a Raspberry Pi I found the original model too large, so I compressed it with dynamic quantization:

```python
# Dynamic quantization of the linear layers
quantized_model = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)
torch.jit.save(torch.jit.script(quantized_model), "quantized.pt")
```

Several tricks help with real-time detection.

Read the video stream on a dedicated capture thread:

```python
import queue
from threading import Thread

frame_queue = queue.Queue(maxsize=10)

def capture_thread(camera):
    while True:
        ret, frame = camera.read()
        frame_queue.put(frame)

Thread(target=capture_thread, args=(camera,)).start()
```

Track faces between detections to cut computation:

```python
tracker = None
for frame in video_stream:
    if tracker is None:
        faces = detect_faces(frame)
        if faces:
            tracker = create_tracker(faces[0])
    else:
        success, box = tracker.update(frame)
```

Export to ONNX for faster inference:

```python
dummy_input = torch.randn(1, 3, 160, 160).to(device)
torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",
    input_names=["input"],
    output_names=["output"],
    dynamic_axes={"input": {0: "batch"}, "output": {0: "batch"}},
)
```

A complete inference example:

```python
def predict(image_path):
    img = Image.open(image_path).convert("RGB")
    img_tensor = transform(img).unsqueeze(0).to(device)
    with torch.no_grad():
        logits = model(img_tensor)
    _, pred = torch.max(logits, 1)
    return class_names[pred.item()]

# Test a single image
result = predict("test.jpg")
print(f"Prediction: {result}")
```

## 5. Troubleshooting Common Problems

I hit plenty of pitfalls during development; here are fixes for the typical ones.

**CUDA out of memory.** Reduce the batch size (8 or 16), or use gradient accumulation:

```python
optimizer.zero_grad()
for i, (inputs, labels) in enumerate(train_loader):
    outputs = model(inputs)
    loss = criterion(outputs, labels)
    loss.backward()
    if (i + 1) % 4 == 0:  # step once every 4 mini-batches
        optimizer.step()
        optimizer.zero_grad()
```

**Inaccurate face detection.** Lower the MTCNN thresholds, and use tracking on video to reduce jitter:

```python
mtcnn = MTCNN(thresholds=[0.4, 0.5, 0.5])  # lowered thresholds
```

**Class imbalance.** Oversample rare classes with a weighted sampler:

```python
from torch.utils.data import WeightedRandomSampler

class_counts = [...]  # number of samples per class
weights = 1.0 / torch.tensor(class_counts, dtype=torch.float)
samples_weights = weights[labels]
sampler = WeightedRandomSampler(samples_weights, len(samples_weights))
```

**Model fails to converge.** Check that the data preprocessing matches the pretrained model, try a smaller learning rate (e.g. 0.0001), and add gradient clipping:

```python
torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
```

**Version incompatibilities at deployment time.**

```bash
# Create a compatibility environment
conda create -n deploy python=3.8
pip install torch==1.8.0+cu111 -f https://download.pytorch.org/whl/torch_stable.html
pip install facenet-pytorch==2.5.2
```
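The `predict` function above does closed-set classification over known identities. For open-set verification (deciding whether two faces belong to the same person) the usual approach is to compare the 512-dimensional embeddings directly. A minimal sketch; `is_same_person` is a hypothetical helper and the 0.7 threshold is an assumption that should be tuned on a validation set:

```python
import torch
import torch.nn.functional as F


def is_same_person(emb1, emb2, threshold=0.7):
    """Compare two face embeddings by cosine similarity.

    emb1, emb2: 1-D embedding tensors (e.g. 512-d InceptionResnetV1 outputs).
    Returns (match, similarity).
    """
    sim = F.cosine_similarity(emb1.unsqueeze(0), emb2.unsqueeze(0)).item()
    return sim >= threshold, sim
```

Unlike the classifier head, this needs no retraining when a new person is enrolled: store one reference embedding per identity and compare against it at recognition time.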
## 6. Advanced Optimization Techniques

Once the baseline model works, I usually apply these optimizations to squeeze out more performance.

Feature fusion with an attention branch:

```python
class EnhancedFaceNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = InceptionResnetV1(pretrained=None)
        self.attention = nn.Sequential(
            nn.Linear(512, 128),
            nn.ReLU(),
            nn.Linear(128, 512),
            nn.Sigmoid(),
        )

    def forward(self, x):
        features = self.backbone(x)
        attention = self.attention(features)
        return features * attention
```

Hard example mining:

```python
def hard_example_mining(features, labels):
    distance_matrix = pairwise_distance(features)  # all-pairs embedding distances
    positive_mask = labels.unsqueeze(0) == labels.unsqueeze(1)
    hardest_positive = (distance_matrix * positive_mask.float()).max(1)[0]
    # mask out same-class pairs with a large constant before taking the minimum
    hardest_negative = (distance_matrix + 1e6 * positive_mask.float()).min(1)[0]
    return hardest_positive, hardest_negative
```

Knowledge distillation:

```python
teacher_model = InceptionResnetV1(pretrained="vggface2").eval()
student_model = SmallFaceNet()

for inputs, labels in train_loader:
    with torch.no_grad():
        teacher_logits = teacher_model(inputs)
    student_logits = student_model(inputs)
    # blend the hard-label loss with imitation of the teacher
    loss = 0.3 * criterion(student_logits, labels) + 0.7 * mse_loss(student_logits, teacher_logits)
```

Multi-task learning:

```python
class MultiTaskModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = InceptionResnetV1(pretrained="vggface2")
        self.classifier = nn.Linear(512, num_classes)
        self.attribute = nn.Linear(512, 5)  # attributes such as gender and age

    def forward(self, x):
        features = self.backbone(x)
        return self.classifier(features), self.attribute(features)
```

Model pruning:

```python
from torch.nn.utils import prune

parameters_to_prune = [
    (module, "weight")
    for module in filter(lambda m: isinstance(m, nn.Conv2d), model.modules())
]
prune.global_unstructured(
    parameters_to_prune,
    pruning_method=prune.L1Unstructured,
    amount=0.2,
)
```
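The mined hardest positives and negatives are typically plugged into a batch-hard triplet loss. A self-contained sketch of that combination; `batch_hard_triplet_loss` is an illustrative name and the `margin=0.2` default is an assumption to tune:

```python
import torch


def pairwise_distance(features):
    """Euclidean distance matrix between all pairs of embeddings."""
    return torch.cdist(features, features, p=2)


def batch_hard_triplet_loss(features, labels, margin=0.2):
    """Batch-hard triplet loss: pull the hardest positive in, push the hardest negative out."""
    dist = pairwise_distance(features)
    positive_mask = labels.unsqueeze(0) == labels.unsqueeze(1)
    hardest_positive = (dist * positive_mask.float()).max(dim=1).values
    # mask same-class pairs (including the diagonal) before taking the minimum
    hardest_negative = (dist + 1e6 * positive_mask.float()).min(dim=1).values
    return torch.clamp(hardest_positive - hardest_negative + margin, min=0).mean()
```

When every class in the batch is already separated by more than the margin, the loss is zero, so sampling batches with several images per identity is important for this loss to produce gradients.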
## 7. Real-World Case Studies

In an attendance system we built for a company last year, complex lighting conditions were the main challenge. The final solution worked at three levels.

At the data level, add random illumination augmentation:

```python
transforms.ColorJitter(
    brightness=0.5,
    contrast=0.3,
    saturation=0.2,
    hue=0.1,
)
```

and preprocess with gamma correction:

```python
import cv2
import numpy as np

def adjust_gamma(image, gamma=1.0):
    inv_gamma = 1.0 / gamma
    table = np.array(
        [((i / 255.0) ** inv_gamma) * 255 for i in np.arange(0, 256)]
    ).astype("uint8")
    return cv2.LUT(image, table)
```

At the model level, add an illumination-robustness loss:

```python
class IlluminationLoss(nn.Module):
    def forward(self, features1, features2):
        return 1 - torch.cosine_similarity(features1, features2, dim=1).mean()
```

At the system level, assess image quality dynamically:

```python
def image_quality_score(img):
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    fm = cv2.Laplacian(gray, cv2.CV_64F).var()  # variance of the Laplacian (sharpness)
    return fm > 50  # tune the threshold for your setup
```

Another interesting application was automatic photo-album grouping. Our pipeline:

1. Detect all faces in the album with MTCNN.
2. Extract a 512-dimensional feature vector for each face.
3. Group the faces automatically with a clustering algorithm (we used DBSCAN).
4. Generate labels after manual review.

The core clustering code:

```python
from sklearn.cluster import DBSCAN

features = [...]  # all face features
clustering = DBSCAN(eps=0.5, min_samples=3).fit(features)

for label in set(clustering.labels_):
    if label == -1:
        continue  # noise points
    print(f"Cluster {label} contains {sum(clustering.labels_ == label)} faces")
```

## 8. Continual Learning and Model Updates

After the system had been in production for a while, I noticed model performance gradually degrading, so I designed this incremental-learning scheme.

A feedback endpoint for collecting new data:

```python
@app.route("/feedback", methods=["POST"])
def feedback():
    img = request.files["image"]
    label = request.form["label"]
    save_to_training_set(img, label)
    return "Feedback received"
```

The incremental training script:

```python
def incremental_train(new_data_dir):
    new_dataset = datasets.ImageFolder(new_data_dir)
    combined_dataset = ConcatDataset([original_dataset, new_dataset])
    # Freeze the backbone; fine-tune only the classifier head
    for param in model.backbone.parameters():
        param.requires_grad = False
    train_loader = DataLoader(combined_dataset, batch_size=32)
    optimizer = optim.SGD(model.classifier.parameters(), lr=0.001)
    for epoch in range(5):
        train_one_epoch(model, train_loader, optimizer)
```

Model version management:

```python
# Save model versions with a timestamp
import datetime

timestamp = datetime.datetime.now().strftime("%Y%m%d_%H%M")
torch.save(model.state_dict(), f"model_{timestamp}.pth")

# Rollback mechanism
if new_model_performance < threshold:
    load_previous_model()
```

A performance monitoring dashboard:

```python
# Record key metrics with Prometheus
from prometheus_client import Summary, Gauge

REQUEST_TIME = Summary("request_processing_seconds", "Time spent processing request")
ACCURACY = Gauge("model_accuracy", "Current model accuracy")

@REQUEST_TIME.time()
def process_request(input):
    result = model(input)
    ACCURACY.set(calculate_accuracy())
    return result
```

While working on an edge-device deployment recently, I found that TensorRT significantly speeds up inference. Our optimization flow:

Convert the model:

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.INFO)
builder = trt.Builder(logger)
network = builder.create_network()
parser = trt.OnnxParser(network, logger)
with open("model.onnx", "rb") as f:
    parser.parse(f.read())
config = builder.create_builder_config()
config.max_workspace_size = 1 << 30  # 1 GB
engine = builder.build_engine(network, config)
```

Deploy and load:

```python
# Serialize the engine to disk so it loads quickly at startup
with open("model.engine", "wb") as f:
    f.write(engine.serialize())

# Load at runtime
runtime = trt.Runtime(logger)
with open("model.engine", "rb") as f:
    engine = runtime.deserialize_cuda_engine(f.read())
```

Performance comparison:

- Original PyTorch model: 45 ms/frame
- ONNX Runtime: 28 ms/frame
- After TensorRT optimization: 12 ms/frame
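The per-frame latencies above were measured on our own hardware. A simple stopwatch helper (hypothetical, stdlib only) makes such comparisons easy to reproduce on yours:

```python
import time


def benchmark(fn, n_warmup=5, n_runs=50):
    """Average wall-clock latency of fn() in milliseconds.

    Warm-up runs let caches, JIT compilation, and lazy CUDA init settle
    before timing starts.
    """
    for _ in range(n_warmup):
        fn()
    start = time.perf_counter()
    for _ in range(n_runs):
        fn()
    return (time.perf_counter() - start) * 1000 / n_runs
```

For GPU models, remember to call `torch.cuda.synchronize()` inside the timed function, since CUDA kernels launch asynchronously and the naive timer would otherwise undercount.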
