
Calling a Trained OpenPose Model with OpenCV 3.4.2

Introduction

Besides the common object-detection and classification models, OpenCV's DNN module also supports the OpenPose model, following the same pattern of training offline and then calling the trained model online. There is not much more to introduce here; the source code and data come first.

Experimental Data

Model file: pose_iter_440000.caffemodel
.prototxt file: https://github.com/opencv/opencv_extra — after downloading the repository, look in the dnn folder (testdata/dnn); it contains many .prototxt files that ship with OpenCV, including the openpose_pose_coco.prototxt used here.
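Before running the full demo, it is worth loading the two files once to confirm they pair up. The sketch below is only a sanity check; the paths are the same ones used in the script further down, so adjust them to wherever you saved the downloads. For the COCO model, the first 19 output channels are the keypoint heatmaps; the remaining channels are part-affinity fields that this demo does not use.

import cv2 as cv
import numpy as np

# Same paths as in the main script below; change them to your download location.
protoc = "/home/oliver/下载/opencv_extra-master/testdata/dnn/openpose_pose_coco.prototxt"
model = "/home/oliver/下载/pose_iter_440000.caffemodel"

net = cv.dnn.readNetFromCaffe(protoc, model)

# One forward pass on a blank 368x368 image just to confirm that the weights
# and the .prototxt actually match.
dummy = np.zeros((368, 368, 3), dtype=np.uint8)
net.setInput(cv.dnn.blobFromImage(dummy, 1.0 / 255, (368, 368),
                                  (0, 0, 0), swapRB=False, crop=False))
out = net.forward()
print(out.shape)  # e.g. (1, C, 46, 46); the first 19 channels are COCO keypoint heatmaps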

Source Code

import cv2 as cv
import numpy as np

# Must match the .prototxt/.caffemodel loaded below; the files used here are
# the COCO versions, so select the COCO keypoint layout.
dataset = 'COCO'

if dataset == 'COCO':
    BODY_PARTS = { "Nose": 0, "Neck": 1, "RShoulder": 2, "RElbow": 3, "RWrist": 4,
                   "LShoulder": 5, "LElbow": 6, "LWrist": 7, "RHip": 8, "RKnee": 9,
                   "RAnkle": 10, "LHip": 11, "LKnee": 12, "LAnkle": 13, "REye": 14,
                   "LEye": 15, "REar": 16, "LEar": 17, "Background": 18 }

    POSE_PAIRS = [ ["Neck", "RShoulder"], ["Neck", "LShoulder"], ["RShoulder", "RElbow"],
                   ["RElbow", "RWrist"], ["LShoulder", "LElbow"], ["LElbow", "LWrist"],
                   ["Neck", "RHip"], ["RHip", "RKnee"], ["RKnee", "RAnkle"], ["Neck", "LHip"],
                   ["LHip", "LKnee"], ["LKnee", "LAnkle"], ["Neck", "Nose"], ["Nose", "REye"],
                   ["REye", "REar"], ["Nose", "LEye"], ["LEye", "LEar"] ]
else:
    assert(dataset == 'MPI')
    BODY_PARTS = { "Head": 0, "Neck": 1, "RShoulder": 2, "RElbow": 3, "RWrist": 4,
                   "LShoulder": 5, "LElbow": 6, "LWrist": 7, "RHip": 8, "RKnee": 9,
                   "RAnkle": 10, "LHip": 11, "LKnee": 12, "LAnkle": 13, "Chest": 14,
                   "Background": 15 }

    POSE_PAIRS = [ ["Head", "Neck"], ["Neck", "RShoulder"], ["RShoulder", "RElbow"],
                   ["RElbow", "RWrist"], ["Neck", "LShoulder"], ["LShoulder", "LElbow"],
                   ["LElbow", "LWrist"], ["Neck", "Chest"], ["Chest", "RHip"], ["RHip", "RKnee"],
                   ["RKnee", "RAnkle"], ["Chest", "LHip"], ["LHip", "LKnee"], ["LKnee", "LAnkle"] ]

# Network input size and keypoint confidence threshold.
inWidth = 368
inHeight = 368
thr = 0.1

protoc = "/home/oliver/下载/opencv_extra-master/testdata/dnn/openpose_pose_coco.prototxt"
model = "/home/oliver/下载/pose_iter_440000.caffemodel"
net = cv.dnn.readNetFromCaffe(protoc, model)

cap = cv.VideoCapture(0)
height = cap.get(cv.CAP_PROP_FRAME_HEIGHT)
width = cap.get(cv.CAP_PROP_FRAME_WIDTH)
# Optional output writer (only used if the write() call below is uncommented).
video_writer = cv.VideoWriter("pose_estimation_demo.mp4",
                              cv.VideoWriter_fourcc('D', 'I', 'V', 'X'), 15, (640, 480), True)

# Loop until any key is pressed.
while cv.waitKey(1) < 0:
    hasFrame, frame = cap.read()
    if not hasFrame:
        cv.waitKey()
        break

    frameWidth = frame.shape[1]
    frameHeight = frame.shape[0]

    inp = cv.dnn.blobFromImage(frame, 1.0 / 255, (inWidth, inHeight),
                               (0, 0, 0), swapRB=False, crop=False)
    net.setInput(inp)
    out = net.forward()
    print(len(BODY_PARTS), out.shape[1])
    # assert(len(BODY_PARTS) == out.shape[1])

    points = []
    for i in range(len(BODY_PARTS)):
        # Slice the heatmap of the corresponding body part.
        heatMap = out[0, i, :, :]

        # Originally, we would find all local maxima. To keep the sample simple
        # we just take the global one, so only a single pose can be detected
        # at a time this way.
        _, conf, _, point = cv.minMaxLoc(heatMap)
        x = (frameWidth * point[0]) / out.shape[3]
        y = (frameHeight * point[1]) / out.shape[2]

        # Keep the point only if its confidence is above the threshold.
        points.append((x, y) if conf > thr else None)

    for pair in POSE_PAIRS:
        partFrom = pair[0]
        partTo = pair[1]
        assert(partFrom in BODY_PARTS)
        assert(partTo in BODY_PARTS)

        idFrom = BODY_PARTS[partFrom]
        idTo = BODY_PARTS[partTo]
        if points[idFrom] and points[idTo]:
            x1, y1 = points[idFrom]
            x2, y2 = points[idTo]
            cv.line(frame, (np.int32(x1), np.int32(y1)), (np.int32(x2), np.int32(y2)), (0, 255, 0), 3)
            cv.ellipse(frame, (np.int32(x1), np.int32(y1)), (3, 3), 0, 0, 360, (0, 0, 255), cv.FILLED)
            cv.ellipse(frame, (np.int32(x2), np.int32(y2)), (3, 3), 0, 0, 360, (0, 0, 255), cv.FILLED)

    # Overlay the inference time of the last forward pass.
    t, _ = net.getPerfProfile()
    freq = cv.getTickFrequency() / 1000
    cv.putText(frame, '%.2fms' % (t / freq), (10, 20), cv.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 0))

    # video_writer.write(frame)
    # cv.imwrite("pose.png", frame)
    cv.imshow('OpenPose using OpenCV', frame)
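The loop exits as soon as any key is pressed, since cv.waitKey(1) then returns a non-negative key code. The script also keeps an MPI branch; to actually use it, the .prototxt and the weights have to be swapped for the MPI versions as well. The file names in the sketch below are the ones I believe ship with opencv_extra and the CMU OpenPose release, so treat them as assumptions and check them against your local downloads.

import cv2 as cv

# Hypothetical MPI configuration -- file names assumed from opencv_extra and the
# CMU OpenPose release; verify them before use. These lines replace the
# corresponding dataset/protoc/model lines in the script above.
dataset = 'MPI'
protoc = "/home/oliver/下载/opencv_extra-master/testdata/dnn/openpose_pose_mpi_faster_4_stages.prototxt"
model = "/home/oliver/下载/pose_iter_160000.caffemodel"   # MPI weights, not pose_iter_440000
net = cv.dnn.readNetFromCaffe(protoc, model)
# BODY_PARTS / POSE_PAIRS already have an MPI branch, and the rest of the loop
# only reads the first len(BODY_PARTS) heatmap channels, so nothing else changes.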

Experimental Results

[Screenshots: the detected pose skeleton drawn on live webcam frames]

