If you need color images, then you need to change the pix_fmt in ffmpeg's command, read (width * height * channels) bytes, and reshape correctly to one more axis.

Another option would be to have an OpenCV VideoWriter encode H264 frames and send them to a shmsink:

```python
h264_shmsink = cv2.VideoWriter(
    "appsrc is-live=true ! queue ! videoconvert ! video/x-raw, format=BGRx ! nvvidconv ! "
    "nvv4l2h264enc insert-sps-pps=1 ! video/x-h264, stream-format=byte-stream ! h264parse ! "
    "shmsink socket-path=/tmp/my_h264_sock",
    cv2.CAP_GSTREAMER, 0, float(fps), (int(width), int(height)))
```

where width and height are the sizes of the pushed frames, and then use shmsrc, doing the timestamping, as the source for the test-launch RTSP server, such as:

```
test-launch "shmsrc socket-path=/tmp/my_h264_sock do-timestamp=1 ! video/x-h264, stream-format=byte-stream, width=640, height=480, framerate=30/1 ! h264parse ! video/x-h264, stream-format=byte-stream ! rtph264pay pt=96 name=pay0"
```

This may have some system overhead, but may work for low bitrates, or may require some optimization for higher bitrates. A few further suggestions:

- Replace nvv4l2h264enc by omxh264enc, as it seems to behave better here. I had briefly experimented with UDP streaming and found that the default profile used by nvv4l2h264enc was higher than that of omxh264enc in my case, and even with the fastest preset it was losing sync through UDP while omxh264enc was keeping it (maybe I missed some options).
- Add rtph264pay config-interval=1 between h264parse and rtspclientsink.
- Also note, if upgrading your OpenCV version/build, that the VideoWriter API in Python may have changed (the second argument is now the backend API, such as cv2.CAP_GSTREAMER or cv2.CAP_ANY), but that doesn't seem to apply here since you already have a working case.

If it works, feel free to try removing any useless parts.
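The color-image note above is mostly bookkeeping: with pix_fmt=rgb24 a frame is width * height * 3 bytes, and the flat buffer gains a channel axis when reshaped. Below is a stdlib-only sketch of that arithmetic; nested lists stand in for numpy's frombuffer/reshape, and the sizes are made up for illustration.

```python
# Sketch: reshaping a flat rgb24 frame buffer into rows x cols x channels.
# Stand-in sizes; a real stream would use the negotiated caps.
width, height, channels = 4, 3, 3        # rgb24: 3 bytes per pixel
frame_size = width * height * channels   # bytes to read per frame

flat = bytes(range(frame_size))          # stands in for process.stdout.read(frame_size)
assert len(flat) == frame_size

# Reshape to (height, width, channels): one more axis than the grayscale case.
frame = [[[flat[(row * width + col) * channels + c] for c in range(channels)]
          for col in range(width)]
         for row in range(height)]

print(len(frame), len(frame[0]), len(frame[0][0]))  # 3 4 3
```

With numpy this whole block collapses to `np.frombuffer(flat, np.uint8).reshape(height, width, channels)`; the point is only that the byte count and the reshape must agree.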
From the original question: I tried the following, based on "Write in Gstreamer pipeline from opencv in python", but I was unable to figure out what the appropriate gst-launch-1.0 arguments should be to create the rtsp server. Can anyone assist with proper arguments to gst-launch-1.0? The ones I tried got stuck in "Pipeline is PREROLLING":

```python
import cv2

out = cv2.VideoWriter('appsrc ! videoconvert ! '
                      'x264enc noise-reduction=10000 speed-preset=ultrafast ...
```

I also tried another solution, based on "Write opencv frames into gstreamer rtsp server pipeline":

```python
import cv2
import gi

gi.require_version('Gst', '1.0')
gi.require_version('GstRtspServer', '1.0')
from gi.repository import Gst, GstRtspServer, GObject

class SensorFactory(GstRtspServer.RTSPMediaFactory):
    def __init__(self, **properties):
        super(SensorFactory, self).__init__(**properties)
        ...
        self.duration = 1 / self.fps * Gst.SECOND  # duration of a frame in nanoseconds
        self.launch_string = 'appsrc name=source is-live=true block=true format=GST_FORMAT_TIME ' \
            ...
```

From a related thread: I'm looking for a way to extract one single image from an rtsp stream. My current solution is based on doing that with OpenCV, but I want to optimize this, so I'm going with a solution I found here: "Handle large number of rtsp cameras without real-time constraint". The drawback here is that ffmpeg opens a whole video stream per camera, so some overhead is a consequence. The relevant parts of that reader look like:

```python
command = '...'.format(rtsp=self.rtsp_url, width=self.width, height=self.height)
logging.debug('Opening ffmpeg process with command "%s"' % command)
self.process = subprocess.Popen(args, stdout=subprocess.PIPE, stderr=FNULL)
...
while self._last_frame_read == self._last_frame_number:
    time.sleep(0.125)  # Put your FPS threshold here
    if time.time() - started > self.MAX_FRAME_WAIT:
        logging.warning('Reloading ffmpeg process.')
...
vec = np.frombuffer(self._last_chunk, dtype=dt)
return np.reshape(vec, (self.height, self.width))
```
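The frame-waiting loop quoted above is a generic poll-with-deadline pattern: spin until the producer publishes a newer frame, and bail out after MAX_FRAME_WAIT so the caller can restart ffmpeg. A minimal stdlib-only sketch; `wait_for_new_frame` and the fake producer are hypothetical names invented here, and only MAX_FRAME_WAIT mirrors the post.

```python
import time

MAX_FRAME_WAIT = 2.0  # seconds; same role as self.MAX_FRAME_WAIT in the post

def wait_for_new_frame(get_frame_number, last_read, poll=0.125, max_wait=MAX_FRAME_WAIT):
    """Poll until the producer publishes a frame newer than last_read.
    Returns the new frame number, or None if the deadline passes
    (the post reloads the ffmpeg process in that case)."""
    started = time.time()
    while get_frame_number() == last_read:
        time.sleep(poll)
        if time.time() - started > max_wait:
            return None  # caller would log and restart ffmpeg here
    return get_frame_number()

# Fake producer that publishes frame 8 after a couple of polls.
state = {"n": 7, "calls": 0}
def fake_frame_number():
    state["calls"] += 1
    if state["calls"] >= 3:
        state["n"] = 8
    return state["n"]

print(wait_for_new_frame(fake_frame_number, last_read=7, poll=0.01))  # 8
```

The fixed 0.125 s sleep in the post caps the polling rate at 8 checks per second; tightening it trades CPU for latency, which is why the comment there says to put your FPS threshold in it.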
My goal is to read frames from an rtsp server, do some opencv manipulation, and write the manipulated frames to a new rtsp server. Now, I want to acquire the rtsp stream via ffmpeg-python and show it via opencv:

```python
probe = ffmpeg.probe(in_filename)
video_info = next(s for s in probe['streams'] if s['codec_type'] == 'video')
...
    .output('pipe:', format='image2pipe', pix_fmt='rgb24', vcodec='rawvideo')
...
    return subprocess.Popen(args, stdout=subprocess.PIPE)

def read_frame(process1, width, height):
    frame_size = width * height * 3
    in_bytes = process1.stdout.read(frame_size)
    assert len(in_bytes) == frame_size
    frame = np.frombuffer(in_bytes, np.uint8).reshape([height, width, 3])
    # img1 = cv2.cvtColor(frame, cv2.COLOR_RGB2BGR)
    return frame

width, height = get_video_size(in_filename)
process1 = start_ffmpeg_process1(in_filename)
process2 = start_ffmpeg_process2(width, height)
in_frame = read_frame(process1, width, height)
```

One reply to this: the first point I don't understand is why you are using the subprocess module; as I see it, this is unnecessary (or handled in the background by the ffmpeg wrapper).
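Since the raw-video pipe carries no framing, read_frame must read exactly frame_size bytes per frame; a short read means the stream ended or was truncated. Below is a stdlib-only sketch of that contract, with io.BytesIO standing in for process1.stdout; the helper and the 2x2 sizes are illustrative, not the poster's code.

```python
import io

def read_frame(stdout, width, height):
    """Read exactly one raw rgb24 frame from a pipe-like object.
    Mirrors read_frame() in the question; bytes stand in for numpy."""
    frame_size = width * height * 3
    in_bytes = stdout.read(frame_size)
    if len(in_bytes) == 0:
        return None  # stream ended cleanly
    assert len(in_bytes) == frame_size, "short read: stream truncated?"
    return in_bytes  # the post reshapes this to (height, width, 3)

# BytesIO stands in for process1.stdout: two 2x2 rgb24 frames back to back.
w, h = 2, 2
pipe = io.BytesIO(bytes(range(w * h * 3)) * 2)
first = read_frame(pipe, w, h)
second = read_frame(pipe, w, h)
print(len(first), len(second), read_frame(pipe, w, h))  # 12 12 None
```

One caveat: a real pipe's read(n) may legally return fewer than n bytes mid-stream, so production code would loop until frame_size bytes are accumulated rather than assert on a single read.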