I'm having a lot of packet loss using UDP in Python. I know I should use TCP if I don't want packet loss, but I don't have (full) control over the sender. It's a camera that sends 15 images per second using UDP multicast.

When the camera sends a new image, it first sends one packet with first byte = 0x01. This contains information about the image. Then 612 packets are sent with first byte = 0x02. These contain the bytes from the image (508 bytes/packet). Since 15 images are sent per second, ~9000 packets are sent per second, although within each image they arrive in bursts at a faster rate, ~22 packets/ms.

I can receive all packets perfectly using tcpdump or Wireshark. But using the code below, packets are missed. Surely my Windows 7 PC should be able to handle this? I'm also using it on a Raspberry Pi 3, and there more or less the same number of packets is missed. Therefore I think it's a problem with the code.

I've written a class PacketStream which writes the bytes from the packets to a file. It uses multiprocessing to allow the producer and consumer functions to work in parallel: the producer function catches the packets, the consumer function processes them and writes the images to disk. I've tried lots of different things, like threading instead of multiprocessing and Pipe instead of Queue, and I also tried increasing the socket buffer with sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 3000000). The relevant fragments of the code:

    from multiprocessing import Process, Queue
    import time

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    mreq = struct.pack('4sl', socket.inet_aton(MCAST_GRP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

    packet_stream = PacketStream('D:/bmpdump/')
    t1.daemon = True  # so they stop when the main prog stops

    self.img_id = -1  # -1 = waiting for start of new image
    print 'Image lost: %s bytes missing.' % (307200 - n_bytes)

Answer:

A few suggestions to improve your code, but first a question: have you measured at all what might be slowing things down? For instance, have you looked at the CPU usage of your system? If you're hitting 100%, that may very well be the reason for the packet loss. If it's mostly idle, there is something else going on and the problem is not related to the performance of the code.

Now, some suggestions to improve the code:

- Use socket.recvfrom instead of sock.recv when dealing with UDP sockets.
- Don't use multiprocessing with processes; the serialization that has to occur to send data from one process to the other may very well be a performance bottleneck if we're talking ~9000 calls/sec. Try to use threads instead (the threading + queue modules). But as you're not providing any observed numbers, it is hard to say really.
- Don't use string concatenation to build up the receiver's buffer as it gets packets. That creates large numbers of new temporary string objects and copies data around all the time. Instead, append each packet to a list and, when you have received all of the data, "".join(packets) them together once at the end.

Reply from the asker:

1) I tried threading + queue already, and also the "".join(); it didn't seem to make much difference. I'm quite sure now the problem is that the producer thread doesn't get enough priority. I can't find how to increase this using Python? Is this even possible?

2) I managed to lose only about 10% using the code below. The key is to consume the data when there's a pause in the packet stream, i.e. when the last data packet of an image has arrived:

    real_data_buffer = [... for data in data_buffer]

I also increased the socket buffer further with sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 30000000). The processor is at ~25% (on the Raspberry Pi).
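The answer's list-plus-join advice can be sketched as follows. The packet layout (one 0x02 marker byte followed by 507 payload bytes, making 508 bytes total) is an illustrative assumption based on the figures in the question:

```python
# Accumulate payloads in a list and join once, instead of repeated
# string concatenation (which copies the whole buffer on every packet).
packets = []
for i in range(612):                # 612 data packets per image
    payload = b"\x02" + bytes(507)  # dummy 508-byte data packet
    packets.append(payload[1:])     # keep the 507 payload bytes
image = b"".join(packets)           # single copy at the end
print(len(image))                   # 310284
```

Appending is amortized O(1) and the final join copies each byte once, whereas `buf += payload` re-copies the whole buffer on every packet, which is quadratic over a 612-packet burst.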
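A minimal sketch of the threading + queue split the answer recommends. The socket is replaced by a dummy packet source, and the packet format and counts are stand-ins; the real PacketStream class from the question is not reproduced:

```python
import queue
import threading

q = queue.Queue()

def producer(n_packets):
    # In the real receiver this loop would block on sock.recvfrom(...)
    # and do nothing else, so it can keep up with the packet bursts.
    for i in range(n_packets):
        q.put(b"\x02" + i.to_bytes(2, "big"))  # dummy data packet
    q.put(None)  # sentinel: end of stream

def consumer(out):
    # All heavy per-packet work (parsing, assembling and writing
    # images) happens here, off the receive path.
    while True:
        pkt = q.get()
        if pkt is None:
            break
        out.append(pkt[1:])  # strip the 0x02 marker byte

received = []
t1 = threading.Thread(target=producer, args=(100,))
t2 = threading.Thread(target=consumer, args=(received,))
t1.daemon = True  # as in the question: stop when the main prog stops
t1.start()
t2.start()
t1.join()
t2.join()
print(len(received))  # 100
```

Because both threads share one address space, handing a packet over is just putting a reference on the queue; no pickling happens, unlike a multiprocessing.Queue.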
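Putting the answer's recvfrom advice together with the question's SO_RCVBUF tuning, a receive path could look like the sketch below. It loops packets back over localhost instead of joining the camera's multicast group (MCAST_GRP is not known here), and the 4 MiB buffer size is an arbitrary illustrative value:

```python
import socket

# Plain UDP socket; the real receiver would also join the camera's
# multicast group with IP_ADD_MEMBERSHIP, as in the question.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Ask for a large kernel receive buffer so bursts (~22 packets/ms) can
# queue up while Python is busy; the kernel may round or cap the value.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4 * 1024 * 1024)
sock.bind(("127.0.0.1", 0))  # ephemeral port, loopback demo only
addr = sock.getsockname()

# Loop back two dummy "camera" packets so recvfrom has data to return.
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"\x01" + b"info", addr)     # image-info packet (0x01)
tx.sendto(b"\x02" + bytes(507), addr)  # image-data packet (0x02)

info, _ = sock.recvfrom(2048)  # recvfrom, per the answer's advice
data, _ = sock.recvfrom(2048)
print(info[0], len(data))      # 1 508
tx.close()
sock.close()
```

recvfrom also returns the sender's address, which lets the receiver discard stray datagrams from other hosts on the multicast group.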
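The follow-up's "consume during the pause in the packet stream" idea can be expressed with a queue timeout: the consumer keeps pulling packets while they arrive back-to-back and only starts the heavy processing once the stream goes quiet. The 5 ms threshold and the helper name drain_on_pause are assumptions, not from the original code:

```python
import queue

def drain_on_pause(q, pause=0.005):
    # Pull packets until none arrives for `pause` seconds, then hand
    # the whole batch to the heavy processing step in one go.
    batch = []
    while True:
        try:
            batch.append(q.get(timeout=pause))
        except queue.Empty:
            return batch  # stream paused: safe to do the heavy work

q = queue.Queue()
for i in range(5):
    q.put(i)  # a pre-filled queue stands in for one packet burst
result = drain_on_pause(q)
print(result)  # [0, 1, 2, 3, 4]
```

This keeps the producer thread free during the burst itself, which is when dropped datagrams are most likely; the per-image processing cost is paid in the gap between bursts.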