Even when using zero copy, I thought the DMA would still report the actual packet size, so why would the size need to be changed for incoming packets?
——– Original message ——–
From: Hein Tibosch <heinbali01@users.sourceforge.net>
Date: 3/10/2018 04:03 (GMT-05:00)
To: "[freertos:discussion]" <382005@discussion.freertos.p.re.sourceforge.net>
Subject: [freertos:discussion] Network Driver length
The comments in the BufferAllocation_2.c code that supports pxGetNetworkBufferWithDescriptor() show that for some outgoing packets, it accounts for ICMP packets getting replaced by ARP requests, and therefore the minimum packet size is adjusted.

ARP packets are the smallest packets, at 42 bytes. So changing an ICMP packet into an ARP packet is fine: the packet only gets smaller. Only when an Ethernet buffer grows must it be re-allocated (when using BufferAllocation_2.c).
But for incoming packets I can't see any reason for this, and being new to +TCP I don't have enough experience.
If ipconfigFTP_RX_ZERO_COPY is non-zero, incoming packets are received in Ethernet buffers that have a maximum size of about MTU + 14 bytes (the Ethernet header).

When the Network Interface uses memcpy() instead, it has big DMA buffers, and for the Ethernet buffers it allocates just enough bytes to hold each packet. Still, baMINIMAL_BUFFER_SIZE is always respected as a minimum size.
Does this answer your question?