
Larger-than-average message sizes cause incomplete transmission silently #38

Open
albassort opened this issue Feb 22, 2025 · 7 comments

Comments

@albassort

I was trying to send packets of around 12932 bytes, and there were substantial issues. The biggest one is that messages would stop sending midway, making them unparsable on the other end. Reducing the packet size to 1000 fixed this.

@oyyd
Member

oyyd commented Feb 25, 2025

Are you using Dgram sockets? IIRC, depending on your system configuration, packets that are too large will be dropped by the OS silently. So one option is to change your system configuration if you need to transfer large packets. I will try to investigate this later.

@albassort
Author

No, this was a Seqpacket socket. I'm not sure which kernel variable would make the difference between the hard error of "message too big" and the soft error of silently dropping it.

@oyyd
Member

oyyd commented Mar 10, 2025

Do you have an example to reproduce the issue? I tried to reproduce this on my Debian server, but everything looks good to me:

const { SeqpacketServer, SeqpacketSocket } = require('./js/index');
const os = require('os');
const path = require('path');
const fs = require('fs');

const bindPath = path.resolve(os.tmpdir(), './my_seqpacket.sock');

try {
  fs.unlinkSync(bindPath);
} catch (e) {}

const server = new SeqpacketServer();
server.listen(bindPath);
server.on('connection', (socket) => {
  socket.on('data', (buf) => {
    console.log('received', buf.length);
  });
});

const client = new SeqpacketSocket();
client.connect(bindPath, () => {
  setInterval(() => {
    const data = Buffer.alloc(14000);
    client.write(data);
  }, 100)
});

Output:

$ node test.js 
received 14000
received 14000
received 14000
received 14000
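
If it helps, a small variation of the snippet above can make any silent truncation visible by comparing what arrives against what was written (just a sketch reusing the same SeqpacketServer/SeqpacketSocket calls as the example; MESSAGE_SIZE is only an illustrative constant):

const { SeqpacketServer, SeqpacketSocket } = require('./js/index');
const os = require('os');
const path = require('path');
const fs = require('fs');

// Illustrative constant: every test message is written with this many bytes.
const MESSAGE_SIZE = 14000;

const bindPath = path.resolve(os.tmpdir(), './my_seqpacket.sock');

try {
  fs.unlinkSync(bindPath);
} catch (e) {}

const server = new SeqpacketServer();
server.listen(bindPath);
server.on('connection', (socket) => {
  socket.on('data', (buf) => {
    // Any message that arrives shorter than it was written is a truncation.
    if (buf.length !== MESSAGE_SIZE) {
      console.error('truncated:', buf.length, 'of', MESSAGE_SIZE, 'bytes');
    } else {
      console.log('received', buf.length);
    }
  });
});

const client = new SeqpacketSocket();
client.connect(bindPath, () => {
  setInterval(() => {
    client.write(Buffer.alloc(MESSAGE_SIZE));
  }, 100);
});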

@albassort
Author

albassort commented Mar 10, 2025

I deeply apologize for the lack of a minimal example and its heavy dependencies; this is the code I have. The Nim side is minimal, and you could probably port it to Rust easily, and you can replace Puppeteer with a big file or random garbage if needed!

const puppeteer = require('puppeteer');
const { SeqpacketServer, SeqpacketSocket } = require('node-unix-socket');
const fs = require('fs');
const path = require('path')
// Split `buffer` into chunks, each prefixed with a 9-byte header:
// 1 byte "last chunk" flag (1 = final chunk) + 8-byte big-endian payload length.
function splitBuffer(buffer, maxLength) {
  let result = []
  if (maxLength > buffer.length - 16) {
    let numbers = Buffer.alloc(9)
    numbers.writeUInt8(1, 0)
    numbers.writeBigInt64BE(BigInt(buffer.length), 1)
    let resultBuffer = Buffer.concat([numbers, buffer])
    result.push(resultBuffer)
    return result;
  }
  let totalSent = 0
  let length = buffer.length
  while (true) {
    let totalTransfer = maxLength - 9
    let toTransfer = 0
    if (totalSent + totalTransfer > length) {
      toTransfer = length - totalSent
    }
    else {
      toTransfer = totalTransfer
    }
    let numbers = Buffer.alloc(9)

    numbers.writeUInt8(
      toTransfer == totalTransfer
        ? 0 : 1, 0);

    numbers.writeBigInt64BE(BigInt(toTransfer), 1)


    let newBuffer = buffer.slice(totalSent, totalSent + toTransfer)

    result.push(Buffer.concat([numbers, newBuffer]))

    totalSent += toTransfer
    if (toTransfer != totalTransfer) {
      break
    }

  }
  return result;
}

let browser;
const bindPath = path.resolve('./temp.sock');
try {
  fs.unlinkSync(bindPath);
} catch (e) { }


const server = new SeqpacketServer();
server.listen(bindPath);

async function main(socket) {
  browser = await puppeteer.launch(
    {
      headless: false,
    }
  );

  const page = await browser.newPage();

  await page.goto("https://youtube.com")

  await page.waitForFunction(
    'window.performance.timing.loadEventEnd - window.performance.timing.navigationStart >= 1000'
  );
  const html = await page.content();
  await browser.close()

  let jsonObj = JSON.stringify({ htmlData: html })

  let base64 = Buffer.from(Buffer.from(jsonObj));
  let buffers = splitBuffer(base64, 11920)
  buffers.forEach(buffer => {
    socket.write(buffer)
  })
}

server.on('connection', async (socket, bindPath) => {

  socket.on("data", async () => {

    main(socket)
  });

})

import net
import std/endians  
import json
import strutils
proc recvJson(a : Socket, timeout = -1) : string = 
  var data : array[9, byte]
  var littleEndian : array[8, byte]
  var jsonData : string
  while true:
    discard a.recv(addr data[0], 9, timeout = timeout)
    swapEndian64(addr littleEndian[0], addr data[1])
    let ending  = cast[bool](data[0])
    var jsonLength = cast[int64](littleEndian)
    discard a.recv(jsonData, jsonLength, timeout = timeout)
    result.add(jsonData) 
    if ending:
      break

  echo parseJson(result)

var socket = newSocket(AF_UNIX,SOCK_SEQPACKET, IPPROTO_IP)
connectUnix(socket, "temp.sock")
socket.send("!")
echo socket.recvJson()

So, what is the failure condition? It's on the Nim side: it reads into a buffer that is allocated to the packet length taken from the header. That size ends up being random because packet segments are lost, causing an "out of memory" error. If you were to tweak the JS to do this

  let buffers = splitBuffer(base64, 1000)

In that case, it works fine
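
For what it's worth, here is a rough sketch of a receiver-side sanity check for that 9-byte header, written in JS and mirroring the splitBuffer framing above (1-byte last-chunk flag plus an 8-byte big-endian payload length); the MAX_REASONABLE_LENGTH cap and the parseChunk name are made up for illustration, and a check like this would not fix the underlying truncation, only surface it earlier:

// Sketch: parse one framed chunk as produced by splitBuffer() above,
// rejecting implausible lengths instead of allocating blindly.
const MAX_REASONABLE_LENGTH = 10 * 1024 * 1024; // arbitrary illustrative cap

function parseChunk(buf) {
  if (buf.length < 9) {
    throw new Error(`short chunk: got ${buf.length} bytes, header needs 9`);
  }
  const isLast = buf.readUInt8(0) === 1;                // 1 marks the final chunk
  const declaredLength = Number(buf.readBigInt64BE(1)); // payload length from header

  if (declaredLength < 0 || declaredLength > MAX_REASONABLE_LENGTH) {
    // A wild value here usually means the header was read out of alignment,
    // e.g. because an earlier message arrived truncated.
    throw new Error(`implausible payload length: ${declaredLength}`);
  }
  if (buf.length - 9 !== declaredLength) {
    throw new Error(`size mismatch: header says ${declaredLength}, got ${buf.length - 9}`);
  }
  return { isLast, payload: buf.subarray(9, 9 + declaredLength) };
}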

@oyyd
Member

oyyd commented Mar 10, 2025

Thanks for the example. It seems the message size depends on the returned contents, so it could be greater than in the example.

According to these:

Can you check your system config of sysctl_wmem_default? For example, by reading the value of the /proc/sys/net/core/wmem_max file. It looks like it defines the maximum size of messages/packets.
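
In case it's useful, here is a quick way to dump those values from Node (just a sketch; it assumes a Linux /proc layout where wmem_default sits next to wmem_max):

const fs = require('fs');

// Linux-only: these /proc entries mirror the net.core.wmem_* sysctls.
for (const name of ['wmem_max', 'wmem_default']) {
  const value = fs.readFileSync(`/proc/sys/net/core/${name}`, 'utf8').trim();
  console.log(`net.core.${name} = ${value}`);
}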

For me, if I reduce the value to 13000, my previous example fails with errors like:

Error: Message too long

Yet, it doesn't fail silently. But I guess they are interrelated. Anyway, it seems splitting the buffer like in your code is necessary even if you can increase the limit.

@albassort
Author

/proc/sys/net/core/wmem_max

212992

Now, if the packets are that big, the message is too long, and that is the hard failure. But I'm not sure whether the OS is supposed to drop segments of packets below this size, which is what it is doing, silently.

@oyyd
Member

oyyd commented Mar 11, 2025

The (default) value seems good. Though, I have no clue about the issue now.
