Replies: 12 comments 28 replies
-
@robertsLando
Which carriers do you want to support?
I see quite some config options to tune the stack; which options do you want to keep? Do you want to support MQTTv5? (I personally think it's a bad move from an architectural perspective, as part of the application layer seems to be incorporated into the transport layer, e.g. properties.) Then some other notes:
I'm happy to contribute code from Opifex if that helps.
Hope this helps! Kind regards,
-
OK, then I guess Opifex will get MQTTv5 support as well, at least client side, and maybe also browser support if I can do that without external dependencies 😄 What do you suggest we do next? Kind regards,
-
Ok, I did some benchmarking on the generation of packets using: https://github.com/mqttjs/mqtt-packet/tree/master/benchmarks/generate.js
The results were quite variable on my codespace, so I jacked up the max to 10.000.000. The results were as follows:

- mqtt-packet: Total time 7213
- Opifex: Total time 12362

That is 1.3M packets per second vs 0.8M packets per second, so mqtt-packet is about 50% faster than Opifex. If you have any ideas then I'm curious to learn! The source is in:
And as generate.js for Opifex I used:

```js
const mqtt = require('@seriousme/opifex/mqttPacket')

const max = 10000000
let i
const buf = Buffer.from('test')
const PUBLISH = 3

// initialize it
mqtt.encode({
  type: PUBLISH,
  topic: 'test',
  payload: buf
})

const start = Date.now()
for (i = 0; i < max; i++) {
  mqtt.encode({
    type: PUBLISH,
    topic: 'test',
    payload: buf
  })
}
const time = Date.now() - start
console.log('Total time', time)
console.log('Total packets', max)
console.log('Packet/s', max / time * 1000)
```

I will have a look at parsing performance as well, as I'm curious how that plays out! Kind regards,
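Part of the per-packet work an encoder like this has to do is producing MQTT's "remaining length" field, which the spec defines as a variable byte integer: 7 bits of value per byte, with the high bit set when more bytes follow. A minimal spec-compliant sketch (my own illustration, not Opifex's actual implementation):

```js
// Encode an MQTT "remaining length" as a variable byte integer:
// 7 bits of payload per byte, high bit set when more bytes follow.
function encodeLength (n) {
  const bytes = []
  do {
    let byte = n % 128
    n = Math.floor(n / 128)
    if (n > 0) byte |= 0x80 // continuation bit
    bytes.push(byte)
  } while (n > 0)
  return bytes
}

console.log(encodeLength(10))  // [ 10 ]
console.log(encodeLength(321)) // [ 193, 2 ]
```

Values up to 127 fit in one byte, which is why small benchmark packets like the one above keep this step cheap.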
-
Interestingly, parsing gives a different picture.

- mqtt-packet: Total packets 10000000
- Opifex: Total packets 10000000

Here mqtt-packet does about 0.7M packets per second where Opifex is doing 2.1M packets per second, about 3 times as much. The code I used for Opifex was:

```js
const mqtt = require('@seriousme/opifex/mqttPacket')

const max = 10000000
let i
const start = Date.now() / 1000
for (i = 0; i < max; i++) {
  mqtt.decode(Buffer.from([
    48, 10, // Header (publish)
    0, 4, // Topic length
    116, 101, 115, 116, // Topic (test)
    116, 101, 115, 116 // Payload (test)
  ]))
}
const time = Date.now() / 1000 - start
console.log('Total packets', max)
console.log('Total time', Math.round(time * 100) / 100)
console.log('Packet/s', max / time)
```

Btw: this is NodeJS 24.1, the previous generate test was NodeJS 22.16:

- mqtt-packet generate.js: Total time 9523
- Opifex generate.js: Total time 12806

And somehow mqtt-packet got a bit slower and Opifex got slightly faster 🤔 Kind regards,
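For anyone following along, those 12 bytes decode as a minimal QoS 0 PUBLISH. A hand-rolled sketch of parsing them (illustrative only, not Opifex's decoder, and it assumes a single-byte remaining length):

```js
// Parse a minimal MQTT PUBLISH packet by hand (QoS 0, 1-byte remaining length).
function parsePublish (buf) {
  const type = buf[0] >> 4            // 3 = PUBLISH (0x30 >> 4)
  const remainingLength = buf[1]      // single-byte variable length here
  const topicLength = buf.readUInt16BE(2)
  const topic = buf.toString('utf8', 4, 4 + topicLength)
  const payload = buf.subarray(4 + topicLength, 2 + remainingLength)
  return { type, topic, payload }
}

const packet = parsePublish(Buffer.from([
  48, 10, 0, 4, 116, 101, 115, 116, 116, 101, 115, 116
]))
console.log(packet.type, packet.topic, packet.payload.toString()) // 3 test test
```

Decoding can be this cheap because it is mostly offset arithmetic over an existing buffer, whereas encoding has to allocate and copy, which may be part of why the two benchmarks point in opposite directions.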
-
Btw: I did some profiling analysis on the generate.js runs.
mqtt-packet [Summary]:
opifex [Summary]:
With Opifex spending most of its C++ time on:
-
Ok, I just released Opifex 1.9.4 which replaces:

```js
return Uint8Array.from([
  (packetType << 4) | flags,
  ...encodeLength(bytes.length),
  ...bytes,
]);
```

by:

```js
const encodedLength = encodeLength(bytes.length);
const bufLength = 1 + encodedLength.length + bytes.length;
const buffer = new Uint8Array(bufLength);
let offset = 0;
buffer[offset++] = (packetType << 4) | flags;
for (let i = 0; i < encodedLength.length; i++) {
  buffer[offset++] = encodedLength[i];
}
for (let i = 0; i < bytes.length; i++) {
  buffer[offset++] = bytes[i];
}
return buffer;
```

Which avoids a number of copy rounds, as there is no spread anymore and Uint8Array.from apparently also does some copying internally. I need to run another test on codespace, but a local test was already more performant. Kind regards,
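The manual byte loops in that rewrite could also be expressed with TypedArray.prototype.set, which copies a whole array into the target at a given offset in one call. A sketch of that variant (my own, not Opifex's released code):

```js
// Build an MQTT fixed header + body into a preallocated Uint8Array
// using TypedArray.prototype.set instead of manual byte loops.
function buildPacket (packetType, flags, encodedLength, bytes) {
  const buffer = new Uint8Array(1 + encodedLength.length + bytes.length)
  buffer[0] = (packetType << 4) | flags
  buffer.set(encodedLength, 1)                 // copy the length bytes
  buffer.set(bytes, 1 + encodedLength.length)  // copy the body
  return buffer
}

const pkt = buildPacket(3, 0, [4], Uint8Array.from([116, 101, 115, 116]))
console.log(pkt) // bytes 48, 4, 116, 101, 115, 116
```

Whether set() beats a plain loop for buffers this small is engine-dependent, so it would need the same benchmark treatment.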
-
Ok, I also noticed that the Opifex encoder performs more checks. If you really want top-notch performance then the most optimal encoder I can think of does the following:
You could even keep a record of what percentage of encodings ends up calling Buffer.concat() and use that to tune X. One step even further would be to allow the user to generate a packet, given a topic and payload size, generate that packet once, and then offer a method which only overwrites the payload and maybe the packet number (for QoS > 0). For Opifex I like the current way of checking everything for validity, but I can imagine you might want to make other choices. Hope this helps,
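The "generate once, patch per publish" idea could look roughly like this (a hypothetical helper sketched for illustration, not an API either library offers):

```js
// Pre-encode a QoS 0 PUBLISH for a fixed topic and payload size,
// then patch only the payload bytes on each send.
function makePublishTemplate (topic, payloadSize) {
  const topicBuf = Buffer.from(topic, 'utf8')
  const remaining = 2 + topicBuf.length + payloadSize
  if (remaining > 127) throw new Error('sketch handles 1-byte lengths only')
  const packet = Buffer.alloc(2 + remaining)
  packet[0] = 0x30                       // PUBLISH, QoS 0
  packet[1] = remaining                  // remaining length (single byte)
  packet.writeUInt16BE(topicBuf.length, 2)
  topicBuf.copy(packet, 4)
  const payloadOffset = 4 + topicBuf.length
  return (payload) => {
    payload.copy(packet, payloadOffset)  // overwrite payload in place
    return packet
  }
}

const publish = makePublishTemplate('test', 4)
console.log(publish(Buffer.from('abcd'))) // same buffer every call, new payload
```

The obvious caveat is that the returned buffer is reused, so the caller must not hold on to it across sends; that trade-off is exactly the kind of choice a validating encoder like Opifex's avoids.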
-
I checked a bit more and the mqtt-packet test does call concat to create the Buffer, so we were comparing apples to apples yesterday. So where does the difference come from? I set iterations per round to 1.000.000 (1/10 of yesterday) to speed things up a bit. I modified pubPacket to allocate just the amount of bytes required for the packet instead of 1K; that already makes quite a difference: allocate exact packet size instead of 1K
Then I removed the .subarray() call: removed
And for good measure I also switched to Buffer.allocUnsafe():
As you can see the numbers are still a bit variable across rounds. But I think that in general we can conclude that:
Hope this helps.
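For context on the allocUnsafe change: Buffer.alloc zero-fills its memory, while Buffer.allocUnsafe skips the fill, which is only safe when every byte gets overwritten before use. A small sketch of that trade-off (my illustration, not the benchmark code):

```js
// Buffer.alloc zero-fills; Buffer.allocUnsafe skips the fill and may
// contain stale memory, so every byte must be written before use.
const safe = Buffer.alloc(4)        // guaranteed [0, 0, 0, 0]
const fast = Buffer.allocUnsafe(4)  // contents undefined until written
fast.write('test', 0, 'utf8')       // now fully overwritten

console.log(safe)            // <Buffer 00 00 00 00>
console.log(fast.toString()) // test
```

In a packet encoder the whole buffer is written anyway, so allocUnsafe trades a safety guarantee that was never needed for the cost of the memset.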
-
@mcollina, @robertsLando It uses:
Hope this helps! Kind regards,
-
I just tried encodeInto() this morning and added it to generateAll() in https://github.com/seriousme/mqtt-packet/tree/benchmark-publish-packet The results were predictable: it is by far the slowest solution.
It seems like more people are complaining about TextEncoder/TextDecoder performance: whatwg/encoding#343 Kind regards,
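For reference, these are the two standard ways of writing a string into a preallocated buffer being compared here: Node's Buffer.write versus the WHATWG TextEncoder.encodeInto (the performance verdict above is the author's measurement, not mine):

```js
// Two ways to write a UTF-8 string into a preallocated buffer:
// Buffer.write (Node-native) vs TextEncoder.encodeInto (WHATWG).
const topic = 'test'

const viaBuffer = Buffer.alloc(4)
viaBuffer.write(topic, 0, 'utf8')

const viaEncoder = new Uint8Array(4)
const { read, written } = new TextEncoder().encodeInto(topic, viaEncoder)

console.log(viaBuffer)     // <Buffer 74 65 73 74>
console.log(read, written) // 4 4
```

Both produce identical bytes; the difference is purely in per-call overhead, which is what the linked whatwg/encoding issue is about.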
-
I think that we both agree that for high speed data processing:
What I learned from the whole experience is that:
Kind regards,
-
I tried to summarize everything in #2002, let's continue our discussion there.
-
@robertsLando
As discussed in moscajs/aedes-persistence#105 (comment), let's discuss how to improve.