feat: Performance improvement #31
Approx. 4x faster than the previous implementation.

```
benchmark          time (avg)       iter/s    (min … max)            p75        p99        p995
-----------------------------------------------------------------------------------------------
Semaphore           78.77 µs/iter   12,695.7  (61.62 µs … 1.81 ms)   68.04 µs  607.38 µs  665.96 µs
Semaphore (1.0.2)  296.69 µs/iter    3,370.6  (260.5 µs … 2.24 ms)  279.54 µs  635.88 µs  687.54 µs
```
Approx. 1.5x faster than the previous implementation.

```
benchmark          time (avg)          iter/s       (min … max)               p75        p99        p995
--------------------------------------------------------------------------------------------------------
Mutex              452.41 ns/iter  2,210,370.0  (439.27 ns … 545.97 ns)  446.38 ns  537.83 ns  545.97 ns
Mutex (1.0.2)      666.79 ns/iter  1,499,722.7  (636.54 ns … 947.56 ns)  653.95 ns  947.56 ns  947.56 ns
Mutex (Issue #30)  435.85 ns/iter  2,294,381.9  (428.7 ns … 458.39 ns)   435.39 ns  457.76 ns  458.39 ns
```
Codecov Report. Attention: Patch coverage is

Additional details and impacted files:

```
@@ Coverage Diff @@
##              main      #31    +/-  ##
==========================================
+ Coverage    91.76%   92.00%  +0.23%
==========================================
  Files           11       11
  Lines          340      325     -15
  Branches        41       41
==========================================
- Hits           312      299     -13
+ Misses          28       26      -2
```
It's not entirely clear what `Object.assign` is used for. Why bind the disposing methods to the promise itself?

```ts
return Object.assign(Promise.resolve(disposable), disposable);
return Object.assign(waiter.promise.then(() => disposable), disposable);
```
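For illustration, here is a minimal sketch of the pattern being asked about (hypothetical names, not the library's API): `Object.assign` copies the disposable's own enumerable properties, including the symbol-keyed dispose method, onto the promise object itself, so the *pending* promise can be disposed without awaiting it first. This assumes `Symbol.dispose` is native (Node >= 20 / recent Deno), with a registered-symbol fallback otherwise.

```typescript
// Assumption: use native Symbol.dispose when available, else a registered symbol.
const disposeSym: symbol =
  (Symbol as { dispose?: symbol }).dispose ?? Symbol.for("Symbol.dispose");

const releases: string[] = [];

// Hypothetical stand-in for Semaphore#acquire, showing only the Object.assign trick.
function acquireLike(tag: string): Promise<object> & Record<symbol, () => void> {
  const disposable = {
    [disposeSym]: () => {
      releases.push(tag); // record that a release/cancel happened
    },
  };
  // Object.assign copies [disposeSym] onto the promise, so both the promise
  // and its resolved value expose the dispose method.
  return Object.assign(Promise.resolve(disposable), disposable);
}
```

Disposing the promise before it settles is what would let a pending acquisition be cancelled, which is also what makes the double-release case discussed later in this thread possible.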
Also, I would recommend that the module only use `lock[Symbol.dispose]()` under the hood, and give users the opportunity to release manually:

```ts
const sem = new Semaphore(1);

{
  const lock = await sem.acquire();
  try {
    // do ...
  } finally {
    lock[Symbol.dispose](); // or sem.release();
  }
}

{
  const lock = await sem.acquire();
  return asyncTask().finally(() => lock[Symbol.dispose]()); // or sem.release();
}
```

You also need to write tests and benchmarks for all of these paths.
Also, `Deno.bench` does not run tasks concurrently, which means the semaphore will always be unlocked and waiters will never be put in the Set. You also need benchmarks for concurrent access to `acquire`, so that waiters actually end up in the Set.
Semaphore: #30 (comment). While there is no native `using` feature in V8, you should not use it in libraries or in places where it can harm performance:

```ts
import { Semaphore } from './semaphore.ts';

const sem = new Semaphore(1);

Deno.bench({
  name: 'release with using',
  fn: async () => {
    using _lock = await sem.acquire();
  },
  group: 'semaphore acquire'
});

Deno.bench({
  name: 'release with using (acquireWithSignal)',
  fn: async () => {
    using _lock = await sem.acquireWithSignal();
  },
  group: 'semaphore acquire'
});

Deno.bench({
  name: 'release with [Symbol.dispose]',
  fn: async () => {
    const lock = await sem.acquire();
    try {
      // do...
    } finally {
      lock[Symbol.dispose]();
    }
  },
  group: 'semaphore acquire'
});

Deno.bench({
  name: 'release with [Symbol.dispose] (acquireWithSignal)',
  fn: async () => {
    const lock = await sem.acquireWithSignal();
    try {
      // do...
    } finally {
      lock[Symbol.dispose]();
    }
  },
  group: 'semaphore acquire'
});

Deno.bench({
  name: 'release with using',
  fn: async () => {
    const task = async () => {
      using _lock = await sem.acquire();
    };
    await Promise.all([task(), task(), task(), task(), task()]);
  },
  group: 'semaphore concurrent tasks (5)'
});

Deno.bench({
  name: 'release with using (acquireWithSignal)',
  fn: async () => {
    const task = async () => {
      using _lock = await sem.acquireWithSignal();
    };
    await Promise.all([task(), task(), task(), task(), task()]);
  },
  group: 'semaphore concurrent tasks (5)'
});

Deno.bench({
  name: 'release with [Symbol.dispose]',
  fn: async () => {
    const task = async () => {
      const lock = await sem.acquire();
      try {
        // do ...
      } finally {
        lock[Symbol.dispose]();
      }
    };
    await Promise.all([task(), task(), task(), task(), task()]);
  },
  group: 'semaphore concurrent tasks (5)'
});

Deno.bench({
  name: 'release with [Symbol.dispose] (acquireWithSignal)',
  fn: async () => {
    const task = async () => {
      const lock = await sem.acquireWithSignal();
      try {
        // do ...
      } finally {
        lock[Symbol.dispose]();
      }
    };
    await Promise.all([task(), task(), task(), task(), task()]);
  },
  group: 'semaphore concurrent tasks (5)'
});
```
### benchmarks
The Set-based semaphore implementation will perform much the same as a Deque-based one as long as there are few waiters. As soon as there are many waiters, there is a significant performance drop, since a Set hashes the element when adding it and when looking it up, while a Deque yields the first element immediately.
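The trade-off described above can be sketched with a minimal FIFO waiter queue (a hypothetical `WaiterQueue`, not this library's type): the releaser takes the oldest waiter by position instead of hashing into a Set. Note that a plain array's `shift()` is O(n); a real deque uses a ring buffer to make it O(1).

```typescript
// Hypothetical sketch: an array used as a FIFO queue of waiters, the
// Deque-style alternative to the Set described above.
type Waiter = { resolve: () => void };

class WaiterQueue {
  #items: Waiter[] = [];

  add(w: Waiter): void {
    this.#items.push(w); // enqueue at the back
  }

  takeFirst(): Waiter | undefined {
    return this.#items.shift(); // dequeue from the front; no hashing involved
  }

  get size(): number {
    return this.#items.length;
  }
}
```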
LGTM except for the double-release issue.
```ts
 *
 * @returns A Promise with Disposable that releases the mutex when disposed.
 */
acquire(): Promise<Disposable> & Disposable {
```
so this seems to enable an interesting error case: it is possible to release the same lock twice.

```ts
using pending = sem.acquire(); // will dispose the disposable promise
// some stuff
using _lock = await pending; // the inner lock will *also* be disposed
```

Now, that seems unusual and I don't think people should do it, but making both the promise value and the promise itself Disposable allows for it. It seems to me that the intent here is to allow a pending lock attempt to be cancelled, but double-release protection of some kind is needed (or "cancel" and "release" should be different actions).
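One minimal way to get such double-release protection (a hypothetical helper sketch, not this library's API) is to wrap the underlying release so every call after the first is a no-op:

```typescript
// Hypothetical guard: the returned function forwards to `release` at most once.
function onceRelease(release: () => void): () => void {
  let done = false;
  return () => {
    if (done) return; // second and later calls are no-ops
    done = true;
    release();
  };
}
```

Attaching the same guarded function as both `release` and `[Symbol.dispose]` would make disposing the promise and disposing the resolved lock idempotent.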
It is not clear why such a design and ordering is needed at all. What's the problem with using it like this? Also, this does not solve the problem of redundant `using` calls. If you need protection against releasing a single lock more than once, you only need to store the disposed state on the object itself:

```ts
import { scheduler } from 'node:timers/promises';
import { Semaphore } from "./semaphore.ts";

const sem = new Semaphore(1);

async function doSome(obj: { text: string }) {
  const pendingLock = sem.acquire();
  using lock = await pendingLock;
  // do task ...
  await scheduler.wait(1000);
  using _lock = await pendingLock;
  // maybe throw
  obj.text += 'hello ';
  using _lock2 = await pendingLock;
  // maybe throw
  obj.text += 'world';
  return obj;
}
```
Ah, I misunderstood the question.
I think you can do it like this:

```ts
const kDisposed = Symbol('disposed'); // state key for double-release protection

acquire(): Promise<Disposable> {
  const disposable = {
    [kDisposed]: false,
    release: () => this.release(),
    [Symbol.dispose]() {
      if (!this[kDisposed]) {
        this.release();
        this[kDisposed] = true;
      }
    },
  };
  if (this.#value > 0) {
    this.#value--;
    return Promise.resolve(disposable);
  }
  const waiter = Promise.withResolvers<void>();
  this.#waiters.add(waiter);
  return waiter.promise.then(() => disposable);
}
```
The current
I see. I haven't considered that. It seems Node / Bun already support `using`.
No runtime has native support for `using` yet; check the TC39 state: https://github.com/tc39/proposal-explicit-resource-management. At the moment, `using` is transpiled to a JS implementation, which is not well optimized. For more than a year, runtimes have been preparing for `using` to land natively in V8. I took measurements on all runtimes, Bun, Deno, and Node (tsx), and everywhere `using` led to a significant performance degradation.
Look at what it turns into when you build.

source:

```ts
const disposable = {
  data: '123',
  [Symbol.dispose]() {
    console.log('disposed');
  }
};

{
  using a = disposable;
  console.log(a);
}
```

bundle:

```js
function _using_ctx() {
  var _disposeSuppressedError = typeof SuppressedError === "function" ? SuppressedError : function(error, suppressed) {
    var err = new Error();
    err.name = "SuppressedError";
    err.suppressed = suppressed;
    err.error = error;
    return err;
  }, empty = {}, stack = [];
  function using(isAwait, value) {
    if (value != null) {
      if (Object(value) !== value) {
        throw new TypeError("using declarations can only be used with objects, functions, null, or undefined.");
      }
      if (isAwait) {
        var dispose = value[Symbol.asyncDispose || Symbol.for("Symbol.asyncDispose")];
      }
      if (dispose == null) {
        dispose = value[Symbol.dispose || Symbol.for("Symbol.dispose")];
      }
      if (typeof dispose !== "function") {
        throw new TypeError(`Property [Symbol.dispose] is not a function.`);
      }
      stack.push({
        v: value,
        d: dispose,
        a: isAwait
      });
    } else if (isAwait) {
      stack.push({
        d: value,
        a: isAwait
      });
    }
    return value;
  }
  return {
    e: empty,
    u: using.bind(null, false),
    a: using.bind(null, true),
    d: function() {
      var error = this.e;
      function next() {
        while (resource = stack.pop()) {
          try {
            var resource, disposalResult = resource.d && resource.d.call(resource.v);
            if (resource.a) {
              return Promise.resolve(disposalResult).then(next, err);
            }
          } catch (e) {
            return err(e);
          }
        }
        if (error !== empty) throw error;
      }
      function err(e) {
        error = error !== empty ? new _disposeSuppressedError(error, e) : e;
        return next();
      }
      return next();
    }
  };
}

const disposable = {
  data: '123',
  [Symbol.dispose]() {
    console.log('disposed');
  }
};

{
  try {
    var _usingCtx = _using_ctx();
    const a = _usingCtx.u(disposable);
    console.log(a);
  } catch (_) {
    _usingCtx.e = _;
  } finally {
    _usingCtx.d();
  }
}
```
There's no point arguing; make benchmarks for both versions and you'll see for yourself.
I’ve already decided to remove `using` from the internals. I’m aware that `using` is not yet natively supported.
OK, if you don't want to expose the method publicly, then you can look at this option. Example usage:

```ts
const sem = new Semaphore(1);

// release with using
{
  using _lock = await sem.acquire();
}

// release with lock.release
{
  const lock = await sem.acquire();
  try {
    // do
  } finally {
    lock.release();
  }
}

// release with lock[Symbol.dispose]
{
  const lock = await sem.acquire();
  try {
    // do
  } finally {
    lock[Symbol.dispose]();
  }
}

//
{
  const lockPromise = sem.acquire();
  try {
    // do
    const lock = await lockPromise;
    await asyncTask().finally(() => lock.release());
  } finally {
    await lockPromise.then((lock) => lock.release());
  }
}
```

```ts
export class Semaphore {
  #waiters = new Set<PromiseWithResolvers<void>>();
  #value: number;

  get locked() {
    return this.#value === 0;
  }

  get waiters() {
    return this.#waiters.size;
  }

  constructor(value: number) {
    this.#value = value;
  }

  acquire(): Promise<{ release: () => void } & Disposable> {
    let disposed = false;
    const release = () => {
      if (!disposed) {
        this.#release();
        disposed = true;
      }
    };
    const disposable = {
      release: release,
      [Symbol.dispose]: release,
    };
    if (this.#value > 0) {
      this.#value--;
      return Promise.resolve(disposable);
    }
    const waiter = Promise.withResolvers<void>();
    this.#waiters.add(waiter);
    return waiter.promise.then(() => disposable);
  }

  #release() {
    if (this.#waiters.size > 0) {
      const waiters = this.#waiters;
      const [waiter] = waiters.keys();
      waiters.delete(waiter);
      waiter.resolve();
    } else {
      this.#value++;
    }
  }
}
```
I'll remove `using`:

```diff
diff --git a/lock.ts b/lock.ts
index bf49c90..2b5d955 100644
--- a/lock.ts
+++ b/lock.ts
@@ -43,7 +43,11 @@ export class Lock<T> {
    * @returns A Promise that resolves with the result of the function.
    */
   async lock<R>(fn: (value: T) => R | PromiseLike<R>): Promise<R> {
-    using _lock = await this.#mu.acquire();
-    return await fn(this.#value);
+    const lock = await this.#mu.acquire();
+    try {
+      return await fn(this.#value);
+    } finally {
+      lock[Symbol.dispose]();
+    }
   }
 }
```
I do not know what you have done there, but `using` cannot work faster than explicitly calling `[Symbol.dispose]()`:

```ts
function getDisposable(): Promise<Disposable> {
  const disposable = {
    [Symbol.dispose]() {},
  };
  return Promise.resolve(disposable);
}

Deno.bench({ name: "using", fn: async () => {
  using _lock = await getDisposable();
}});

Deno.bench({ name: "Symbol.dispose", fn: async () => {
  const _lock = await getDisposable();
  _lock[Symbol.dispose]();
}});

Deno.bench({ name: "try/finally Symbol.dispose", fn: async () => {
  const _lock = await getDisposable();
  try {
    //
  } finally {
    _lock[Symbol.dispose]();
  }
}});
```
I measured `lock` on my implementation, and my version wins by 10-15% over the implementations using `using` or try/finally:

```ts
async lock<R>(fn: () => R | PromiseLike<R>): Promise<R> {
  const lock = await this.acquire();
  return await Promise.resolve(fn()).finally(() => lock[Symbol.dispose]());
}

async lockTry<R>(fn: () => R | PromiseLike<R>): Promise<R> {
  const lock = await this.acquire();
  try {
    return await fn();
  } finally {
    lock.release();
  }
}

async usingLock<R>(fn: () => R | PromiseLike<R>): Promise<R> {
  using _lock = await this.acquire();
  return await fn();
}
```
It seems `using` in Node is not as performant as expected. Note that the performance in Deno seems not to be affected by this change.

```
benchmark   time (avg)      iter/s  (min … max)            p75       p99       p995
-----------------------------------------------------------------------------------
group Lock#lock
v1.0.0      52.84 ms/iter   18.9    (45.86 ms … 57.58 ms)  55.68 ms  57.58 ms  57.58 ms
main        53.55 ms/iter   18.7    (47.32 ms … 73.59 ms)  55.52 ms  73.59 ms  73.59 ms

summary
  v1.0.0 1.01x faster than main
```

See #31 for detail.
I re-implemented and continued on #34. Please give me comments on that PR @PandaWorker @mdekstrand |
The performance-improvement refactorings suggested in #30 are applied.
Thanks to @PandaWorker 🎉