Likely. All structural mutation operations should be very fast, and we can
allocate before we take the lock. Allocation accounts for much of the total
time unless we have a lightweight pool.
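For concreteness, a minimal sketch of "allocate before we lock", with hypothetical `node_t`/`list_t` types and `std::mutex` standing in for whichever lock strategy we end up using:

```cpp
#include <memory>
#include <mutex>

// Illustrative types only, not the actual index structures.
struct node_t {
    int key = 0;
    node_t* next = nullptr;
};

struct list_t {
    std::mutex lock;          // stand-in for the chosen lock type
    node_t* head = nullptr;

    void insert(int key) {
        // Allocation (usually the dominant cost) happens before we lock...
        auto node = std::make_unique<node_t>();
        node->key = key;

        // ...so the critical section is just a couple of pointer writes.
        std::lock_guard<std::mutex> guard(lock);
        node->next = head;
        head = node.release();
    }
};
```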
We generally do something closer to ROWEX, and I can help work through what
that might look like as we get further along, though it offered no benefit
in their testing.
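Roughly, ROWEX means readers never take a lock at all, while writers serialize against each other and publish every change with atomic stores that in-flight readers can tolerate. A minimal sketch of the idea, using made-up names rather than the actual node layout:

```cpp
#include <atomic>
#include <mutex>

struct rowex_node_t {
    std::mutex write_lock;                      // writers only; readers never touch it
    std::atomic<rowex_node_t*> child{nullptr};

    // Reader: a single atomic load, no lock, no retry loop.
    rowex_node_t* find_child() const {
        return child.load(std::memory_order_acquire);
    }

    // Writer: exclusive among writers, but readers keep running concurrently,
    // so the new child must be fully initialized before it is published.
    void set_child(rowex_node_t* new_child) {
        std::lock_guard<std::mutex> guard(write_lock);
        child.store(new_child, std::memory_order_release);
    }
};
```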
Templating both the lock strategy and the key structure would seem to
increase flexibility.
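For example, something along these lines, where `locked_index` and `null_lock_t` are made-up names meant only to show the shape of a lock-policy template parameter:

```cpp
#include <map>
#include <mutex>

// A no-op lock for single-threaded builds; satisfies the BasicLockable interface.
struct null_lock_t {
    void lock() noexcept {}
    void unlock() noexcept {}
    bool try_lock() noexcept { return true; }
};

template <typename Key, typename Value, typename LockPolicy = std::mutex>
class locked_index {
public:
    void insert(const Key& key, const Value& value) {
        std::lock_guard<LockPolicy> guard(lock_);
        map_.emplace(key, value);
    }

private:
    LockPolicy lock_;
    std::map<Key, Value> map_;
};

// Usage: swap the locking strategy (or key type) without touching the logic.
// locked_index<uint64_t, void*, spin_lock_t> concurrent_index;
// locked_index<uint64_t, void*, null_lock_t> single_threaded_index;
```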
Bryan
You mentioned an issue with the use of _mm_pause in the spin lock implementation here. We've been using a variant of this spin lock:
```cpp
// spin_lock code is taken from: https://rigtorp.se/spinlock/
// modified to handle ARM
#include <atomic>

struct spin_lock_t {
  std::atomic<bool> lock_ = {false};

  void lock() noexcept {
    for (;;) {
      // Optimistically assume the lock is free on the first try
      if (!lock_.exchange(true, std::memory_order_acquire)) {
        return;
      }
      // Wait for lock to be released without generating cache misses
      while (lock_.load(std::memory_order_relaxed)) {
        cpu_acquiesce();
      }
    }
  }

  bool try_lock() noexcept {
    // First do a relaxed load to check if lock is free in order to prevent
    // unnecessary cache misses if someone does while(!try_lock())
    return !lock_.load(std::memory_order_relaxed) &&
           !lock_.exchange(true, std::memory_order_acquire);
  }

  void unlock() noexcept {
    lock_.store(false, std::memory_order_release);
  }
}; // spin_lock_t
```
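`cpu_acquiesce()` isn't defined in the snippet; given the "_mm_pause on x86, modified to handle ARM" description, a plausible definition (an assumption, not the actual code) might be:

```cpp
#if defined(__x86_64__) || defined(__i386__)
  #include <immintrin.h>
  inline void cpu_acquiesce() noexcept { _mm_pause(); }                 // x86 PAUSE hint
#elif defined(__aarch64__) || defined(__arm__)
  inline void cpu_acquiesce() noexcept { __asm__ __volatile__("yield"); } // ARM YIELD hint
#else
  inline void cpu_acquiesce() noexcept {}                               // fallback: no-op
#endif
```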