
Add mutex to logger to avoid race condition #100

Closed · wants to merge 3 commits
Conversation

dyastremsky (Contributor) commented Sep 6, 2023

Our logger uses std::stringstream, which is not thread-safe. Adding a mutex should remove possible undefined behavior (e.g. overlapping logs and segfaults).

Per the C++17 standard: "Concurrent access to a stream object (30.8, 30.9), stream buffer object (30.6), or C Library stream (30.12) by multiple threads may result in a data race (6.8.2) unless otherwise specified (30.4). [ Note: Data races result in undefined behavior (6.8.2). — end note ]" [iostreams.threadsafety]. (Source: https://berthub.eu/articles/posts/iostreams-unexpected/; the same wording appears in a 2020 draft of the C++ standard under 29.2.3.)
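For readers without the diff open, here is a minimal sketch of the kind of guard being proposed. The names (`LogMessage`, `stream_`, `LOG_INFO`) are illustrative assumptions, not necessarily the repository's actual logger code:

```cpp
#include <iostream>
#include <mutex>
#include <sstream>

class LogMessage {
 public:
  ~LogMessage() {
    // Serialize the final write so messages from different threads
    // cannot interleave on the shared output stream.
    std::lock_guard<std::mutex> lock(mutex_);
    std::cerr << stream_.str() << std::endl;
  }

  std::stringstream& stream() { return stream_; }

 private:
  static std::mutex mutex_;   // shared by all LogMessage instances
  std::stringstream stream_;  // per-message buffer
};

std::mutex LogMessage::mutex_;

// Each invocation constructs its own LogMessage; the destructor performs
// the locked write at the end of the full expression.
#define LOG_INFO LogMessage().stream()

int main() {
  LOG_INFO << "hello from a worker thread";
  return 0;
}
```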

dyastremsky marked this pull request as ready for review September 6, 2023 18:43
kthui (Contributor) commented Sep 6, 2023

I think LogMessage::LogMessage() is a constructor and stream_ is local to the LogMessage class. I am wondering how stream_ can be accessed concurrently in the constructor?

nv-kmcgill53 (Contributor) commented

Are you able to reproduce this issue reliably enough to create a test out of it? Also, is this something you ran into while developing, or is there an issue related to this fix?

dyastremsky (Contributor, Author) commented

> I think LogMessage::LogMessage() is a constructor and stream_ is local to the LogMessage class. I am wondering how stream_ can be accessed concurrently in the constructor?

It looks like whenever a message is logged, a new log message is created. So my concern was that writing to the stream would not be thread-safe, even with different streams. It's hard to find information on this online, so perhaps that's incorrect? That's why I pinged the team: if this eliminates race conditions, it would be a great change to get in, but I want to make sure I'm not missing anything.

dyastremsky (Contributor, Author) commented

> Are you able to reproduce this issue reliably enough to create a test out of it? Also, is this something you ran into while developing, or is there an issue related to this fix?

I can't reproduce the issue reliably. Let me provide some more context via Slack.

nv-kmcgill53 (Contributor) commented

> It looks like whenever a message is logged, a new log message is created. So my concern was that writing to the stream would not be thread-safe, even with different streams.

Why wouldn't this be OK? If two different LogMessage()s get called on two different threads, then each stream lives on its own thread. What you're describing would be a problem if we had one stream_ referenced by both threads, but since each has its own memory, this should be fine.
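A small sketch of the distinction being made here (names are illustrative, not the actual logger): each thread constructing its own stringstream never touches the same stream object, which is the only case the quoted standard text flags as a potential data race.

```cpp
#include <sstream>
#include <thread>

// Safe: each thread builds its message in its own stringstream, so no
// stream object is ever accessed by two threads at once.
void log_per_thread(int id) {
  std::stringstream local;  // distinct object per call/thread
  local << "message from thread " << id;
  // The eventual write to a shared sink (e.g. std::cerr) is where
  // synchronization would actually matter.
}

// Racy: both threads would write into the SAME stream object, which
// [iostreams.threadsafety] says may be a data race.
void log_shared(std::stringstream& shared, int id) {
  shared << "message from thread " << id;  // UB if called concurrently
}

int main() {
  std::thread a(log_per_thread, 1);
  std::thread b(log_per_thread, 2);
  a.join();
  b.join();
  return 0;
}
```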

dyastremsky (Contributor, Author) commented

> > It looks like whenever a message is logged, a new log message is created. So my concern was that writing to the stream would not be thread-safe, even with different streams.
>
> Why wouldn't this be OK? If two different LogMessage()s get called on two different threads, then each stream lives on its own thread. What you're describing would be a problem if we had one stream_ referenced by both threads, but since each has its own memory, this should be fine.

Thank you for the feedback. It sounds like stringstreams aren't backed by global objects that get modified whenever a stream is initialized; each LogMessage has its own stream_. Appreciate the feedback from you and Kris here! Will close this out.

dyastremsky closed this Sep 7, 2023