System overload when there are too many OIDs #39
Backup plugin: the greatest bottleneck is the log. Since I have 137,000 entries, it issues 2 × 137,000 queries (one to fetch the users and one to fetch the objects of each entry).

select log.id, lobj.object, lusr.username
FROM `oidplus_log` log
LEFT JOIN `oidplus_log_user` lusr on lusr.log_id = log.id
LEFT JOIN `oidplus_log_object` lobj on lobj.log_id = log.id
WHERE log.id > 136951 and log.id < 137000

WIP:

===================================================================
--- OIDplusPageAdminDatabaseBackup.class.php (Revision 1446)
+++ OIDplusPageAdminDatabaseBackup.class.php (Arbeitskopie)
@@ -335,6 +335,49 @@
// Backup logs (Tables log, log_object, log_user)
$log = [];
if ($export_log) {
+
+ $log_tmp = array();
+
+ $res = OIDplus::db()->query("SELECT log.id as _log_id, log.unix_ts, log.addr, log.event, ".
+ " lobj.object, lobj.severity as obj_severity, ".
+ " lusr.username, lusr.severity as usr_severity ".
+ "FROM ###log log ".
+ "LEFT JOIN ###log_user lusr on lusr.log_id = log.id ".
+ "LEFT JOIN ###log_object lobj on lobj.log_id = log.id ");
+ while ($row = $res->fetch_array()) {
+ $id = $row['_log_id'];
+ if (!isset($log_tmp[$id])) {
+ $num_rows["log"]++;
+ $log_tmp[$id] = [
+ "unix_ts" => $row["unix_ts"],
+ "addr" => $row["addr"],
+ "event" => $row["event"],
+ "objects" => [],
+ "users" => []
+ ];
+ }
+ if (($row['object'] ?? '') != '') {
+ $num_rows["log_object"]++;
+ $log_tmp[$id]['objects'][] = [
+ "object" => $row['object'],
+ "severity" => $row['obj_severity'] ?? 0
+ ];
+ }
+ if (($row['username'] ?? '') != '') {
+ $num_rows["log_user"]++;
+ $log_tmp[$id]['users'][] = [
+ "username" => $row['username'],
+ "severity" => $row['usr_severity'] ?? 0
+ ];
+ }
+ }
+
+ foreach ($log_tmp as $log_id => $data) {
+ $log[] = $data;
+ }
+ unset($log_tmp);
+
+ /*
$res = OIDplus::db()->query("select * from ###log order by id");
$rows = [];
while ($row = $res->fetch_array()) {
@@ -372,6 +415,7 @@
"users" => $log_users
];
}
+ */
}
// Backup public/private key

But this does not work for me, because this query loads forever until I get a timeout after 5 minutes!

SELECT log.id as _log_id, log.unix_ts, log.addr, log.event, lobj.object, lobj.severity as obj_severity, lusr.username, lusr.severity as usr_severity
FROM oidplus_log log
LEFT JOIN oidplus_log_user lusr on lusr.log_id = log.id
LEFT JOIN oidplus_log_object lobj on lobj.log_id = log.id

Note that this query has the disadvantage that joining both tables at once multiplies the rows whenever a log entry has several users and several objects. The same thing applies to OIDs, where you need to fetch an ASN1ID and an IRI for each OID!

... maybe log entries should be pruned? (maybe even prune by severity?) Maybe save them as TXT before pruning...
... if the backup fails, will the restore then also fail?
The whole OIDplus system is a mess when there are 56,000 OIDs.
Hello Daniel, P.S.: Please excuse the delay, I was quite busy (also offline)...
Unfortunately, the system is slow even without the AltID plugin.
Taken from the current TODO file:
See also frdl/oidplus-plugin-alternate-id-tracking#17 for the Alt-ID plugins.
In addition, it turns out that the backup plugin cannot be used. It has always been slow, but now it is so slow that it throws an HTTP 500...
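One way to keep the backup query from timing out would be keyset pagination over the id column (an assumption on my part, not something the plugin currently does): fetch the log in fixed-size slices and loop until no rows come back, combining the batched range idea and the joined query from earlier in the thread.

```sql
-- Hypothetical sketch: :last_id starts at 0 and is set to the largest
-- log.id of the previous slice; repeat until the query returns no rows.
SELECT log.id AS _log_id, log.unix_ts, log.addr, log.event,
       lobj.object, lobj.severity AS obj_severity,
       lusr.username, lusr.severity AS usr_severity
FROM oidplus_log log
LEFT JOIN oidplus_log_user lusr ON lusr.log_id = log.id
LEFT JOIN oidplus_log_object lobj ON lobj.log_id = log.id
WHERE log.id > :last_id
ORDER BY log.id
LIMIT 1000;
```

Because WHERE and ORDER BY use the primary key, every slice costs roughly the same, unlike LIMIT/OFFSET paging, where later pages get progressively slower.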