Add integration tests for query_unpaged and execute_unpaged APIs #1228

Open
wants to merge 1 commit into base: branch-hackathon
166 changes: 165 additions & 1 deletion scylla/tests/integration/query_result.rs
@@ -1,10 +1,11 @@
use assert_matches::assert_matches;
use futures::TryStreamExt;
-use scylla::errors::PagerExecutionError;
+use scylla::errors::{ExecutionError, PagerExecutionError};
use scylla::{
batch::{Batch, BatchType},
client::session::Session,
query::Query,
response::query_result::QueryResult,
};
use scylla_cql::frame::request::query::{PagingState, PagingStateResponse};

@@ -295,3 +296,166 @@ async fn execute_iter_serialization_error() {
Err(PagerExecutionError::SerializationError(_))
)
}

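// Creates a session, a unique keyspace (NetworkTopologyStrategy, RF = 1), and a table
// named `table_name` with schema (id int PRIMARY KEY, val int), then returns the session.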
async fn create_session(table_name: &str) -> Session {
let session: Session = create_new_session_builder().build().await.unwrap();
let ks = unique_keyspace_name();

let cql_create_ks = format!(
"CREATE KEYSPACE IF NOT EXISTS {} WITH REPLICATION =\
{{'class' : 'NetworkTopologyStrategy', 'replication_factor' : 1}}",
ks
);
session.ddl(cql_create_ks).await.unwrap();
session.use_keyspace(ks, false).await.unwrap();

let cql_create_table = format!(
"CREATE TABLE IF NOT EXISTS {} (id int PRIMARY KEY, val int)",
table_name,
);
session.ddl(cql_create_table).await.unwrap();

session
}

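// Inserts `insert_num` rows into a fresh table and verifies that an unpaged SELECT returns
// all of them. With `use_prepared_statements` the inserts and the final SELECT go through
// prepared statements (`execute_unpaged`); otherwise they go through `query_unpaged`.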
async fn check_query_unpaged(insert_num: u64, use_prepared_statements: bool) {
let table_name = if use_prepared_statements {
"execute_unpaged"
} else {
"query_unpaged"
};
let session: Session = create_session(table_name).await;

for i in 0..insert_num {
if use_prepared_statements {
let prepared_statement = session
.prepare(format!("INSERT INTO {}(id, val) VALUES (?, ?)", table_name))
.await
.unwrap();
session
.execute_unpaged(&prepared_statement, &vec![i as i32, i as i32])
.await
.unwrap();
} else {
let cql = format!("INSERT INTO {}(id, val) VALUES ({}, {})", table_name, i, i);
session.query_unpaged(cql, &[]).await.unwrap();
}
}

let query_result: QueryResult;
if use_prepared_statements {
let prepared_statement = session
.prepare(format!("SELECT * FROM {}", table_name))
.await
.unwrap();
query_result = session
.execute_unpaged(&prepared_statement, &[])
.await
.unwrap();
} else {
let select_query = Query::new(format!("SELECT * FROM {}", table_name)).with_page_size(5);
query_result = session.query_unpaged(select_query, &[]).await.unwrap();
}
let rows = query_result.into_rows_result().unwrap();

    // NOTE: check the number of rows using the 'rows_num()' method.
assert_eq!(rows.rows_num(), insert_num as usize);

    // NOTE: check the actual number of deserialized rows.
let actual_rows = rows
.rows::<(i32, i32)>()
.unwrap()
.collect::<Result<Vec<_>, _>>()
.unwrap();
assert_eq!(actual_rows.len(), insert_num as usize);
}

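// Verifies that an unpaged statement against a missing table returns an `ExecutionError`:
// in the prepared path the table is dropped after preparing the SELECT; in the unprepared
// path a nonexistent table name is queried.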
async fn unpaged_error(use_prepared_statements: bool) {
let table_name = if use_prepared_statements {
"execute_unpaged"
} else {
"query_unpaged"
};
let session: Session = create_session(table_name).await;

let query_result: Result<QueryResult, ExecutionError>;
if use_prepared_statements {
let prepared_statement = session
.prepare(format!("SELECT * FROM {}", table_name))
.await
.unwrap();
        // NOTE: drop the table so that the main query returns an error
session
.ddl(format!("DROP TABLE IF EXISTS {}", table_name))
.await
.unwrap();
query_result = session.execute_unpaged(&prepared_statement, &[]).await;
} else {
let select_query = Query::new(format!("SELECT * FROM fake{}", table_name));
query_result = session.query_unpaged(select_query, &[]).await;
}
match query_result {
Ok(_) => panic!("Unexpected success"),
Err(err) => println!("Table not found as expected: {:?}", err),
}
}

#[tokio::test]
async fn test_query_unpaged_error() {
unpaged_error(false).await
}

#[tokio::test]
async fn test_execute_unpaged_error() {
unpaged_error(true).await
}

#[tokio::test]
async fn test_query_unpaged_no_rows() {
check_query_unpaged(0, false).await;
}

#[tokio::test]
async fn test_query_unpaged_one_row() {
check_query_unpaged(1, false).await;
}

#[tokio::test]
async fn test_query_unpaged_ten_rows() {
check_query_unpaged(10, false).await;
}

#[tokio::test]
async fn test_query_unpaged_hundred_rows() {
check_query_unpaged(100, false).await;
}

#[tokio::test]
async fn test_query_unpaged_thousand_rows() {
check_query_unpaged(1000, false).await;
}

#[tokio::test]
async fn test_execute_unpaged_no_rows() {
check_query_unpaged(0, true).await;
}

#[tokio::test]
async fn test_execute_unpaged_one_row() {
check_query_unpaged(1, true).await;
}

#[tokio::test]
async fn test_execute_unpaged_ten_rows() {
check_query_unpaged(10, true).await;
}

#[tokio::test]
async fn test_execute_unpaged_hundred_rows() {
check_query_unpaged(100, true).await;
}

#[tokio::test]
async fn test_execute_unpaged_thousand_rows() {
check_query_unpaged(1000, true).await;
}
Comment on lines +413 to +461

Collaborator

Those tests could be merged into one test (or two: one for prepared statements, one for unprepared) in order to reuse the created session and table.

Author

All 23 tests in this module take about 5 seconds, not counting compilation time.

The current approach keeps the tests isolated; by reusing the session and tables we introduce side effects between them.
With 5 seconds for 23 tests, I don't think the saved run time is worth those side effects.

Also, the 11 existing tests follow the same approach - isolated logic - which I think is the proper approach for tests.

Collaborator

@Lorak-mmk Feb 12, 2025

5 seconds of execution time is not short.
I know that writing them as one test is a bit more involved (e.g. you need to DELETE rows from the table between test cases), but I think it's still worth it to speed things up.
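For illustration, a rough, hypothetical sketch of the merged unprepared-path variant (it reuses the create_session helper from this diff and clears the table with TRUNCATE between cases instead of recreating the keyspace and table; the test name is made up):

// Hypothetical sketch only: one test covering several row counts against a single
// session and table, truncating between cases instead of recreating the schema.
#[tokio::test]
async fn test_query_unpaged_row_counts() {
    let table_name = "query_unpaged_merged";
    let session: Session = create_session(table_name).await;

    for &insert_num in &[0u64, 1, 10, 100, 1000] {
        for i in 0..insert_num {
            let cql = format!("INSERT INTO {}(id, val) VALUES ({}, {})", table_name, i, i);
            session.query_unpaged(cql, &[]).await.unwrap();
        }

        let query_result = session
            .query_unpaged(format!("SELECT * FROM {}", table_name), &[])
            .await
            .unwrap();
        let rows = query_result.into_rows_result().unwrap();
        assert_eq!(rows.rows_num(), insert_num as usize);

        // Clear the table before the next case; a failing assert above ends the test early.
        session
            .query_unpaged(format!("TRUNCATE {}", table_name), &[])
            .await
            .unwrap();
    }
}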

Author

It is not just a matter of a few DELETEs or TRUNCATEs; it is losing test isolation.
Also, DELETE and TRUNCATE calls take some time too.
When adding new tests we should not have to worry about the order in which test cases run, or about a failure in the middle leaving the table in an unexpected state and causing other cases to fail.
I still value test isolation more than the rather small possible time gain.

Also, note that the 5 seconds covers all the tests in this module, including the existing ones.

Collaborator

If some test case fails in the middle, just report the failure without running the remaining cases.
DELETE takes time, but much less than creating a keyspace, a table, and a session.
Test time is important: it directly affects iteration time for developers.

Author

Proper test isolation reduces investigation time when failures occur.

So it looks like we need other opinions here to reach a quorum.

Collaborator

Investigating failing tests has never been a problem in the Rust Driver. Iteration time, on the other hand, constantly affects development speed.

Collaborator

I think we need to merge them. Another reason is that there is a chance for a DDL statement to fail when run in parallel due to a Scylla bug, so to make the suite more stable we need to reduce them as much as possible.
