Rust-Cdrs: cdrs — native client written in Rust


CDRS - Apache Cassandra driver

CDRS is an Apache Cassandra driver written in pure Rust.

Looking for an async version?

Features

  • TCP/SSL connection;
  • Load balancing;
  • Connection pooling;
  • LZ4, Snappy compression;
  • Cassandra-to-Rust data deserialization;
  • Pluggable authentication strategies;
  • ScyllaDB support;
  • Server events listening;
  • Multiple CQL version support (3, 4), full spec implementation;
  • Query tracing information.

Documentation and examples

Getting started

Add CDRS to your Cargo.toml file as a dependency:

cdrs = { version = "2" }

Then add it as an external crate to your main.rs:

extern crate cdrs;

use cdrs::authenticators::NoneAuthenticator;
use cdrs::cluster::session::{new as new_session};
use cdrs::cluster::{ClusterTcpConfig, NodeTcpConfigBuilder};
use cdrs::load_balancing::RoundRobin;
use cdrs::query::*;

fn main() {
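  // Configure a single node at 127.0.0.1:9042 with no authentication.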
  let node = NodeTcpConfigBuilder::new("127.0.0.1:9042", NoneAuthenticator {}).build();
  let cluster_config = ClusterTcpConfig(vec![node]);
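  // Build a session with round-robin load balancing; compression is left disabled.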
  let no_compression =
    new_session(&cluster_config, RoundRobin::new()).expect("session should be created");

  let create_ks: &'static str = "CREATE KEYSPACE IF NOT EXISTS test_ks WITH REPLICATION = { \
                                 'class' : 'SimpleStrategy', 'replication_factor' : 1 };";
  no_compression.query(create_ks).expect("Keyspace create error");
}

This example configures a cluster consisting of a single node and uses round-robin load balancing and the default r2d2 values for the connection pool.
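As a follow-up sketch (the table name is hypothetical and used only for illustration; the session is the one created above), parameterized statements go through query_with_values and the query_values! macro:

// Continues the example above; `no_compression` is the session returned by `new_session`.
// `test_ks.my_table` is a hypothetical table used only for illustration.
let create_table = "CREATE TABLE IF NOT EXISTS test_ks.my_table (id int PRIMARY KEY, name text);";
no_compression.query(create_table).expect("Table create error");

let insert_row = "INSERT INTO test_ks.my_table (id, name) VALUES (?, ?);";
no_compression
  .query_with_values(insert_row, cdrs::query_values!(1_i32, "first row".to_string()))
  .expect("Insert error");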

License

This project is licensed under either of

  • Apache License, Version 2.0
  • MIT license

at your option.

Comments

  • Document rustls feature

    Apr 18, 2020

    Thanks to @DoumanAsh, support for rustls was added. Now we need to document it:

    • provide an example
    • add a subpage to https://github.com/AlexPikalov/cdrs/tree/master/documentation
    Labels: documentation, easy-to-start, help wanted
  • rust-tls feature seemingly unusable

    May 16, 2020

    Hi! First off, thank you for a great library. I saw that support for rustls was recently added to CDRS; however, it seems that a crucial piece is missing before it can be used. Unless I'm overlooking something, it's currently not possible to construct a session that connects using it, as there's no new_rusttls (or similar) method available in session.rs to use the RustlsConnectionPool. Is there any way to use this, or does such a method need to be added?

  • Parameterized page queries

    May 27, 2020

    I want to execute a page query with a parameterized query; however, I don't see a way to do it, since the pager expects a String.

    This is the relevant code: https://github.com/AlexPikalov/cdrs/blob/07f7cf5a475bd18f1be596badff3a620d0a5c01e/src/cluster/pager.rs#L59
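    A rough workaround sketch that bypasses the pager and drives paging by hand; it assumes cdrs::query::QueryParamsBuilder exposes values and page_size setters and that the session implements query_with_params:

    use cdrs::query::QueryParamsBuilder;

    // Bind the values and the page size manually instead of going through the pager.
    let params = QueryParamsBuilder::new()
        .values(cdrs::query_values!("some_key".to_string()))
        .page_size(100)
        .finalize();
    let rows = session
        .query_with_params("SELECT value FROM mycollection WHERE key = ?", params)
        .expect("paged query failed");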

  • Pager supporting QueryValues

    Jun 5, 2020

    @AlexPikalov can you help me with the TODO comment in the test? It fails for a vec when using an IN parameter, but I cannot figure out why. This solves https://github.com/AlexPikalov/cdrs/issues/336.

  • Derive implementations of Debug for various types, and derive IntoRustByIndex for Vec<Timespec>

    Jun 9, 2020

    ^ again, title.

  • How to query with a vec of binary keys?

    Jun 15, 2020

    I have a vec of binary keys (Vec<Vec<u8>>), and I want to provide them as a query value in the following select statement:

    SELECT value FROM mycollection WHERE key IN ?
    

    I think I'm getting tripped up on encoding with query_values! here. I know how to do the single-key case, by wrapping the key in cdrs::types::value::Bytes::new(mykey). But, I don't understand how to do that with a Vec of bytes keys. One issue I've run into is that I can get the conversions to compile, but queries don't work as expected -- that's why I think I don't understand the encoding process.

    Edit:

    Here's what does work, for a single key lookup:

    let key: Vec<u8> = ...;
    let qv = cdrs::query_values!(cdrs::types::value::Bytes::new(key));
    let _ = conn.query_with_values(r#"SELECT value FROM mycollection WHERE key = ?"#, qv);
    

    Here's what I tried for a vec of keys, that compiles but I think produces a wrong encoding:

    let keys: Vec<Vec<u8>> = ...;
    let qv = cdrs::query_values!(keys);
    let _ = conn.query_with_values(r#"SELECT value FROM mycollection WHERE key IN ?"#, qv);
    

    Here's my schema:

    CREATE TABLE IF NOT EXISTS mykeyspace.mycollection (key blob PRIMARY KEY, value blob)
    

    Thanks!

    cc @AlexPikalov
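
    An untested sketch of one thing to try: wrap each key explicitly in Bytes before building the list, so the From<Vec<T>> for Bytes impl serializes the value as a list of blobs rather than relying on the implicit conversion of Vec<Vec<u8>>:

    let keys: Vec<Vec<u8>> = vec![vec![0x01], vec![0x02]];
    // Wrap each key so every element of the list is treated as a single blob.
    let blobs: Vec<cdrs::types::value::Bytes> =
        keys.into_iter().map(cdrs::types::value::Bytes::new).collect();
    let qv = cdrs::query_values!(blobs);
    let _ = conn.query_with_values(r#"SELECT value FROM mycollection WHERE key IN ?"#, qv);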

  • Add blob example

    Mar 1, 2017

    It took me a while to figure out that calling into() on a Vec<u8> in order to convert it into Bytes doesn't work as I expected. impl<T: Into<Bytes> + Clone + Debug> From<Vec<T>> for Bytes exists but I'm not sure what it does exactly.
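
    For reference, the pattern that does work elsewhere in this thread is to construct Bytes explicitly instead of calling into(); a minimal sketch (table, keyspace, and session names are placeholders):

    let key: Vec<u8> = vec![0xde, 0xad];
    let value: Vec<u8> = vec![0xbe, 0xef];
    // Wrap the raw byte vectors explicitly so each is sent as a single blob value.
    let qv = cdrs::query_values!(
        cdrs::types::value::Bytes::new(key),
        cdrs::types::value::Bytes::new(value)
    );
    let _ = session.query_with_values(
        "INSERT INTO mykeyspace.mycollection (key, value) VALUES (?, ?)",
        qv,
    );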

  • updated api to async

    Mar 25, 2020

    This PR updates the whole API to async, based on tokio.

  • Adapt examples to ssl feature

    Feb 5, 2017

    Reported in #61

    When --all-features was used, the ssl transport was provided via the transport_ssl module. However, the examples were not prepared for this, and cargo test --all-features would fail.

    As both transports are mutually exclusive, it seems reasonable to just use the same module name and make life easier for clients.

    • [x] fix existing examples
    • [x] add examples which would use ssl
  • cargo fmt is done

    Feb 8, 2017

    I have run cargo fmt on all the files, and it has been incorporated into Travis as well, so if we push a file with a different style, Travis won't let it build.

    Before pushing, run cargo fmt -- --write-mode=diff.

    If anything shows up on the console, run cargo fmt -- --write-mode=overwrite and cargo test.

    Then push; otherwise the build would fail.

  • Proper error handling in as_rust! and new macro for IntoRustByName

    Mar 3, 2017

    1. "Return" Results from as_rust! instead of using unwrap() and unreachable!.

    2. Implement IntoRustByName<_> for Row using a new macro row_into_rust_by_name!. row_into_rust_by_name! uses as_rust!, and the implementations change a bit, although I think the new behavior is more accurate.

    But it still leaves a problem. get_by_name() returns Option<Result<_>>: None if the column can't be found and Some(Err(_)) if the conversion fails. It also returns a custom error if the value is empty (except for blobs), and it doesn't handle null values explicitly. It would be nice if an Option could be returned that is None in case of a null value. Maybe we could change the return type to Result<Option<_>> and return an error if the column can't be found. I'm not sure what the use case for the current implementation is; is it a common use case to look for a column that might not be there?
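
    A toy illustration (not cdrs code) of the Result<Option<_>> shape proposed above, where Err covers a missing column and Ok(None) covers a NULL value:

    fn get_by_name(row: &[(&str, Option<i64>)], name: &str) -> Result<Option<i64>, String> {
        row.iter()
            .find(|(col, _)| *col == name)
            .map(|(_, value)| *value)
            .ok_or_else(|| format!("column `{}` not found", name))
    }

    fn main() {
        let row = [("id", Some(1_i64)), ("ts", None)];
        assert_eq!(get_by_name(&row, "ts"), Ok(None)); // present but NULL
        assert!(get_by_name(&row, "missing").is_err()); // unknown column
    }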

  • Epoch timestamps are being modified by the driver before being stored in Cassandra DB

    Dec 26, 2018

    For example:

    Suppose there is a table in Cassandra called tableA with two columns: an id column and a timestamp column.

    id is INT, timestamp is DECIMAL.

    When executing these lines of code:

    let update_struct_cql: String = "UPDATE tableA SET timestamp = ? WHERE id = ?;".to_string();

    ddb.query_with_values(update_struct_cql, query_values!(current_time, id))
        .expect("[Err]: Could not update table");

    When I print out the timestamp BEFORE it is used in the above query, it prints something normal, like 1545870134; however, when I check the column value in the Cassandra DB, I see something like:

    6.1805539111220E-825570344

    The question is: what happened to the timestamp?
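
    One possible explanation (an assumption, not confirmed in this thread) is a type mismatch: the i64 epoch value is serialized as a plain bigint, while a DECIMAL column expects a scale-prefixed encoding, so Cassandra reinterprets the raw bytes. A minimal sketch under that assumption, with the column (re)created as BIGINT (tableA, id, and ddb are the names from the report):

    // Assumes the column is defined as `timestamp BIGINT` instead of DECIMAL.
    let current_time: i64 = 1_545_870_134;
    let id: i32 = 1;
    let update_cql = "UPDATE tableA SET timestamp = ? WHERE id = ?;";
    ddb.query_with_values(update_cql, cdrs::query_values!(current_time, id))
        .expect("[Err]: Could not update table");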

    Labels: bug, question