Cassandra.Link

The best knowledge base on Apache Cassandra®

Helping platform leaders, architects, engineers, and operators build scalable real time data platforms.

4/20/2018

Reading time: 1 min

jamesward/koober

by John Doe


README.md

An Uber-like data pipeline sample app, built with Play Framework, Akka Streams, Kafka, Flink, Spark Streaming, and Cassandra.

Start Kafka:

./sbt kafkaServer/run

Web App:

  1. Obtain an API key from mapbox.com
  2. Start the Play web app: MAPBOX_ACCESS_TOKEN=YOUR-MAPBOX-API-KEY ./sbt webapp/run

Try it out:

  1. Open the driver UI: http://localhost:9000/driver
  2. Open the rider UI: http://localhost:9000/rider
  3. In the Rider UI, click on the map to position the rider
  4. In the Driver UI, click on the rider to initiate a pickup

Start Flink:

  1. ./sbt flinkClient/run
  2. Initiate a few pickups and see the average pickup wait time change (in the stdout console for the Flink process)
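The figure the Flink job prints is a running mean of pickup wait times. As a rough, standalone illustration of that computation (plain awk over a fixed sample of wait times in seconds, not the actual Flink code):

```shell
# Illustrative only: running average over a stream of wait times (seconds),
# analogous to what the Flink process reports on stdout.
printf '30\n90\n60\n' | awk '{ sum += $1; n += 1; print "avg after", n, "pickups:", sum / n }'
```

Each new pickup updates the accumulated sum and count, so the average shifts with every event, which is what you see changing in the Flink console.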

Start Cassandra:

./sbt cassandraServer/run

Start the Spark Streaming process:

  1. ./sbt kafkaToCassandra/run
  2. Watch the ride data get micro-batched from Kafka into Cassandra

Set up the PredictionIO pipeline:

  1. Set up PIO

  2. Set the PIO Access Key:

     export PIO_ACCESS_KEY=<YOUR PIO ACCESS KEY>
    
  3. Start the PIO Pipeline:

     ./sbt pioClient/run
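Before starting the pipeline it can help to confirm the key is actually exported; a minimal shell guard that fails fast when it is missing (the placeholder value and the guard itself are illustrative, not part of the repo):

```shell
# Placeholder value for illustration; substitute your real PIO access key.
export PIO_ACCESS_KEY=placeholder-key

# Abort early with a clear message if the key is empty or unset.
: "${PIO_ACCESS_KEY:?PIO_ACCESS_KEY must be set before ./sbt pioClient/run}"
echo "PIO access key is set"
```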
    

Copy demo data into Kafka or PIO:

For fake data, run:

./sbt "demoData/run <kafka|pio> fake <number of records> <number of months> <number of clusters>"

For New York data, run:

./sbt "demoData/run <kafka|pio> ny <number of months> <sample rate>"
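As a concrete example of the fake-data form, with illustrative parameter values (the sink, record count, month span, and cluster count shown here are arbitrary choices, not requirements), this sketch just assembles and prints the command to run from the repo root:

```shell
# Illustrative parameter values for the fake-data loader.
SINK=kafka      # kafka or pio
RECORDS=10000   # number of fake ride records to generate
MONTHS=2        # spread the records across this many months
CLUSTERS=4      # number of geographic pickup clusters
CMD="demoData/run $SINK fake $RECORDS $MONTHS $CLUSTERS"

# Print the full invocation to run from the repo root.
echo "./sbt \"$CMD\""
```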

Start the Demand Dashboard:

PREDICTIONIO_URL=http://asdf.com MAPBOX_ACCESS_TOKEN=YOUR_MAPBOX_TOKEN ./sbt demandDashboard/run
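Since the dashboard reads both environment variables at startup, a quick pre-flight check can catch a missing one before launching; a minimal sketch with placeholder values (the check is an illustration, not part of the repo):

```shell
# Placeholder values; substitute your real PredictionIO URL and Mapbox token.
export PREDICTIONIO_URL=http://your-pio-server
export MAPBOX_ACCESS_TOKEN=your-mapbox-token

# Verify each required variable is exported and non-empty before launching.
for v in PREDICTIONIO_URL MAPBOX_ACCESS_TOKEN; do
  [ -n "$(printenv "$v")" ] || { echo "$v is not set" >&2; exit 1; }
done
echo "environment ready for demandDashboard/run"
```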

