diff --git a/README.markdown b/README.markdown
deleted file mode 100644
index 7e0169e..0000000
--- a/README.markdown
+++ /dev/null
@@ -1,49 +0,0 @@
-This is a Spark/Cassandra demo using the open-source **[Spark Cassandra Connector]**
-
-There are 2 packages with 2 distinct demos
-
-* _us.unemployment.demo_
-
- - Ingestion
-
- - **FromCSVToCassandra**: read US employment data from CSV file into Cassandra
- - **FromCSVCaseClassToCassandra**: read US employment data from CSV file, create case class and insert into Cassandra
-
-
- - Read
-
- - **FromCassandraToRow**: read US employment data from Cassandra into **CassandraRow** low-level object
- - **FromCassandraToCaseClass**: read US employment data from Cassandra into custom Scala case class, leveraging the built-in object mapper
-
-
-
-
-* _weather.data.demo_
-
- - Data preparation
-
- - Go to the folder main/data
- - Execute _$CASSANDRA_HOME/bin/cqlsh -f weather_data_schema.cql_ from this folder. It should create the keyspace **spark_demo** and some tables
- - Download the _Weather_Raw_Data_2014.csv.gz_ from **[here]**
- - Unzip it somewhere on your disk
-
-
- - Ingestion
-
- - **WeatherDataIntoCassandra**: read all the _Weather_Raw_Data_2014.csv_ file (30.106 lines) and insert the data into Cassandra.
- Please do not forget set the path to this file by changing the **WeatherDataIntoCassandra.WEATHER_2014_CSV** value
-
- This step should take a while since there are 30.106 lines to be inserted into Cassandra
-
- - Read
-
- - **WeatherDataFromCassandra**: read all raw weather data plus all weather stations details,
- filter the data by French station and take data only between March and June 2014.
- Then compute average on temperature and pressure
-
- This step should take a while since there are 30.106 lines to be read from Cassandra
-
-
-
-[Spark Cassandra Connector]: https://github.com/datastax/spark-cassandra-connector
-[here]: https://drive.google.com/a/datastax.com/file/d/0B6wR2aj4Cb6wOF95QUZmVTRPR2s/view?usp=sharing
\ No newline at end of file
diff --git a/README.md b/README.md
new file mode 100644
index 0000000..2a3633f
--- /dev/null
+++ b/README.md
@@ -0,0 +1,49 @@
+This is a Spark/Cassandra demo using the open-source **[Spark Cassandra Connector]**.
+
+There are two packages, each containing a distinct demo:
+
+* _us.unemployment.demo_
+
+ - Ingestion
+
+    - **FromCSVToCassandra**: read US unemployment data from a CSV file and insert it into Cassandra
+    - **FromCSVCaseClassToCassandra**: read US unemployment data from a CSV file, map each line to a case class, and insert it into Cassandra (a minimal ingestion sketch follows this list)
+
+
+ - Read
+
+    - **FromCassandraToRow**: read US unemployment data from Cassandra into the low-level **CassandraRow** object
+    - **FromCassandraToCaseClass**: read US unemployment data from Cassandra into a custom Scala case class, leveraging the connector's built-in object mapper (a read sketch also follows this list)
+
+
+
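+A minimal sketch of the ingestion path, in the spirit of **FromCSVCaseClassToCassandra**. The keyspace, table, column names and file path below are placeholders for illustration, not the demo's actual identifiers:
+
+```scala
+import org.apache.spark.{SparkConf, SparkContext}
+import com.datastax.spark.connector._ // adds saveToCassandra to RDDs
+
+// Hypothetical schema: the real table and columns are defined in the demo sources.
+case class UnemploymentStat(year: Int, month: Int, rate: Double)
+
+object IngestionSketch {
+  def main(args: Array[String]): Unit = {
+    val conf = new SparkConf()
+      .setAppName("FromCSVCaseClassToCassandra-sketch")
+      .setMaster("local[2]")
+      .set("spark.cassandra.connection.host", "127.0.0.1")
+    val sc = new SparkContext(conf)
+
+    // Parse each CSV line into a case class; the connector maps its fields to columns.
+    sc.textFile("us_unemployment.csv")                   // placeholder path
+      .map(_.split(","))
+      .map(c => UnemploymentStat(c(0).toInt, c(1).toInt, c(2).toDouble))
+      .saveToCassandra("spark_demo", "us_unemployment")  // placeholder table
+  }
+}
+```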
+
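+The two read variants differ only in the target element type. A spark-shell style sketch, reusing the `sc` and the placeholder `UnemploymentStat` schema from the ingestion sketch above:
+
+```scala
+import com.datastax.spark.connector._ // adds cassandraTable to the SparkContext
+
+// Low-level access: each element is a CassandraRow and columns are fetched by name.
+val rows = sc.cassandraTable("spark_demo", "us_unemployment")
+rows.take(5).foreach(r => println(r.getInt("year") + " -> " + r.getDouble("rate")))
+
+// Typed access: the built-in object mapper binds columns to the case class fields.
+val stats = sc.cassandraTable[UnemploymentStat]("spark_demo", "us_unemployment")
+println(stats.filter(_.rate > 8.0).count())
+```
+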
+* _weather.data.demo_
+
+ - Data preparation
+
+    - Go to the folder main/data
+    - Execute _$CASSANDRA_HOME/bin/cqlsh -f weather_data_schema.cql_ from this folder. It should create the keyspace **spark_demo** and some tables
+    - Download the _Weather_Raw_Data_2014.csv.gz_ from **[here]**
+    - Unzip it somewhere on your disk
+
+
+ - Ingestion
+
+    - **WeatherDataIntoCassandra**: read the entire _Weather_Raw_Data_2014.csv_ file (30.106 lines) and insert the data into Cassandra (a minimal ingestion sketch follows this list).
+      Please do not forget to set the path to this file by changing the **WeatherDataIntoCassandra.WEATHER_2014_CSV** value
+
+ This step should take a while since there are 30.106 lines to be inserted into Cassandra
+
+ - Read
+
+    - **WeatherDataFromCassandra**: read all the raw weather data plus the weather station details,
+      keep only French stations and readings taken between March and June 2014,
+      then compute the average temperature and pressure (a read sketch also follows this list)
+
+ This step should take a while since there are 30.106 lines to be read from Cassandra
+
+
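+A minimal sketch of the weather ingestion, with a hypothetical CSV layout (the real columns come from weather_data_schema.cql and the demo sources):
+
+```scala
+import org.apache.spark.{SparkConf, SparkContext}
+import com.datastax.spark.connector._
+
+// Hypothetical layout of one CSV line; adjust to the real schema.
+case class RawWeatherData(stationId: String, year: Int, month: Int, day: Int,
+                          temperature: Double, pressure: Double)
+
+object WeatherIngestionSketch {
+  // Stand-in for the WeatherDataIntoCassandra.WEATHER_2014_CSV value mentioned above.
+  val WEATHER_2014_CSV = "/path/to/Weather_Raw_Data_2014.csv"
+
+  def main(args: Array[String]): Unit = {
+    val sc = new SparkContext(new SparkConf()
+      .setAppName("WeatherDataIntoCassandra-sketch")
+      .setMaster("local[2]")
+      .set("spark.cassandra.connection.host", "127.0.0.1"))
+
+    sc.textFile(WEATHER_2014_CSV)
+      .map(_.split(","))
+      .map(c => RawWeatherData(c(0), c(1).toInt, c(2).toInt, c(3).toInt, c(4).toDouble, c(5).toDouble))
+      .saveToCassandra("spark_demo", "raw_weather_data") // placeholder table
+  }
+}
+```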
+
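+And a spark-shell style sketch of the read-and-aggregate step, reusing the `sc` and the placeholder `RawWeatherData` schema above; the station table and its columns are likewise hypothetical:
+
+```scala
+import com.datastax.spark.connector._
+
+// Hypothetical station table layout.
+case class WeatherStation(stationId: String, countryCode: String, name: String)
+
+// French stations only, keyed by station id for the join.
+val frenchStations = sc.cassandraTable[WeatherStation]("spark_demo", "weather_station")
+  .filter(_.countryCode == "FR")
+  .map(s => (s.stationId, s))
+
+// Raw readings restricted to March..June 2014, keyed the same way.
+val readings = sc.cassandraTable[RawWeatherData]("spark_demo", "raw_weather_data")
+  .filter(d => d.year == 2014 && d.month >= 3 && d.month <= 6)
+  .map(d => (d.stationId, d))
+
+// Join on station id, then average temperature and pressure over the remaining rows.
+val joined = readings.join(frenchStations).map { case (_, (data, _)) => data }.cache()
+val n = joined.count().toDouble
+println("average temperature = " + joined.map(_.temperature).sum() / n)
+println("average pressure    = " + joined.map(_.pressure).sum() / n)
+```
+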
+[Spark Cassandra Connector]: https://github.com/datastax/spark-cassandra-connector
+[here]: https://drive.google.com/a/datastax.com/file/d/0B6wR2aj4Cb6wOF95QUZmVTRPR2s/view?usp=sharing