diff --git a/.github/images/cmd.gif b/.github/images/cmd.gif
new file mode 100644
index 0000000..3d1cef9
Binary files /dev/null and b/.github/images/cmd.gif differ
diff --git a/.github/images/csv2sql.png b/.github/images/csv2sql.png
new file mode 100644
index 0000000..3383cc0
Binary files /dev/null and b/.github/images/csv2sql.png differ
diff --git a/.github/images/dashboard.gif b/.github/images/dashboard.gif
new file mode 100644
index 0000000..c88db4a
Binary files /dev/null and b/.github/images/dashboard.gif differ
diff --git a/.gitignore b/.gitignore
index bad9a48..d0f59be 100644
--- a/.gitignore
+++ b/.gitignore
@@ -19,20 +19,8 @@ erl_crash.dump
# Also ignore archive artifacts (built via "mix archive.build").
*.ez
-# Ignore package tarball (built via "mix hex.build").
-csv2sql-*.tar
-
# linter
/.elixir_ls/
-# schema file
-schema.sql
-
-# config file
-/config.env
-
# Formatting file
.formatter.exs
-
-# escipt binary
-csv2sql
diff --git a/LICENSE.md b/LICENSE.md
new file mode 100644
index 0000000..63b4b68
--- /dev/null
+++ b/LICENSE.md
@@ -0,0 +1,21 @@
+MIT License
+
+Copyright (c) [year] [fullname]
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
\ No newline at end of file
diff --git a/README.md b/README.md
index 16f5a63..aaa4fd3 100644
--- a/README.md
+++ b/README.md
@@ -1,288 +1,266 @@
-# Csv2Sql
-
+
+
+
+
CSV2SQL - Blazing fast csv to database loader!
-
-
-## What is Csv2Sql ?
+## Table of Contents
+1. [What is Csv2sql?](#what)
+2. [Why Csv2sql?](#why)
+3. [Using from the command line](#cmd)
+    1. [Installation and usage](#cmdinstall)
+    2. [Available command line arguments](#cmdargs)
+    3. [Examples of usage](#cmdexamples)
+4. [Using the browser based interface](#dashboard)
+    1. [Installation and usage](#dashboardinstall)
+5. [Running from source](#sourceinstall)
+6. [Supported data types](#support)
+7. [Known issues, caveats and troubleshooting](#issues)
+8. [Future plans](#future)
+
+*Please take a quick look at the [Known issues, caveats and troubleshooting](#issues) section before using the app.*
-Csv2Sql is a blazing fast fully automated tool to load huge csv files into a mysql database.
+
+## What is Csv2sql?
-
+Csv2Sql is a blazing-fast, fully automated tool to load huge [CSV](https://en.wikipedia.org/wiki/Comma-separated_values) files into an [RDBMS](https://en.wikipedia.org/wiki/Relational_database).
Csv2Sql can automatically...
-
-
* Read csv files and infer the database table structure
-
-
-
-* Create the database and the required tables
-
-
-
+* Create the required tables in the database
* Insert all the csvs into the database
+* Do a simple validation check to ensure that all the data has been imported correctly (see the sketch right after this list)
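+
+To make this concrete, here is a minimal hypothetical run; the file name, credentials and inferred types below are illustrative, and the exact types depend on the data and on flags such as `--varchar-limit`:
+```
+# users.csv (sample input):
+#   id,name,joined_on
+#   1,Alice,2019-01-05
+#   2,Bob,2019-02-11
+#
+# csv2sql would infer a schema along these lines and create the table:
+#   CREATE TABLE users (id INT, name VARCHAR(100), joined_on DATE);
+./csv2sql --source-csv-directory "." --db-connection-string "mysql:root:pass@localhost/test_csv"
+```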
-
-
-* Validate that all the csvs have been correctly imported to the database
-
-
-
-
-## Why Csv2Sql ?
-
-
-
-* Using the power cheap processes in elixir, Csv2Sql does all the tasks
-in parallel whether its infering table schemas from csv files or inserting huge csvs into the database, this makes it super fast and efficient.
-
+
+## Why Csv2sql?
-* When inserting huge files Csv2Sql uses multiple processes which inserts multiple portions of the same file
-into the database parallely thus improving insertion speeds immensely (~35% faster than a script written in other languages)
+* Utilizing the power of modern multi-core processors, csv2sql does most of its tasks in **parallel**, which makes it super fast and more efficient than other tools.
-
-
-* Csv2Sql uses streams, to lazily read huge csv files, thus it has minimal memory footprint
+* It is **completely automatic**: provide a path with hundreds of csvs weighing gigabytes, start the application, and it will handle the rest!
-
+* It comes in **2 flavours**, as a **[command line tool](#cmd)** or a **[browser user interface](#dashboard)**, and is super easy to configure and use.
-* Csv2Sql comes with lots of customizable options which can be changed to fine tune the application based on requirement.
+* While you can push your cpu to maximum utilization for excellent performance, csv2sql is fully **customizable** and comes with [lots of options](#cmdargs) that can be changed to fine-tune the application to your requirements and to reduce resource usage and database load.
-* Csv2Sql supports partial operations, so if you only want to generate a schema file from the csvs without accessing the database or only insert data from the csvs into already created tables without creating the tables again or validate already imported data, everthing is possible with Csv2Sql.
+* Csv2Sql supports **partial operations**: whether you only want to generate a schema file from the csvs without touching the database, insert data from the csvs into already created tables without creating the tables again, or just validate already imported data, Csv2Sql has got you covered!
-* Csv2Sql can be configured to reduce cpu usage at the cost of slower speed or increase speed at the cost of more cpu usage and database load.
-
-
-
-* It is completely automatic, provide a path with lots of csvs and start the application, it will handle the rest
-
-
-## Run from executable escript
-
-Download the [Csv2Sql executable escript](https://github.com/Arpan-Kreeti/Csv2Sql/blob/master/executable/Csv2Sql.zip).
-
-You must have mysql and erlang installed to run Csv2Sql...
-
-
-### You must first install erlang
-
-#### Add the erlang repository using the following commands
+
+## Using from command line
+
+Csv2sql can be easily used as a command line tool, with lots of customizable options passed via command line arguments.
+
+
+
+
+### Installation and usage:
+You must have Erlang installed to use the command line tool on any Linux distribution.
+##### Add the erlang repository using the following commands
```
-
wget https://packages.erlang-solutions.com/erlang-solutions_1.0_all.deb
-
sudo dpkg -i erlang-solutions_1.0_all.deb
-
```
-
-
-#### Install erlang
-
-
+##### Install erlang
```
-
sudo apt-get update
-
sudo apt-get install esl-erlang
-
```
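+You can verify the installation before moving on (one way to do it; `erl +V` prints the BEAM emulator version and exits):
+```
+erl +V
+```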
-
-
-
+Download the executable binary from the latest release in this repository
+and run it using: ```./csv2sql --```
-### Start the app
-Start Csv2Sql by ```./Csv2Sql --```
+The next section describes all the available command line arguments.
-
+
### Using command line args
-
-
You can pass various command line arguments to Csv2Sql to configure how to process csvs and specify other important information.
-Any command line argument if specified will override the corresponding environement varaible.
-
-A description of all the available command line arguments that can be used are given below.
-
-
- | Flag | Description | Default value |
-|:-----------------------:|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------|
-| --schema-file-path | The location were the generated schema file will be stored | If no value is supplied it saves the generated schema file in the same directory as the source csv files specified by "--source-csv-directory" flag |
-| --source-csv-directory | The source directory where the csvs are located | Defaults to the current directory from which the program is run |
-| --imported-csv-directory | The directory were the csvs will be moved after importing to database, make sure it is present and is empty | (source-csv-directory)/imported |
-| --validated-csv-directory | The directory were the csvs will be moved after they are validated, make sure it is present and is empty | (source-csv-directory)/validated |
-| --skip-make-schema | Skip infering schema and makign a schema file | false |
-| --skip-insert-schema | Skip inserting the infered schema in the database. Usefull if the table structures are already present and you only wish to insert data from the csv files.(This will be true automatically if skip-make-schema is used) | false |
-| --skip-insert-data | Skip inserting data from the csvs | false |
-| --skip-validate-import | Skip validating the imported data | false in | None, this is compulsary if the operations specified requires database access |
-| --connection-socket | The mysql socket file path | /var/run/mysqld/mysqld.sock |
-| --varchar-limit | The value of varchar type, and the limit after which a string is considered a text and not a varchar | 100 |
-| --schema-infer-chunk-size | The chunk size to use when the schema fora CSV will be inferred parallelly. For example, a chunk size 100 means the CSV will be read 100 rows at a time and separate processes will be used to infer the schema for each 100-row chunk | 100 |
-| --worker-count | The number of workers, directly related to how many CSVs will be processed parallelly | 10 |
-| --db-worker-count | The number of database workers, lowering the value will lead to slow performance but lesser load on database, a higher value can lead to too many database connection errors. | 15 |
-| --insertion-chunk-size | Number of records to insert into the database at once, increasing this may result in mysql error for too many placeholders | 100 |
-| --job-count-limit | Number of chunks to keep in memory (Memory required=insertion_chunk_size * job_count_limit) | 10 |
-| --log | Enable ecto logs, to log the queries being executed, possible values are :debug, :info, :warn | false |
-| --timeout | The time in milliseconds to wait for the query call to finish | 60000 |
-| --connect-timeout | The number of seconds that the mysqld server waits for a connect packet before responding with Bad handshake | 60000 |
-| --pool-size | The pool_size controls how many connections you want to the database. | 20 |
-| --queue-target | The time to wait for a database connection | 5000 |
-| --queue-interval | If all connections checked out during a :queue_interval takes more than :queue_target, then we double the :queue_target. | 1000 |
-
-
-## Examples:
-#### Load csvs to database, this will infer the schema, insert the infered schemas to the database, insert the data and then validate data for all the csvs
-
-`./csv2sql --source-csv-directory "/home/user/Desktop/csvs" --db-connection-string "root:mysql@localhost/test_csv"`
-
-#### Import schema only:
+A description of all the available command line arguments is given below:
+
+
+| Flag | Description | Default value |
+|:-----------:|----------------------|------|
+| \-\-schema-file-path | The location where the generated schema file will be stored | If no value is supplied, the generated schema file is saved in the same directory as the source csv files specified by the "\-\-source-csv-directory" flag |
+| **\-\-source-csv-directory** | **The source directory where the csvs are located** | **Defaults to the current directory from which the program is run** |
+| **\-\-db-connection-string** | **A connection string to connect to the database, in the format `<db-type>:<username>:<password>@<host>/<database-name>`** | **This is a compulsory argument if database access is required** |
+| \-\-imported-csv-directory | The directory where the csvs will be moved after being imported to the database, make sure it is present and is empty | (source-csv-directory)/imported |
+| \-\-validated-csv-directory | The directory where the csvs will be moved after they are validated, make sure it is present and is empty | (source-csv-directory)/validated |
+| \-\-skip-make-schema | Skip inferring the schema and making a schema file | false |
+| \-\-skip-insert-schema | Skip inserting the inferred schema into the database. Useful if the table structures are already present and you only wish to insert data from the csv files. (This will be set automatically if skip-make-schema is used) | false |
+| \-\-skip-insert-data | Skip inserting data from the csvs | false |
+| \-\-skip-validate-import | Skip validating the imported data | false |
+| \-\-connection-socket | The mysql socket file path | /var/run/mysqld/mysqld.sock |
+| \-\-varchar-limit | The size of the varchar type, and the limit after which a string is considered a text and not a varchar | 100 |
+| \-\-schema-infer-chunk-size | The chunk size to use when the schema for a CSV is inferred in parallel. For example, a chunk size of 100 means the CSV will be read 100 rows at a time and separate processes will be used to infer the schema for each 100-row chunk | 100 |
+| \-\-worker-count | The number of workers, directly related to how many CSVs will be processed in parallel | 10 |
+| \-\-db-worker-count | The number of database workers; lowering the value will lead to slower performance but less load on the database, while a higher value can lead to too many database connection errors | 15 |
+| \-\-insertion-chunk-size | Number of records to insert into the database at once; increasing this may result in a mysql error for too many placeholders | 100 |
+| \-\-job-count-limit | Number of chunks to keep in memory (memory required = insertion_chunk_size * job_count_limit records) | 10 |
+| \-\-log | Enable ecto logs, to log the queries being executed; possible values are :debug, :info, :warn | false |
+| \-\-timeout | The time in milliseconds to wait for a query call to finish | 60000 |
+| \-\-connect-timeout | The time in milliseconds that the mysqld server waits for a connect packet before responding with "Bad handshake" | 60000 |
+| \-\-pool-size | Controls how many connections you want to the database | 20 |
+| \-\-queue-target | The time in milliseconds to wait for a database connection | 5000 |
+| \-\-queue-interval | If all connections checked out during a :queue_interval take longer than :queue_target, the :queue_target is doubled | 1000 |
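+
+As a rough worked example of the memory note above (record counts only; actual memory also depends on row width): with the defaults, at most insertion_chunk_size * job_count_limit = 100 * 10 = 1000 records are buffered in memory at a time.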
+
+
+### Examples:
+
+##### Load csvs to the database; this will infer the schema, insert the inferred schemas into the database, insert the data and then validate the data for all the csvs
+
+`./csv2sql --source-csv-directory "/home/user/Desktop/csvs" --db-connection-string "mysql:root:pass@localhost/test_csv"`
+
+Here "mysq|" is the database type, "root" is the mysql username, "pass" is the mysql password, "localhost" is the database host and "test_csv" is the database name where the data will be imported.
+
+---
+##### Import schema only:
`./csv2sql --source-csv-directory "/home/user/Desktop/csvs" --skip-insert-schema --skip-insert-data --skip-validate-import`
+---
+##### Skip validation:
-#### Skip validation:
-
-`./csv2sql --source-csv-directory "/home/user/Desktop/csvs" --db-connection-string "root:mysql@localhost/test_csv" --skip-validate-import`
-
- #### Only validate imported csv
- `./csv2sql --skip-make-schema --skip-insert-data --imported-csv-directory "/home/user/Desktop/imported-csvs" --db-connection-string "root:mysql@localhost/test_csv"`
-
-
-#### Custom path for imported and validated csv files:
-
-`./csv2sql --source-csv-directory "/home/user/Desktop/csvs" --imported-csv-directory "/home/user/Desktop/imported_csvs" --validated-csv-directory "/home/user/Desktop/validated_csvs" --db-connection-string "root:mysql@localhost/test_csv"`
-
-
-
-
-#### Only infer and create schema but don't insert data:
-
-`./csv2sql --source-csv-directory "/home/user/Desktop/csvs" --skip-insert-data --db-connection-string "root:mysql@localhost/test_csv"`
-
-
-
-
-#### Change the worker count, setting this to one will lead to processing a single csv at a time, this will be slower but will lead to lower cpu usage:
+`./csv2sql --source-csv-directory "/home/user/Desktop/csvs" --db-connection-string "postgres:root:pass@localhost/test_csv" --skip-validate-import`
-`./csv2sql --source-csv-directory "/home/user/Desktop/csvs" --db-worker-count 1 --db-connection-string "root:mysql@localhost/test_csv"`
+Here "postgres" is the database type.
+---
+##### Only validate imported csv:
+ `./csv2sql --skip-make-schema --skip-insert-data --imported-csv-directory "/home/user/Desktop/imported-csvs" --db-connection-string "mysql:root:pass@localhost/test_csv"`
+Here we are running a simple validation check over previously imported csvs; this check will NOT compare the actual data but will only compare the row count in the csv and in the database.
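+
+To reproduce that check by hand, a rough equivalent looks like this (the table and database names are illustrative, and note that `wc -l` also counts the header line):
+```
+wc -l < /home/user/Desktop/imported-csvs/users.csv   # data rows + 1 header line
+mysql -u root -p -e "SELECT COUNT(*) FROM test_csv.users;"
+```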
-#### Enable logs, to log the queries being executed:
+---
+##### Custom path for imported and validated csv files:
-`./csv2sql --source-csv-directory "/home/user/Desktop/csvs" --log debug --db-connection-string "root:mysql@localhost/test_csv"`
+`./csv2sql --source-csv-directory "/home/user/Desktop/csvs" --imported-csv-directory "/home/user/Desktop/imported_csvs" --validated-csv-directory "/home/user/Desktop/validated_csvs" --db-connection-string "postgres:root:pass@localhost/test_csv"`
-
-
+---
+##### Only infer and create schema but don't insert data:
-#### Set the number of workers inserting data into the database, lowering the value will lead to slow performance but lesser load on database, a higher value can lead to too many database connection errors:
+`./csv2sql --source-csv-directory "/home/user/Desktop/csvs" --skip-insert-data --db-connection-string "postgres:root:pass@localhost/test_csv"`
-`./csv2sql --source-csv-directory "/home/user/Desktop/csvs" --db-worker-count 2 --db-connection-string "root:mysql@localhost/test_csv"`
+This will create empty tables in the database after analyzing the csvs.
-
-
+---
+##### Change the worker count; setting this to one will process a single csv at a time, which is slower but lowers cpu usage and database load:
-### Run using configuration files
+`./csv2sql --source-csv-directory "/home/user/Desktop/csvs" --worker-count 1 --db-connection-string "mysql:root:pass@localhost/test_csv"`
- You can also use environment varaibles to to specify various arguments to csv2sql if the number of arguments is very large and difficult to specify as command line arguments.
+---
+##### Enable logs, to log the queries being executed:
-cd into the directry with the configuration file and Csv2Sql executable
+`./csv2sql --source-csv-directory "/home/user/Desktop/csvs" --log debug --db-connection-string "mysql:root:pass@localhost/test_csv"`
-
+---
+##### Set the number of workers inserting data into the database, lowering the value will lead to slow performance but lesser load on database, a higher value can lead to too many database connection errors:
-Edit the ```config.env``` file according to your requirments.
+`./csv2sql --source-csv-directory "/home/user/Desktop/csvs" --db-worker-count 2 --db-connection-string "mysql:root:pass@localhost/test_csv"`
-
+
+## Using csv2sql from your browser
-### Load configurations
+For ease of use, csv2sql also has a browser interface which makes it easy to configure the tool; it also provides an excellent dashboard that shows the progress of the various running tasks, which files are currently being processed, the current cpu and memory usage, etc.
-
+