Categories: How To, Implementation, Qualitative Data | Sun, 07 Jun 2020 13:30:00 GMT
At the beginning of the month we were accepted and paired with a talented Software Developer, Ria Gupta, to implement the Social Currency Metrics System in GrimoireLab for Google Summer of Code.
Last week was Ria’s first week, and in this blog series we are cross-posting Ria’s personal blog detailing every step of the process!
Series Blog Contents:
Check here for all published blogs.
Announcement: Ria’s Journey begins!
Week 1: Ria’s 1st week
Week 2: The SCMS in Airtable
Week 3: Preparations & Superheroes
Week 4: Putting Code to Pixel
Week 5: The SCMS Data’s Alive!
Week 6: Airtable to Google Sheets
Week 7: Our 1st Visualizations
This blog has been cross-posted from Ria’s blog with her permission…
The social bonding period continues
We largely discussed the past two weeks’ progress and dug deeper into the SCMS (Social Currency Metrics System). It was similar to a training on the importance of qualitative data alongside quantitative data. The main agenda was “Why is qualitative data rejected in business, and how does reframing its collection using the SCMS make it useful?” It was a very informative presentation delivered by Samantha and Dylan.
This was the first session of a training series that includes six sessions in total. Analysing trends in qualitative data helps build context, unlike quantitative data, which isolates trends. To understand this better and get first-hand experience, I’ll be implementing a personal SCMS this week.
What I did this past week
- I implemented a working SCMS on Airtable by collecting tweets about Amazon. For the initial setup, I used a small database of around 10–15 records. You can find it here. This involved defining a communication trace (I selected Twitter, though it can be extended to include more platforms), defining a meaningful codex, and tagging the data on the basis of Utility, Trust, Transparency, Consistency, and Merit.
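The tagging itself happens by hand in Airtable, but the shape of the resulting data can be sketched in a few lines. The records and tags below are hypothetical examples, not rows from the actual base; the sketch just shows how tagged records could be tallied per codex category.

```python
from collections import Counter

# SCMS codex categories used to tag each record (from the post above).
CODEX = ("Utility", "Trust", "Transparency", "Consistency", "Merit")

# Hypothetical tagged records, mimicking rows in the Airtable base:
# each tweet carries the codex tags a reviewer assigned to it.
records = [
    {"text": "Package arrived two days early, great service!",
     "tags": ["Utility", "Consistency"]},
    {"text": "Support never explained why my refund was denied.",
     "tags": ["Transparency", "Trust"]},
    {"text": "The product matched the listing exactly.",
     "tags": ["Merit", "Trust"]},
]

def codex_counts(rows):
    """Count how often each codex category was applied across all records."""
    counts = Counter(tag for row in rows for tag in row["tags"] if tag in CODEX)
    # Ensure every category appears, even with zero mentions.
    return {category: counts.get(category, 0) for category in CODEX}

print(codex_counts(records))
# → {'Utility': 1, 'Trust': 2, 'Transparency': 1, 'Consistency': 1, 'Merit': 1}
```

Counts like these are the first step from raw tagged records toward the metrics the SCMS reports on.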
- I did a pilot study towards building an implementation sketch. The repository can be seen here. For this, I created a new enricher for mbox (ScmsMboxEnricher) and renamed the data attributes, e.g. Subject_Analysed -> Scms_Subject_Analysed and Body_Extract -> Scms_Body_Extract. I created a new pipermail enricher inheriting from the SCMS mbox enricher, and removed all data except the handful of attributes mentioned: uuid, project, project_1, grimoire_creation_date, origin, Subject_analysed, and Body_extract. I then executed micro-mordred to collect and enrich data from mbox, dumped the enriched data to an ElasticSearch index, and wrote a script, ES2Excel, which places each attribute of the received data in a separate column and outputs a CSV file.
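The flattening step that ES2Excel performs can be sketched with the standard library alone. The real script queries the enriched ElasticSearch index first; this sketch skips the query and starts from hits that are assumed to be already fetched, with illustrative field values.

```python
import csv
import io

# Attributes kept by the pilot enricher (from the list above).
FIELDS = ["uuid", "project", "project_1", "grimoire_creation_date",
          "origin", "Subject_analysed", "Body_extract"]

# Hypothetical documents as they might come back from the enriched
# index (the real ES2Excel script would fetch these with a query).
hits = [
    {"uuid": "a1", "project": "grimoirelab", "project_1": "main",
     "grimoire_creation_date": "2020-06-01", "origin": "mbox",
     "Subject_analysed": "Re: release plan",
     "Body_extract": "Discussing the next release..."},
]

def es_hits_to_csv(docs, fields=FIELDS):
    """Place each kept attribute in its own column, one row per document."""
    buffer = io.StringIO()
    writer = csv.DictWriter(buffer, fieldnames=fields, extrasaction="ignore")
    writer.writeheader()
    for doc in docs:
        writer.writerow({f: doc.get(f, "") for f in fields})
    return buffer.getvalue()

print(es_hits_to_csv(hits))
```

Writing each attribute to its own column keeps the CSV importable into a spreadsheet, which is where the SCMS tagging happens next.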
- I came to understand the interaction between Perceval, ELK, and Kidash via terminal commands. I explored p2o.py, which can be used to enrich the extracted data; it produces two data sets, a raw index and an enriched index. I used Kidash to make a dashboard of the data present at localhost:9200. p2o was used before micro-mordred and is now decommissioned. I also gained a basic understanding of raw data versus enriched index data.
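The raw/enriched distinction can be illustrated with a toy example. This is a deliberately simplified, hypothetical sketch of the idea: the raw index keeps the item as fetched from the data source nested under a payload field, while the enriched index holds a flat, analysis-ready document. Field names and the enrichment logic here are illustrative, not GrimoireLab's actual schema.

```python
# Hypothetical, simplified shape of a raw-index document: the original
# payload from the data source sits untouched under "data".
raw_doc = {
    "backend_name": "MBox",
    "origin": "mbox",
    "data": {
        "Subject": "Re: release plan",
        "From": "dev@example.org",
    },
}

def enrich(raw):
    """Toy enrichment step: flatten the raw payload into top-level fields."""
    return {
        "origin": raw["origin"],
        "Subject_analysed": raw["data"]["Subject"].lower(),
        "author": raw["data"]["From"],
    }

print(enrich(raw_doc))
# → {'origin': 'mbox', 'Subject_analysed': 're: release plan',
#    'author': 'dev@example.org'}
```

Dashboards are built on the enriched index, since its flat fields are what visualization tools can aggregate on directly.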