ETL to QE, Update 24, Roadmap Revisited with Memes
Original Roadmap
- Discord Analytics Reports and Dashboard
- Graph Based Annotation on Top of Discord Data
- Allow for Generalized Questioning and Add Additional Data Sources
- Proof of Meme Micro Bounty Platform
Updated Roadmap
TL;DR: Memes, Schema, Tokens, Merkle Trees
- Memes, CGFS Meme Model
  - Description: Come up with a message format that existing social media can be transformed into, including encryption and signing of messages in this format (a signing sketch follows this roadmap)
  - Define and update self-referencing systems of memes, known as ontologies, for purposes of tagging data.
  - Composable Message Standard
    - Based on Research - Format of messages from different messaging apps
    - Must be able to index into and from Obsidian and TiddlyWiki
    - Must be able to index into and from Raindrop.io
    - Must be able to index into and from Hypothes.is
    - Must be able to index from ActivityWatch
    - Must be able to index from Git
    - Must be able to index from social media including Keybase, Discord, Twitter, Facebook, Signal, etc.
    - Must be able to index emails
  - Synthesis of messages standard
    - Memes must be able to integrate and link with one another
  - Extendable cryptographic identity standard
    - DID standard for existing social media accounts such as Discord, LinkedIn, Facebook, etc.
- Schema, CGFS Persona Schema
  - Description: Come up with a generated schema for social media that we can reindex existing social media into
  - Develop user journeys
  - Come up with a simple-to-use interface for contextualizing all the memes one has
  - Context
    - QE is supposed to be modular and composable like Obsidian, allowing users to develop their own social media schema or adopt whichever one they see fit.
    - Using QE, everyone you communicate with either has to tell you their name, you have to ask them their name, or you have to assign a name to them.
- Tokens, QE - Token Specification
  - Description: Come up with a signature chain proof of concept for tokens that individuals can issue (a chain sketch also follows this roadmap)
  - Research and review existing token standards
    - See question-engine/backend/transactions for a reference design that needs to be reviewed and re-implemented using DAG-JSON
- Merkle Trees, Proof of Meme, QE - Proof of Meme
  - Description: Come up with Merkle tree and data availability mechanisms to store proofs of memes on the blockchain.
  - Research and review
    - Compare existing Merkle proof libraries
    - On Chain Merkle Proofs
    - Validate usability of my Eth Waterloo 2023 Project
    - User Journey Validation
  - Write a library (sketched below) that can
    - Create DAG-JSON Merkle trees
    - Store, back up, and share raw data from Merkle trees
    - Generate Merkle proofs
    - Validate Merkle proofs
    - Publish Merkle roots to various blockchains
    - Read Merkle proofs from various blockchains
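To make the message-format item concrete, here is a minimal sketch of what a signable meme envelope could look like. Everything in it is an assumption to react to rather than the CGFS Meme Model itself: the field names are placeholders, the canonical encoding is a hand-rolled stand-in for DAG-JSON, signing uses Node's built-in Ed25519 support, and encryption is left out entirely.

```typescript
// meme.ts — hypothetical signable meme envelope; field names are placeholders.
import { generateKeyPairSync, sign, verify } from "node:crypto";

interface Meme {
  schema: string;   // e.g. "cgfs/meme/0.1" — hypothetical version tag
  author: string;   // DID or public key of the author
  created: string;  // ISO 8601 timestamp
  source: string;   // originating platform: "discord", "email", ...
  body: unknown;    // the platform-specific message mapped into this envelope
  refs: string[];   // hashes of other memes this one links to
}

interface SignedMeme extends Meme {
  sig: string;      // base64 Ed25519 signature over the canonical encoding
}

// Deterministic encoding: recursively sort object keys. A real implementation
// would use DAG-JSON so hashes line up with the Merkle tree work below.
function canonical(value: unknown): string {
  if (Array.isArray(value)) return `[${value.map(canonical).join(",")}]`;
  if (value !== null && typeof value === "object") {
    const entries = Object.keys(value as object).sort()
      .map((k) => `${JSON.stringify(k)}:${canonical((value as any)[k])}`);
    return `{${entries.join(",")}}`;
  }
  return JSON.stringify(value);
}

const { publicKey, privateKey } = generateKeyPairSync("ed25519");

function signMeme(meme: Meme): SignedMeme {
  const sig = sign(null, Buffer.from(canonical(meme)), privateKey).toString("base64");
  return { ...meme, sig };
}

function verifyMeme(m: SignedMeme): boolean {
  const { sig, ...meme } = m;
  return verify(null, Buffer.from(canonical(meme)), publicKey, Buffer.from(sig, "base64"));
}
```

An indexer would transform, say, a Discord message into this envelope, sign it with the importer's key, and only then tag it against an ontology.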
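The token item can start just as small: a signature chain where each issued event commits to the hash of the previous signed event, so the whole issuance history can be replayed and checked. Again the field names are made up, and plain JSON with fixed key order stands in for DAG-JSON; the reference design in question-engine/backend/transactions is still what needs the actual review.

```typescript
// tokens.ts — hypothetical signature chain for individually issued tokens.
import { createHash, generateKeyPairSync, sign, verify } from "node:crypto";

interface TokenEvent { prev: string | null; payload: unknown } // prev = sha256 of prior signed event
interface SignedEvent extends TokenEvent { sig: string }

const { publicKey, privateKey } = generateKeyPairSync("ed25519");

// Objects are built with fixed key order, so JSON.stringify is deterministic
// here; a real implementation would encode with DAG-JSON instead.
const enc = (o: unknown) => Buffer.from(JSON.stringify(o));
const sha256 = (b: Buffer) => createHash("sha256").update(b).digest("hex");

// Append an event, e.g. { action: "issue", amount: 10, to: "did:..." }.
function append(chain: SignedEvent[], payload: unknown): SignedEvent[] {
  const prev = chain.length ? sha256(enc(chain[chain.length - 1])) : null;
  const body: TokenEvent = { prev, payload };
  const sig = sign(null, enc(body), privateKey).toString("base64");
  return [...chain, { ...body, sig }];
}

// Replay the chain: every link must point at its predecessor and carry a valid signature.
function validate(chain: SignedEvent[]): boolean {
  return chain.every((e, i) => {
    const { sig, ...body } = e;
    const prevOk = i === 0 ? e.prev === null : e.prev === sha256(enc(chain[i - 1]));
    return prevOk && verify(null, enc(body), publicKey, Buffer.from(sig, "base64"));
  });
}
```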
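And for the Merkle tree library itself, the core is small enough to sketch end to end: build a tree over encoded memes, hand out sibling-path proofs, and verify them against a root. Publishing roots to blockchains and the store/backup/share pieces are deliberately out of scope here, and the hashing choices (SHA-256, duplicating the last node on odd-sized levels) are assumptions rather than decisions.

```typescript
// merkle.ts — sketch of the proof-of-meme library core.
import { createHash } from "node:crypto";

const sha256 = (b: Buffer) => createHash("sha256").update(b).digest();

// Build the tree bottom-up. levels[0] holds the hashed leaves; the last level is [root].
function buildTree(leaves: Buffer[]): Buffer[][] {
  const levels: Buffer[][] = [leaves.map(sha256)];
  while (levels[levels.length - 1].length > 1) {
    const prev = levels[levels.length - 1];
    const next: Buffer[] = [];
    for (let i = 0; i < prev.length; i += 2) {
      const right = prev[i + 1] ?? prev[i]; // duplicate the last node on odd-sized levels
      next.push(sha256(Buffer.concat([prev[i], right])));
    }
    levels.push(next);
  }
  return levels;
}

// A proof is the sibling hash plus which side it sits on, one entry per level.
type Proof = { sibling: Buffer; side: "left" | "right" }[];

function getProof(levels: Buffer[][], index: number): Proof {
  const proof: Proof = [];
  for (let lvl = 0; lvl < levels.length - 1; lvl++) {
    const nodes = levels[lvl];
    const sibling = nodes[index % 2 === 0 ? index + 1 : index - 1] ?? nodes[index];
    proof.push({ sibling, side: index % 2 === 0 ? "right" : "left" });
    index = Math.floor(index / 2);
  }
  return proof;
}

// Recompute the path from a raw leaf up to the root; true iff the leaf is in the tree.
function verifyProof(leaf: Buffer, proof: Proof, root: Buffer): boolean {
  let hash = sha256(leaf);
  for (const { sibling, side } of proof) {
    hash = side === "left"
      ? sha256(Buffer.concat([sibling, hash]))
      : sha256(Buffer.concat([hash, sibling]));
  }
  return hash.equals(root);
}

// Usage: leaves would be DAG-JSON bytes of signed memes; the root is what goes on-chain.
const leaves = ["a", "b", "c"].map((s) => Buffer.from(s));
const levels = buildTree(leaves);
const root = levels[levels.length - 1][0];
console.log(verifyProof(leaves[1], getProof(levels, 1), root)); // true
```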
Reflection + Rant
Intelligence is just breaking down problems into doable chunks. I have completed the first step of the Original Roadmap via my Discord Analytics Reports and Dashboard. Now I am on to the second phase, Graph Based Annotation on Top of Discord Data.
I have been twiddling my thumbs for months waiting for the right strike of inspiration to proceed to phase two. My initial plan was to implement a user model by just writing some SQL or using an ORM like Prisma or SQLAlchemy, have users create an account using OAuth, email, or MetaMask, then add the features for the user journeys I have outlined in Epic User Journeys. The first user journey would be adding tags, followed by features for rankings, comments, and links out to the web such as Wikipedia and LinkedIn.
I was unable to break down the task. Here are some thoughts that were going through my head. I attended a hackathon six months back and getting email set up was a bitch, whether using Gmail, Microsoft, or ProtonMail. Then you sign up for an email SaaS offering and they have some convoluted API. Then there was the OAuth option; having to manage a Google API key just goes against my entire ethos of Self Hosting. Once I see that a self-hosted app requires an API key for some service of some kind, I suddenly don't want to deploy it. Having a separate database? Sure, no problem, just update the docker-compose. Want to be fancy and use object storage? Sure, just add MinIO to the docker-compose. But setting up a domain name with TLS, then getting whatever OAuth API key you need... pardon my French, but fuck that. I understand that if this project gets users and possibly funding, email and OAuth may need to be implemented, but what if there were a way to build up from a more fundamental authentication model?
Well, the authentication and user models are inherently linked. I am afraid of committing to some custom user model in an ORM or SQL because of my inability to easily understand the user models used in my favorite open source projects. I don't understand what Django is doing under the hood, Jellyfin does who knows what, I check out Immich and I get this monstrosity, and wikijs has this monstrosity. Actually, let's create a list of these.
- Immich = this
- wikijs = this
- Lemmy = this
- Misskey = this
- mediagoblin = this
- Plausible = this
- ArchiveBox = this
- Pihole = this
- Nextcloud = this
- Home Assistant = this
- logseq = this
- Synapse - Matrix = this though they do make it really simple here
- Mastodon = this
- AdGuard = Uses System Logs not SQL
- For more check out Research - Schema Comparisons
Wouldn't it be cool if all these systems were built in raw SQL and used PRQL? Actually, there might be something to this.
- People do not use PRQL because it is not database agnostic the way ORMs are
  - I bet that could be figured out
- Whatever custom solution gets built on top of PRQL will have to be written in some programming language that not everyone is going to agree on
  - This is yet another case of the Standards XKCD
  - The ORM/SQL library space does not have standards; every programming language has multiple ways of interacting with databases
- PRQL is just supposed to be used for migrations and schema setup; the actual application can connect to the database however it wants.
  - Actually, can PRQL reverse engineer a schema migration by looking at two separate schema dumps?
- If you are supporting multiple database backends, you either need to stay within the requirements of all three or specialize, as wikijs 3.0 will by supporting only Postgres
  - DuckDB has connectors to all three databases, which I hope would make things simpler (see the sketch after this list).
  - Is JSON support different between DuckDB and SQLite?
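To sanity-check the DuckDB idea, here is roughly what fronting SQLite and Postgres through a single DuckDB connection looks like. This is a sketch, not a verified setup: it assumes the duckdb Node bindings plus DuckDB's sqlite and postgres extensions, and the file path, connection string, and table names are all placeholders.

```typescript
// duck_attach.ts — probing whether one DuckDB connection can front both backends.
import duckdb from "duckdb";

const db = new duckdb.Database(":memory:");

// Load the scanner extensions, then attach a SQLite file and a Postgres database.
db.exec(
  `INSTALL sqlite; LOAD sqlite;
   INSTALL postgres; LOAD postgres;
   ATTACH 'qe.db' AS sq (TYPE sqlite);
   ATTACH 'dbname=qe host=localhost' AS pg (TYPE postgres);`,
  (err) => {
    if (err) throw err;
    // One query spanning both attached databases.
    db.all(
      "SELECT * FROM sq.messages m JOIN pg.users u ON u.id = m.user_id LIMIT 5",
      (err, rows) => {
        if (err) throw err;
        console.log(rows);
      }
    );
  }
);
```

If this works, the JSON-support question becomes something you can test side by side instead of reading docs about.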
Well, it now seems like I have a bunch of research topics rather than features to build. The topics are as follows:
- What jsonschema format can be used for a meme data type that is compatible with most existing social media and extendable? (A first guess is sketched below.)
- How do the user models of different open source projects differ?
- What are the best browser extensions and/or wallets for signing data?
Three questions, that's good enough; any more and the task of sorting becomes convoluted. You can now answer each of these before having to worry about anything else.
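To get the first question moving, here is a first-guess JSON Schema for the meme data type, kept loose enough to wrap most platforms' messages. Every property name and the required set are assumptions to argue with, not the CGFS Meme Model:

```typescript
// memeSchema.ts — hypothetical JSON Schema for the meme data type.
const memeSchema = {
  $schema: "https://json-schema.org/draft/2020-12/schema",
  $id: "cgfs/meme/0.1", // placeholder identifier
  type: "object",
  required: ["author", "created", "source", "body"],
  properties: {
    author:  { type: "string", description: "DID or public key of the author" },
    created: { type: "string", format: "date-time" },
    source:  { type: "string", description: "originating platform, e.g. discord, email, twitter" },
    body:    { description: "the platform-specific message, any shape" },
    refs:    { type: "array", items: { type: "string" }, description: "hashes of linked memes" },
    tags:    { type: "array", items: { type: "string" }, description: "ontology terms" },
    sig:     { type: "string", description: "signature over the canonical encoding" },
  },
  // Extendability requirement: platforms can add fields without breaking validation.
  additionalProperties: true,
} as const;

export default memeSchema;
```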