
Congratulations Humanity, you have reached SINGULARITY



  • harveybc0harveybc0 Member Posts: 29
    edited May 2014
The n-blockchain has to have the functionality to store the sequence of changes of the expert's genome. For this, it will be a chain of blocks containing the Turing tape (file) that describes the genome, with each block chained to the previous one. Could the normal Bitcoin block be made a little longer to include a field for this kind of history, like storage for the changes in this field across blocks? Or could it be an intermediate block between two normal Bitcoin blocks in a blockchain? Do you know any approach to do this, or should it be done from scratch?
  • harveybc0harveybc0 Member Posts: 29
And finally, the n-blocks should also contain the taxonomic position of the experts (another Turing segment); this would allow the implementation of the n-coin on top of a normal altcoin codebase like Gridcoin. What do you think?
  • harveybc0harveybc0 Member Posts: 29
    edited May 2014
The length of the taxonomic position is the main factor in the difficulty (proportional to Kolmogorov complexity). Another factor is the expert's last efficiency-increment/CPU-usage ratio, which is also part of the n-block; maybe it is reducible to the efficiency increment/CPU alone?
  • harveybc0harveybc0 Member Posts: 29
I would love to see some experts used (trained/evaluated) in Elder Scrolls Online... that is a dream I have, hehe.
  • StephanTualStephanTual London, EnglandMember, Moderator Posts: 1,282 mod
@ranford‌ - I can see one big advantage to using Ethereum to build an alternative to Gridcoin. My understanding is that Gridcoin somehow requires the user to run BOINC while mining Gridcoin.

An alternative on top of Ethereum wouldn't require the user to mine Ethereum; instead, 100% of the CPU power would go to the BOINC client. When work is done on BOINC, the patched BOINC client notifies a smart contract, and the meta-currency is arbitrarily assigned.
  • harveybc0harveybc0 Member Posts: 29
    edited May 2014
@Stephan_Tual that is very good, but it would be better if you could use your own configurable miners (per app-coin) with any kind of process that notifies when work is done and how much work was done, and assigns the meta-currency like the BOINC client you mentioned.

That would open up a whole world of applications for Ethereum thanks to the configurable mining process.

But the problem is that the distributed processing would require an entirely new protocol for the mining communications, like the proposed n-blockchain, because BOINC is not P2P. A new BOINC network for Ethereum miners (to use the coin mining as a CPU pool) would require BOINC servers and clients; that is the reason I can't use it as the expert-evaluating platform for Singularity: it must be done at the app level in a P2P way.

  • harveybc0harveybc0 Member Posts: 29
    edited May 2014
@Stephan_Tual, the proposed protocol modification is to add an intermediate block to the Ethereum blockchain.

1. This block is generated by the miners, while processing something, when a work unit is done.
2. Meta-currency, additional to that normally generated with ~1% CPU mining, is arbitrarily assigned to the miners via smart contract when the work is done.
3. If no work was done, as in a normal Ethereum miner, a void block is generated as the intermediate block.
4. Normal Ethereum blockchain generation and transactions remain independent, skipping this intermediate block entirely.

The new block requires an encryption approach yet to be defined, but this is the basic design of the new protocol. What do you think?

  • harveybc0harveybc0 Member Posts: 29
    edited May 2014
I found a problem: the blockchain gets too long over time, and it is difficult to search in it if done as proposed.

    There is no need to modify the Ethereum blockchain, because the individual blockchains required for each category are shared resources for the miners.

This way it is much easier to search the categories, as the name of each shared blockchain can be the coordinate of the category.

Normal Ethereum mining is used at ~1% CPU, and the remaining ~99% is used to neuroevolve the experts, with a smart contract arbitrarily assigning the meta-currency for the work done.

The mining process works like this:

1. When a miner finds an efficiency increment in a category, he broadcasts the genome for consensus.

2. After verification of the genome, all the miners training that category generate the smart-contract transactions to receive currency proportional to the CPU invested in the training (measured in each iteration), because all of them are training the same category.

3. When a miner receives payment, he has to choose another expert to train, so he generates a list of random categories from the taxonomy and selects the simplest of them to train.

The purpose of step 3 is to avoid long search times when the number of categories is large. The miners always process the category that is simplest to train, measured as efficiency_change_in_last_iteration/cpu_for_that_change for the category; this data is extracted from the taxonomy (a shared resource).

I think this is the best way to implement the machine. I will start working on the Ethereum mining code tomorrow, inserting the neuroevolution or custom processing into it as a start.

  • harveybc0harveybc0 Member Posts: 29
    edited May 2014
The taxonomic categories, as shared resources (like BitTorrent files), will contain the history of the category's transactions in the n-blockchain, but also lots of additional application-usable information for each category.

This way, games can use the Singularity AI Engine to save, alongside the neuroevolution data, persistent data of NPCs (non-player characters), like their state (sleeping, scouting, eating, reproducing?) in MMORPGs or other massive games.

While their gaming platforms are client-server, they can connect to Singularity (P2P) for processing all the AI and saving the states of the persistent characters.

But as an AI programming framework it can do a lot more; that is why I am taking so long over the proper protocol definition.

    The category is an object containing:

    -The n-blockchain that serves as evolution and transaction history with the Ethereum Coin.
    -Connections with father and son categories (fractal coordinates)
    -Connections with other categories for describing relations or grouping (fractal coordinates)
    -Descriptions for all connections
-An abstract, organized data set indexed with fractal coordinates for generic application data, like the state of an NPC or an iterative simulation object.

The whole NeuralZoo taxonomy is a category object, for storage of the history of the changes made when users create/modify a category, i.e. creating a game NPC as an AI expert (or without the AI).

Comments, suggestions, questions, criticisms welcome.
  • harveybc0harveybc0 Member Posts: 29
Like the NeuralZoo taxonomy, others can be created for applications other than AI, which can use the Singularity architecture based on Ethereum for resource management of the app.
  • harveybc0harveybc0 Member Posts: 29
    edited May 2014
A friend has an idea for a social app that will run on this P2P architecture. Maybe it would be a good idea to use it as the second app of a marketplace (like those on cell phones) for programs using the Singularity P2P engine, and I will try to use these three apps (distributed marketplace, social, and AI) as a proof of concept and as my first three apps in the Singularity marketplace.
This way it is easier for programmers to make P2P applications and publish them for free or for commercial use.
  • harveybastidasharveybastidas Member Posts: 20
    edited May 2014
I have found another problem with the implementation. In the miners, the consensus is totally vulnerable, as it only depends on a measurement of the CPU/storage/transmission cost, and it can be modified to receive invalid payments. So it will be difficult to generalize (if possible) the AI process without incurring security vulnerabilities.

The only alternative I can see as a replacement for the proof-of-work mechanism is the AI itself, because the consensus is made with the best genome, passed to other peers to confirm its efficiency (fitness), so it is a secure way of reaching consensus, unless a hacker makes a random genome with better efficiency (which would in fact be beneficial for the network), or unless all the miners are compromised in a major hack.

Another important aspect of using AI as proof of work is that the genomes have the desired characteristic of being very difficult to generate (encoding) but very easy to evaluate (decoding) for consensus, using a distributed dataset.

I will make the miner by modifying the Ethereum client, inserting a method that manages the neuroevolution, its n-blockchain, and its consensus mechanism, generating currency in Ethereum via a smart contract based on the last increment of efficiency of each expert.

If someone has a better idea, or something else that can replace or complement the existing mining process, it would help.
  • harveybastidasharveybastidas Member Posts: 20
    edited May 2014
I would love to try the new neuroevolution technique from Dr. Ken Stanley and Sebastian Risi in this architecture, but that may take a while.


Another project using the enhanced neuroevolution algorithm:

An interesting option for a P2P storage and web publishing platform for apps is Freenet:
  • harveybastidasharveybastidas Member Posts: 20
    edited May 2014
For P2P storage I am using GNUnet:


Initially, all the experts will contain coin wallets for Bitcoin, and the miners will use 1% of the CPU to mine Bitcoin (from mining pools) into those wallets; 99% of the CPU and all of the GPU will do the AI training. The miners training an expert will receive bitcoin from the expert (like a mining pool for each expert), plus they will receive currency in a Singularity wallet using the genome proof of work instead of the nonce PoW. When an expert becomes obsolete, it transfers all its wallets' contents to the new expert as an incentive to the miners.

Singularity is used to pay for private AI training capacity, like persistent characters in games, unsupervised training, or a dedicated real-time Forex expert for automated trading. I will publish all advances made, but right now I am just testing GNUnet and starting a program in C++ to test it; it will take a while.

An initial diagram of this idea is attached (in Spanish).
  • CubeSpawnCubeSpawn Member Posts: 33 ✭✭✭
    @harveybastidas In reading through your thread, I see great work here!! I also wanted to give a little update on CubeSpawn. the links are on the CubeSpawn thread http://forum.ethereum.org/discussion/98/cubespawn-an-affordable-distributed-open-source-fms-flexible-manufacturing-system#latest Thx!
  • harveybastidasharveybastidas Member Posts: 20
    edited June 2014
Some nice "dedicated hardware miners" that I would like to use:


    more info:

I am implementing the shared taxonomy object and the initial taxonomy. I am also making a simple initial XOR expert in the taxonomy and training it with NEAT using miner nodes; client nodes can download the trained expert or upload datasets to evaluate on the miners.

Then I will make the currency generation using the genomes as proof of work, as described, so currency can be paid to the miners (this is a difficult part).

Then comes the most difficult task of making the hybrid node, and some application interfaces, but I would love to see that working.

As soon as I have something running I will post it here. I expect to have something by next weekend, but I can promise nothing because I have to work for a living... and I am a little short on time, but the nights are long... :)
  • harveybastidasharveybastidas Member Posts: 20
    edited June 2014
I just finished the infamous fractal Turing machine and the full taxonomy/category system. Now I am working on the actual mining process, and this is the modification to the Bitcoin block I am trying for Singularity. @Stephan_Tual, @Jasper, comments on possible drawbacks of this approach would be greatly appreciated. For now the idea is to put the Merkle tree root hash of the expert's neural network in the coinbase transaction input. The block structure is as follows:

struct BlockHeader { /// Modified from the Bitcoin block definition
    unsigned int version;                     /// same as Bitcoin
    std::string hash_previous_block;          /// same as Bitcoin
    std::string hash_transaction_merkle_root; /// transaction Merkle tree root, same as Bitcoin
    unsigned int time;                        /// timestamp (seconds since 1970-01-01), same as Bitcoin
    // unsigned int bits;  /// packed difficulty (not used in Singularity)
    // unsigned int nonce; /// hash iteration counter; when it overflows, it increments
    //                     /// extraNonce in the generation transaction (and hash_transaction_merkle_root); not used in Singularity

    /// Added section for Singularity:
    /// The Singularity PoW does not check whether the block hash is less than the
    /// unpacked bits (packed difficulty) as Bitcoin does; instead, to validate a block,
    /// the previous expert's efficiency is compared to the newly trained one (proof of work)
    /// using the same dataset (hashed using a Merkle tree root hash).
    double previous_efficiency;           /// efficiency of the previous expert
    std::string hash_dataset_merkle_root; /// Merkle tree root hash for dataset verification (unique expert + unique dataset = efficiency)
    /// The payment for the miners is: efficiency increase (<1) * Kolmogorov complexity * dataset size * dataset entropy
};

After solving these issues, the only part left is the AI training, but that is already implemented, for example, in Encog for C++. Sorry for the delay, but I do not have much knowledge about Bitcoin; I thought it would be easier, hehe. As soon as I have the project complete I will upload it to git and publish documentation on the websites mentioned and on my company home page.

As mentioned before, each expert in the taxonomy will have a blockchain composed of these blocks.
  • harveybastidasharveybastidas Member Posts: 20
    edited June 2014
Correction: the hash (Merkle tree root) of the expert must be in the block (not in the coinbase), because the dataset is fixed for the expert in training, and both expert and dataset are required to verify the efficiency increase.

    So the block structure basically is:

    -version: same as Bitcoin

    -hash_previous_block: same as Bitcoin

    -hash_transaction_merkle_root : same as Bitcoin

    -time: same as Bitcoin

    //bits: removed from Singularity (difficulty increases when efficiency increases)
    //nonce: removed from Singularity

    -previous_efficiency: efficiency of the previous expert evaluating the dataset

    -hash_dataset_merkle_root: merkle tree root hash for dataset verification

    -hash_expert_merkle_root: merkle tree root hash for expert verification

As described before, the Singularity PoW does not compare whether the current block hash is less than the unpacked bits (packed difficulty), as Bitcoin does, to validate a block; instead, the previous expert's efficiency should be less than the one calculated by evaluating the expert with the dataset (proof of work), using the same dataset hashed with a Merkle tree root hash for content verification.
  • harveybastidasharveybastidas Member Posts: 20
    edited June 2014
@Stephan_Tual‌, @Jasper, using the Ethereum block header for Singularity, it would change to:

block_header = [
    parent_hash,
    coinbase_address,
    // difficulty, // not used in Singularity mining
    // nonce,      // not used in Singularity mining

    // added for Singularity mining:
    previous_efficiency,      // efficiency of the previous expert
    hash_dataset_merkle_root, // hash for dataset verification
    hash_expert_merkle_root   // for expert verification (expert + dataset = new efficiency)
]

    If you can help me, my question is:

Which option do you think is best to continue with?

1. Modify the simpler Bitcoin block, as I described before.
2. Make the modification to the Ethereum block, as in this post.

In the first case, to use smart contracts with Ethereum I have to convert the coin mined while training into ether using a smart contract (I do not know how to do that yet), and I would need to mine some ether simultaneously.

In the second case, I can use the advanced capabilities of Ethereum, but I do not know how the change described in the header, required to make external AI training the Ethereum mining PoW mechanism, would affect the rest of the Ethereum framework, since I think no change should be made to the transactions or other aspects of the coin.
  • harveybastidasharveybastidas Member Posts: 20
    edited June 2014
The nonce is used for another purpose in Ethereum, so it must remain the same for Singularity; the only change to the Ethereum block would be to replace difficulty with previous_efficiency and add the two Merkle hashes (expert and dataset) to the block.

But the approach of a configurable difficulty field, plus another field for a set of data specifying the PoW (like the two Merkle tree hashes I am using), could probably be used for other useful, or at least not energy-wasteful, mining mechanisms. The PoW must also require more CPU power to generate than to validate (with the difficulty or efficiency), as in Singularity, to be useful.
  • JasperJasper Eindhoven, the NetherlandsMember Posts: 514 ✭✭✭
    edited June 2014
    One does not simply change the Ethereum protocol.

So... the idea is that there is this thing where people make 'experts', sortah neural-networky kindah things, and there is a problem to solve, and running an expert is quick? The first guy to get a good enough expert wins? Problem is, your problem may turn out too easy, or people find shortcuts and don't share, centralizing the 'expert mining'.

To be honest, I don't even see well enough what the idea is. To be frank, this is not what a proposal looks like, and neural nets are interesting, but I don't see this being very close to something concrete? (edit: I hope I am wrong..)
  • harveybastidasharveybastidas Member Posts: 20
    edited June 2014
Sorry, @Jasper, about the frequent changes in the design, but remember that I started from a concept (read the first post). The purpose of my program is to change the PoW of Bitcoin (preferably Ethereum) to alternatively use AI training in the mining process.

Remember, I am not trying to make a coin but an application using a coin's mining process. I hope the following explanation is not too long or confusing (or both).

"One does not simply change the Ethereum protocol."

Since the beginning I proposed a change of protocol (a change of PoW), and I know it is not as simple as you point out, but it is what I want to do. I do not mean to disrespect the people working on it by thinking it will be easy; that is why I asked you. I want to be useful to Ethereum by making an application and collaborating in any other way I can.

    "So... the idea is that there is this thing ...."

Yes, people can create experts by uploading a training dataset; an expert for that dataset is created and begins to be trained.

The objective of the application is to train all the experts in the taxonomy, trying to get the highest possible efficiency from each one. Each expert is divided into several training instances which are trained by different miners, and they get paid if they manage to achieve an efficiency increase in their training instance.

    "Problem is, your problem may turn out too easy..."

You are right; only a limited quantity of coin is generated for each expert, especially the ones that can reach 100% efficiency. Others will continue to train, never reaching 100%, like a Forex expert.

The good thing is that when they reach high efficiency, people can download the neural network and evaluate it with their inputs in some application, and they don't have to train it again to use it.

But this is another interesting part of the idea, and the actual service which I am trying to implement using Ethereum, either using it to pay the miners (first option in my previous question) or by modifying the block header (second option in my previous question):

During the training, the miners also evaluate experts with the inputs and the state of a user's neural network, and this user makes payments for these evaluations.

This can be a little confusing: the cost of evaluations is measured in the number of connections of the neural network evaluated. To pay the miner that evaluated it, the same quantity of ANN connections counts for training as for paid evaluation, so the evaluations go inside the PoW.

    "To be honest.. "

Thanks for your opinion, and thanks for even reading this, man. You have no idea how many people have laughed in my face at the idea (which I don't care about), mostly for it being a solo project. But people who know about neuroevolution understand my objective, and when it is working I hope everybody will be more familiar with AI. I know the name of the project was maybe a mistake, but I thought it would be easier to understand this way and would attract attention.

    Now my goals are:

1- Taxonomy system: DONE.

2- Ethereum modification (option 2) or interface to Ethereum (option 1), to receive Singularity's PoW (two hashes) and pay the miners for it: IN PROGRESS.

3- Produce the PoW (two hashes) to send to the modified Ethereum, or mine and communicate with the interface: NEEDS goal 2.

The question remains the same: should I try option 2 (Ethereum header modification), or is there a way (option 1) in which I can use Ethereum to pay the miner by generating a transaction after verifying the dataset + expert (efficiency)?
  • harveybastidasharveybastidas Member Posts: 20
    edited August 2014
Sorry for the lack of updates on the forum, but I have had a lot of work in my biotechnology open-source/hardware project:


But I have made some advances in the taxonomy system. As I said before, it will be difficult and take some time. The project is now on GitHub.


I will post about important updates, especially when I need to deal with the cryptocoin aspect of the application.

    More info:
  • tycho01tycho01 Member Posts: 1
    edited October 2014
    Hi @harveybastidas,

I'd been thinking of a similar idea, but ended up at a stumbling block. In a blockchain-like setting, where all peers are to be able to verify proposed solutions, how would you go about preventing overfitting in the domain of machine learning?

In other words, let's say you specify an input/output verification set of "1->1, 2->4, 3->9"; how will you prevent people from cheating the system with solutions like "case x: 1->1, 2->4, 3->9" rather than the intended "f(x): x^2"?

    The way machine learning competition website Kaggle goes about solving this problem is by only making part of their verification set public. In the case of a blockchain system like Ethereum, relying on any central party is not actually possible though, is it?

    Moreover, in the Ethereum development tutorial machine learning is explicitly mentioned as an example of a bad fit for the Ethereum system, apparently because the computational power in the system would be "like that of a 1999 smartphone".

Would there be any way to address these issues? If so, I'd be much interested...
  • harveybcharveybc Cali, ColombiaMember Posts: 9
    edited October 2014
    Hello @tycho01‌

**About the overfitting:**

There are several ways to avoid overfitting in machine learning, almost all of them related to the selected machine-learning technique and the datasets used to train and verify the expert, like the approach you mention of using a training dataset different from the validation dataset. Please look at the following article as an example:


The idea is to have a taxonomy of expert categories and, inside each category, several experts that differ in the training method. As there can be many experts for a category, some of them overfitted, the approach may be to use a user rating system for the experts, with the coin mined having this as a factor. So the top voted/used experts will probably not be overfitted and will pay more if their efficiency increases. This is a big issue, and I am trying to find a solution to this and a lot of other problems, like the ones mentioned by @jasper, but as you can see on GitHub, Singularity is by no means complete and I am having fun experimenting with it in my free time.

**About the Ethereum computational power:**

What I am trying to do, as mentioned in the first post of this thread, is to modify the original Bitcoin header, but I haven't even started with this; the basic approach is shown in the comments of the file:


Sorry for the early state of the file; the taxonomy is still not implemented in the ledger layer of the header (hash_previous_block), and the types used are not definitive.

When I have that implemented (which may take a long while), and have made some minimal decentralized training tests, I will examine whether it is possible to modify the Ethereum block header, or to use something similar to allow transaction programming in the Singularity block header.

I am pretty busy right now with my two biotech hardware projects (gasotransmitters and nanocatalysis of CO2 to ethanol), but I will try to make some progress on Singularity when I have some time.

My goal is to make a decentralized AI training platform based on the PoW concept. Transaction programming (like Ethereum's) is a plus, if feasible.

  • JasperJasper Eindhoven, the NetherlandsMember Posts: 514 ✭✭✭
It's probably just that I don't know the topic well enough to follow the idea in progress.

That said, it is hard to align incentives.
1. If people get to submit problems, it should be hard to submit problems specifically crafted so that the submitter has an advantage in finding the solution. Or it should cost the submitter too much.
2. The aforementioned risk of people finding much better algorithms to find experts, but not sharing them.
3. Let's say you find a submittable expert. It would hurt efficiency, or break it entirely, if it is useful to 'hurt' the expert a little to get a reward for it now, and then later fix the ways it was 'hurt' to get a reward again for it being a bit better than before.
4. In submitting an expert, you don't want to prematurely release the information; before the block, your solution is not secure, and this might allow others to also construct a solution with the same or larger difficulty (efficiency). For instance, you could avoid this by submitting the hash of a solution first, and only later the actual solution.

      With PoW/PoS this is not an issue, because you can just use the address of the block winner to give everyone a different problem.
What do you imagine the transactions on there to be used for? For instance, miners get Singularity coins, and they're mainly for buying the submission of problems for the experts again. Some of the above depends on that. I.e., if you always pay more for submitting problems than is issued to solutions, solving your own problems can't be profitable (other than the solution being useful to yourself). Of course, in principle, you could use the coins for lots of things, even have Ethereum contracts.

One idea I came across, which perhaps you might have too, is to use a VM to compress data instead of learning. For instance, you could make a Merkle tree ending at hashes of (parts of) (uncompressed) Wikipedia articles and their length in bytes; people could verify this Merkle tree before the network accepts solutions. Solutions are 'experts' that output a Wikipedia article but are much smaller than it byte-wise (and preferably don't take too much CPU time; the score could depend on both, and the latter has to be measured entirely consistently across different clients, i.e. gas-like).

Creating a block with a solution solves the data availability problem, as nodes only accept blocks if they can access them. Nodes can know the availability of data; Ethereum contracts do not (unless you submit all the data on there...). That said, there may be workarounds; for instance, you could implement an SPV client in Ethereum and use difficulty as a proxy. But then an SPV client for what you're proposing might not be lightweight enough.
Minor: FractalMachine.cpp says '7', but it's instruction 8.
  • harveybcharveybc Cali, ColombiaMember Posts: 9
    edited October 2014
Hi @jasper, the transactions will be used exactly like the Bitcoin transactions; the only change I want to make is the PoW and the respective modification to the coinbase transactions. I will try to answer your four questions when I advance further in the design later this year.

Forgive me for the lack of answers and progress these days, but my very dear uncle Homero Bastidas, an honest potato and cow farmer like myself, has been kidnapped by the ELN (Ejército de Liberación Nacional) guerrilla on the way from Túquerres (my hometown) to his farm near Samaniego, Nariño, Colombia. From the reports of witnesses, he is hurt from resisting the kidnapping. My family is destroyed and I am very unmotivated, plus we all fear for our own security; you know that kidnappings in my country last for several years, so we are very sad.

I will try to continue advancing my projects, but it is difficult. I hope someday my projects help to end all of this sht we are living through, especially in my country.
  • JasperJasper Eindhoven, the NetherlandsMember Posts: 514 ✭✭✭
Damn, that sounds tough to deal with... Hope your uncle is alright.