@DETio said: VirtEngine resolves these limitations, so the use-cases are the pain points caused by the limitations above.
I get where you're coming from. But what I'm specifically saying is that this just seems like a laundry list of "ok how can we use what we made" instead of "ok this is the problem we're trying to tackle and turns out this is the best way to do that job".
Also, some of these are just plain wrong.
@DETio said: (9) There are no publicly commercially available Supercomputers that can be rented by anyone, the only available systems are reserved for scientists and researchers such as Summit, Sierra and Frontera supercomputers.
This is incorrect. There are publicly, commercially available HPC clusters people can get access to. I am a former modeler and consumer of HPC solutions, and I have used some of the public services for research and model-building capacity.
@DETio said: (12) Blockchain Proof of Work networks such as Bitcoin are inherently extremely wasteful on computing capacity.
This is just a fact. This isn't a use case.
Like... I could be nit-picky about every single one of these use cases, but what this list shows me is that it's just a list of "how could this tech be used", not "is this tech actually effective at this". And my first reaction when reading this document is... no. There are better, already more established ways of approaching this.
@HalfEatenPie said:
I've looked at it but it seems to be a solution that's trying to find a problem. I still have yet to see a valid problem or a rationale for this technology or solution. Not shittalking just trying to see why I need to look deeper into this.
Can you please clarify what you are referring to exactly?
Provide real world use cases.
Pulled from VirtEngine Protocol paper pages 3+4
Current Limitations with Background Techniques used in these technologies:
(1) Traditional Cloud Computing relies on Centralized Entities to build and manage their
Cloud Computing services, resulting in a Single Point of failure and risk of data
loss/downtime.
(2) Decentralized Systems do not provide a method for Identifying and Verifying users
for KYC (Know Your Customer) regulations required by most government agencies
when dealing with finance.
(3) Decentralized Computing through Blockchain does not offer access to HPC, instead
relies on users to implement custom code such as Smart Contracts to access
compute power offered by Blockchains.
(4) Distributed Computing is dependent on centralized entities to manage and deploy
the network.
(5) Supercomputers rely on Centralized entities to deploy and manage infrastructure.
(6) Distributed Computing networks are limited by network interconnection between
nodes thus making them unsuitable for certain real-time use cases.
(7) Supercomputers are limited by physical infrastructure constraints; however, they provide
higher network interconnection between compute nodes compared to Distributed
Computing.
(8) There are no publicly commercially available Distributed Computing systems.
Folding@Home is an example of an existing Distributed Computing system; however,
access is limited to specific Scientists and Research Organizations within the Protein
Folding space.
(9) There are no publicly commercially available Supercomputers that can be rented by
anyone, the only available systems are reserved for scientists and researchers such
as Summit, Sierra and Frontera supercomputers.
(10) Supercomputers require tremendous CAPEX funds to implement and deploy.
(11) Cloud Computing can provide a limited alternative to supercomputer
capabilities, but is limited by the amount of hardware that can be deployed and
dedicated to HPC tasks.
(12) Blockchain Proof of Work networks such as Bitcoin are inherently extremely
wasteful on computing capacity.
(13) Cloud Computing services can be underutilized, leading to wasted
computing capacity, because providers must keep additional capacity deployed
to deal with surges in demand.
(14) Consumer compute devices such as mobile phones and PCs are often
underutilized, as their computing capacity is not always actively used to its
full extent.
VirtEngine resolves these limitations, so the use-cases are the pain points caused by the limitations above.
"Blockchain Proof of Work networks such as Bitcoin are inherently extremely
wasteful on computing capacity." - was already solved with PoS.
My real advice (2 cents) is to make everything more readable with normal people words and a better selling point.
People aren't consuming what you have to say, because they aren't opening it.
It really isn't. I'm pro-PoW, I'll admit, and I see value in both consensus approaches, but outside the engineering it comes down to ideology. Neither approach technically replaces the other, yet they always end up in a PS3/Xbox fanboy fight.
The crux of it, though, is that PoW is intentionally wasteful: any efficiency gain changes the game theory so that bad actors face fewer obstacles to gaining something illegitimately, and thus efficiency broadly equals centralization.
PoS is technically more efficient, but it removes energy expenditure as the game-theoretic security mechanism and replaces it with stock-like ownership. That might sound good because users own the network, but the result is (1) the big fish end up in control, since money = power in this setup, and (2) you always have to start centralized, using VC or token raises, versus mining, which can work like BTC did from the start.
The same tension between efficiency and game theory applies to anti-abuse systems on servers or web services, by the way: the stricter you are, the more inefficient and frustrating it is for the end user, but the more secure the system.
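To make that trade-off concrete, here is a rough back-of-the-envelope sketch. Everything in it is an assumption: the function names are mine and every number is a made-up placeholder, not a real network figure. The point is only the shape of the two cost models: under PoW an attacker pays an ongoing hash-power/energy bill for the duration of an attack, while under PoS the cost is mostly upfront capital, i.e. acquiring a majority of the stake, which is exactly the "money = power" concentration described above.

```python
# Illustrative-only comparison of 51% attack costs under PoW vs PoS.
# Every figure below is a made-up placeholder, not real network data.

def pow_attack_cost(network_hashrate_th, cost_per_th_hour, hours):
    """Ongoing cost: the attacker must out-hash the honest majority for the
    whole duration of the attack (roughly, hardware/energy rental)."""
    attacker_hashrate = network_hashrate_th * 0.51
    return attacker_hashrate * cost_per_th_hour * hours

def pos_attack_cost(total_staked_tokens, token_price, slash_fraction=1.0):
    """Upfront cost: the attacker must acquire a majority of the stake and
    risks having it slashed (burned) if the attack is detected."""
    stake_needed = total_staked_tokens * 0.51
    capital_at_risk = stake_needed * token_price
    return capital_at_risk * slash_fraction

if __name__ == "__main__":
    # Placeholder numbers purely for illustration.
    print(f"PoW, 6-hour attack: ${pow_attack_cost(400_000_000, 0.08, 6):,.0f}")
    print(f"PoS, stake at risk: ${pos_attack_cost(30_000_000, 40.0):,.0f}")
```

Again, the dollar figures mean nothing; what matters is that one cost is an operating expense paid in energy and the other is concentrated capital, which is the efficiency-versus-centralization trade-off in a nutshell.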
I think the complexity of it (especially in further iterations) would require deep learning.
But we shall see.
No, we won't. They already model the protocols under design before implementing. You don't get good AI without data first. You don't fully implement something like that without the protocol being worked out first. Anyway...
@SirFoxy said: My real advice (2 cents) is to make everything more readable with normal people words and a better selling point.
I recommend reading about the Sovereign Industrial Capabilities Priority of Australia; this system matches all of their capability requirements and is associated with the Federal Government of Australia.
@TimboJones said: No, we won't. They already model the protocols under design before implementing. You don't get good AI without data first. You don't fully implement something like that without the protocol being worked out first. Anyway...
You are correct. Not only is this an RFC, but it is a patent-protected RFC, thus setting new standards for Computing and AGI while also locking them up at the same time.