distributed operating system with runtime component compiling
Hiveware is a new kind of coöperative software development and execution platform based on natural language formation. Hiveware relates social networking to User Generated Content categories without the need for servers or file/folder artifacts. It is related to ontology and semantic mapping, with one big distinction: Hiveware deals with the creation and organization of content per se, not content put into files and folders. The Hiveware engine is in reality a distributed operating system; its technical description is synchronous groupware with an evolving overlay. Hiveware is a cognitive technology. Instead of trying to duplicate the human thinking capability (i.e., robots), Hiveware augments and extends human thinking with the digital computer. This approach lets humans and computers work coöperatively rather than competitively, with digital thinking supplementing and complementing bio-thinking.
Hiveware stands for Hyperstructured Interactive Virtual Environment softWare and integrates the development and running of software, as well as tracking who makes content contributions and where. Like natural language, Hiveware is a federated yet synchronous architecture, and as such, Hiveware activity is subordinated to an evolving structure that both guides and coördinates the individual contributor.
Today, software development and execution are stuck in the client/server model. Typically, a user first thinks of a question or a query. He or she then steps up to the computer and implicitly asks it or requests a result of some sort (e.g., "Are there national parks in Russia?" or "Give me the movie 'Gone with the Wind'"). The server software, concentrated in a few companies' enormous server farms, is implicitly charged with determining that answer and returning the result in real time based solely on the user's naked query. That is not the way natural language works. In natural language, all utterances and text are expressed from a shared context of fellow language contributors. Computers cannot understand that context, which has been an unsolved Computer Science problem for 50 years. Querying a server with this or that question avoids the issue of context. Consequently, Computer Science, imprisoned under the current model, has not progressed.
Hiveware addresses the context issue head-on by associating desktop and mobile computers in an elaborate peer-to-peer (P2P) network that exactly matches the real conceptual structure of the involved contributors, or Hiveware-to-Hiveware (H2H). Making this possible is SGML, or Standard Generalized Markup Language, an ISO standard from 1986. SGML makes the subtle leap between human language and machine-processable code. The result is Hiveware, the first credible implementation of SGML.
For computer scientists: the Hiveware engine is a new type of compiler. In contrast to today's compilers, which are sequential, the Hiveware engine is long-running: it compiles a parse tree and, instead of throwing it away as today's compilers do, keeps it. Populating a new hive member's hive merely makes a copy of that parse tree for him. And here is the key to understanding it: any change in the system by anyone, be it content or context (i.e., structure), is treated as a parse-tree fixup which gets propagated. The result is network-speed corrections to the hive members' parse trees. The language Hiveware parses is SGML, which, as mentioned, has a subtle connection to natural language, which in turn means Hiveware is natural language made tractable on computers. Thus computers become extensions of human natural language processing instead of replacements for it, as artificial intelligence attempts to be.
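The long-running-compiler idea above can be sketched in a few lines. This is a minimal illustration only, assuming a simple tree of SGML-like nodes; the names `Node`, `Fixup` and `Hive` are hypothetical, not Hiveware's actual API.

```python
# Sketch: a long-running compiler keeps the parse tree and patches it in
# place, so an edit propagates as a small fixup instead of a recompile.
import copy
from dataclasses import dataclass, field

@dataclass
class Node:
    tag: str                          # SGML element name, e.g. "chapter"
    text: str = ""
    children: list = field(default_factory=list)

@dataclass
class Fixup:
    path: tuple                       # index path from root to the node
    text: str                         # new content for that node

class Hive:
    """Each member holds a live parse tree; edits arrive as fixups."""
    def __init__(self, root: Node):
        self.root = root

    def apply(self, fixup: Fixup):
        node = self.root
        for i in fixup.path:
            node = node.children[i]
        node.text = fixup.text        # patch in place; never rebuild

# A new member's hive starts as a copy of the shared parse tree...
doc = Node("doc", children=[Node("title", "Draft"), Node("body", "old text")])
alice, bob = Hive(doc), Hive(copy.deepcopy(doc))

# ...and any change is broadcast as a fixup to every member's tree.
edit = Fixup(path=(1,), text="new text")
alice.apply(edit)
bob.apply(edit)
assert alice.root.children[1].text == bob.root.children[1].text == "new text"
```

The design point illustrated is that a fixup is tiny compared to the tree it corrects, which is what makes "network-speed" propagation plausible.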
Many industries exist whose sole purpose is to make copies of your data in case you experience catastrophic failure of some kind. It should be clear now that making copies of your data outside of the control of Hiveware breaks the Hiveware benefits of ownership, security and privacy.
The Hiveware engine uses three rules to preserve one's data without losing control:
Keep at least three copies of your data.
Keep the backed-up data on two different storage types.
Keep at least one copy of the data offsite. A Hiveware copy is still always in the cryptographic possession of its owner regardless of its location.
No backup strategy is perfect, but Hiveware makes it easy to change the odds of devastating data destruction. Let's say Hiveware's 3-2-1 strategy turns out to be only on par with the mean time between failures (MTBF) of hard drives. That not being good enough today, computer users have to resort to cloud-based backup services. Hiveware could, without architectural or design change, update the system to a 4-3-2-1 strategy, which could increase everyone's data safety by an order of magnitude (not yet measured or calculated).
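The three rules above, and the claim that tightening them to 4-3-2-1 needs no architectural change, can be sketched as a parameterized policy check. The `Replica` record and `check_policy` function are illustrative assumptions, not Hiveware internals.

```python
# Sketch: an N-M-K replica policy (3-2-1 by default) as one function
# whose parameters can be tightened without changing the design.
from dataclasses import dataclass

@dataclass
class Replica:
    device: str        # e.g. "laptop-ssd"
    media: str         # storage type: "ssd", "hdd", "flash", ...
    offsite: bool      # physically away from the primary site

def check_policy(replicas, min_copies=3, min_media=2, min_offsite=1):
    """True when the replica set satisfies the N-M-K backup rule."""
    return (len(replicas) >= min_copies
            and len({r.media for r in replicas}) >= min_media
            and sum(r.offsite for r in replicas) >= min_offsite)

replicas = [
    Replica("laptop-ssd",  "ssd", offsite=False),
    Replica("desktop-hdd", "hdd", offsite=False),
    Replica("cousin-pc",   "hdd", offsite=True),
]
assert check_policy(replicas)                    # satisfies 3-2-1
assert not check_policy(replicas, min_copies=4)  # 4-3-2-1 needs one more copy
```

Moving to 4-3-2-1 is just a change of parameters, matching the text's claim that no redesign would be required.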
hiveware user authentication (identity)
If the CIA wants to hire someone, the new hire will need a security clearance. To get a security clearance, which sometimes takes up to six months, investigators have to interview the new hire's neighbors and relatives. It is this set of familial and historical relations that authenticates the new hire.
The built-in app Hiveware for MyFamily is designed to be the online version of this identity-establishing process. A Hiveware version of this resource-consuming process, which security agencies have to go through, would be more efficient by many orders of magnitude. And the online process would be continual instead of a one-time check that has to be renewed periodically. For example, suppose one of the security questions is, "Have any of your relatives ever joined a terrorist group?" If a relative does so sometime in the future and gets flagged somewhere, then a notification (if that is the way it is set up) might be pushed to the security clearance authorities advising them of this.
Artificial intelligence has made it possible to spoof someone visually. Someone who does this becomes a digital imposter, or deep fake. Securing hot connections to one's acquaintances, family and organization members protects against this.
For everyone: Hiveware will change the following technologies:
data mining – for years, leading-edge software companies have engaged their top personnel in contriving ways to mine after-the-fact information from stagnant data sources, be they books, articles, newspapers or web pages. The number of web crawlers is already legion. The problem with data mining is that the activity is based on a false language-psychological premise: that one can detect with certainty what someone else meant when he wrote or said something about the world. Meaning is something that exists in the mind of the meaning creator, that is, the author of the expression. To capture the true meaning, you have to ask the author. Data mining is just guessing. Hiveware® maintains in hives connections to the authors of their expressions and thus preserves contextual meaning.
cut-and-paste practice – most industries today use the cut-and-paste method of working in groups, and many use Word 2003-vintage software to do it. Professionals work in groups and make their contributions, and the most senior member of the group, or the one with the most Word skills, defaults to being the one who cuts and pastes the drafts together. Each draft disrupts the group's author participants, as they were used to the old draft or had even printed it off with comments on the pages. Hiveware® lets the group's members move together as changes are made; they never have to leave the document or replace it with the next draft.
operating systems – OSs still use the kitchen-sink approach in their architectures. Added functionality still comes in the form of applications on top of the operating system, and the OS itself grows more and more elaborate and vulnerable. Counter-security measures have often resulted in added inconvenience to users and false positives that accuse innocent actions. Theoretically, each PC site only needs the ability to connect to the outside world, a CPU and some RAM. It doesn't even need a hard disk. Once populated with the basic TCP/IP transporter mechanism (a.k.a. the Hiveware® engine), it can begin populating itself with the necessary and sufficient OS tools it needs to do that hive's job, and no more. The "op sys" for any particular hive node becomes the intersection (not the union) of the traditional OS functions needed to run that site's hives and the hive's own functionality. Take, for example, VMware and Citrix: these technologies have the singular purpose of transporting a user virtually to the location of a remote PC's desktop. How ironic and simplistic that the remaining PC's OS has been reduced to teletype (TTY) functionality, not to mention that the enormous power of its CPU is left untapped. Hiveware® simply replicates the data being worked on by groups and computer-assists in controlling who and what can be changed, in a mutually exclusive manner and along the categorization boundaries of natural language grammar as made tractable by SGML. All PCs' CPUs and RAM are fully used without rendering most of the OS useless or transporting the user elsewhere.
context searching – web search today is data-metric, that is, statistical. It data-mines and surmises meaning. Google spends millions of dollars refining its search engine to do just that, and it does it better than any other search engine. But it is not context searching, which is the holy grail of search. For example, Google still cannot discern whether a search for bill has to do with a restaurant, a governmental body or a part of a bird. Hiveware® does this automatically. Eventually, as the number of hives grows, a context search for bill would proceed by the searcher travelling, like a web crawler, down and around the permitting hives until he gets to the meaning group and authors that, on an ongoing basis, define what he is looking for.
load balancing – solving a balancing task would be different from creating a Word template/outline change. The concept would be identified, and a code behavior would be developed for the task. Let's say a music group has a new video and song they want to send out to their subscribers all over the world. The grammar might be: MusicalGroup : ( NewMusicVideo | NewTune ), PushToCloserNetwork*. The next piece of grammar might be: PushToCloserNetwork : SendToMusicGroupSubscriber | RelayContentToCloserNetwork. The point is, the task of the distributed hive would be to balance and distribute the content to the subscribers.
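The PushToCloserNetwork* idea can be sketched as a recursive fan-out: small groups get the content directly (SendToMusicGroupSubscriber), larger ones are split among closer relays (RelayContentToCloserNetwork). The fan-out threshold and relay logic here are assumptions for illustration only.

```python
# Sketch: grammar-driven fan-out. Content spreads through relays rather
# than one server pushing to every subscriber itself.
def push_to_closer_network(content, subscribers, fanout=3, delivered=None):
    """Deliver directly to small groups; otherwise relay to sub-networks."""
    if delivered is None:
        delivered = []
    if len(subscribers) <= fanout:
        # SendToMusicGroupSubscriber: direct delivery
        delivered.extend((s, content) for s in subscribers)
    else:
        # RelayContentToCloserNetwork: split the load among `fanout` relays
        step = -(-len(subscribers) // fanout)   # ceiling division
        for i in range(0, len(subscribers), step):
            push_to_closer_network(content, subscribers[i:i + step],
                                   fanout, delivered)
    return delivered

subs = [f"subscriber-{n}" for n in range(10)]
out = push_to_closer_network("new-tune.ogg", subs)
assert len(out) == 10                            # everyone got the content
assert all(c == "new-tune.ogg" for _, c in out)
```

Each relay only ever handles `fanout` onward connections, which is the balancing property the grammar is after.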
data security – there are no usernames or passwords in Hiveware; Hiveware uses public/private-key cryptography (PKI). PKI signing is a far superior authentication method to today's simplistic username/password challenge.
For an observer to obtain status as a member of a particular node, he must receive a cryptographically signed email from another member. This each-one-signs-up-one practice establishes a secure chain of trust not dependent on today's compromised root authorities like VeriSign, COMODO, etc. The new member goes through a signup process which establishes informed consent for the author to push data to his machine(s), much like an RSS feed today. What is different is that this push feed is just one node inside a tree of other nodes that make up the parse tree (computer science), document structure, target work, context, etc. In addition, each message that is pushed is encrypted.

Other levels of security are Hiveware's ransomware immunity, DDoS immunity and IPv6 SLAAC address-generating obscurity. Because your data is automatically replicated, ransomware cannot encrypt all of it. Because there are no servers, DDoS attacks are by design impossible. Because IPv6 SLAAC addresses are generated by the user's Hiveware nodes, they are inherently private. Because your digital assets are each protected by their own private key that never leaves its creator's possession, they are immune to 'deep fake' forgery. Obviously, if there is no server, there is no server for hackers to target. Lastly, each node in a target work runs on a separate thread and maintains its own TCP/IP endpoints. An endpoint is a network address, like 220.127.116.11 or fe80::9166:700c:c2fb:3e62, plus a port number, like 35258. IPv6 can put an address on anything, and finding that address by random search is close to impossible for a black-hat interloper. Thus there is security by obscurity. Using SLAAC, which generates IPv6 addresses on the fly, eliminates the ISP from having to be part of the chain of trust.
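The each-one-signs-up-one chain of trust can be modeled in miniature. Real Hiveware would use public-key signatures; in this sketch, `toy_sign`/`toy_verify` are deliberately simplified stand-ins (a keyed hash, not a real signature scheme) so that the chain structure itself is runnable. All names here are hypothetical.

```python
# Sketch: membership is an invitation chain, each link signed by the
# inviter, verifiable all the way back to the founding member.
import hashlib

def toy_sign(secret, message):
    # Stand-in for a real private-key signature (NOT for production use).
    return hashlib.sha256((secret + "|" + message).encode()).hexdigest()

def toy_verify(secret, message, signature):
    return toy_sign(secret, message) == signature

class Member:
    def __init__(self, name, secret, invited_by=None, invite_sig=None):
        self.name, self.secret = name, secret
        self.invited_by, self.invite_sig = invited_by, invite_sig

    def invite(self, new_name, new_secret):
        # The signed "email" binds the newcomer to an existing member.
        sig = toy_sign(self.secret, new_name)
        return Member(new_name, new_secret, invited_by=self, invite_sig=sig)

def chain_is_valid(member):
    """Walk the invitation chain back to the founder, checking each link."""
    while member.invited_by is not None:
        if not toy_verify(member.invited_by.secret, member.name,
                          member.invite_sig):
            return False
        member = member.invited_by
    return True                      # reached the founding member

founder = Member("founder", "s0")
alice = founder.invite("alice", "s1")
bob = alice.invite("bob", "s2")
assert chain_is_valid(bob)           # every link checks out
bob.invite_sig = "forged"
assert not chain_is_valid(bob)       # a forged invitation breaks the chain
```

No root authority appears anywhere: trust flows only through the signed invitations themselves, which is the design point of the paragraph above.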
The cost of the added PKI encryption on messages is mitigated by the fact that key-generation time is less critical for Hiveware: the psychological wait for a key to be generated is not placed at exactly the wrong point in time, as it is for client-server. Hiveware message-key generation occurs when the observer is, so to speak, not looking. The default encryption bit length is 1024 bits, but this can be set as a preference. Longer keys take longer to generate but are, of course, more secure.
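Generating keys "when the observer is not looking" can be sketched as a background-filled key pool: a worker thread pre-generates keys so one is ready the instant a message needs it. The pool size and the stubbed `generate_key` are assumptions for illustration; real key generation would use an actual asymmetric algorithm.

```python
# Sketch: move slow key generation off the user's critical path by
# keeping a pool of pre-generated keys topped up in the background.
import queue
import secrets
import threading

def generate_key(bits=1024):
    # Stand-in for real (slow) asymmetric key generation.
    return secrets.token_bytes(bits // 8)

class KeyPool:
    def __init__(self, size=4):
        self.pool = queue.Queue(maxsize=size)
        threading.Thread(target=self._fill, daemon=True).start()

    def _fill(self):
        while True:
            self.pool.put(generate_key())   # blocks while the pool is full

    def take(self):
        return self.pool.get()              # usually instant: pre-generated

pool = KeyPool()
key = pool.take()
assert len(key) == 128                      # 1024 bits, the stated default
```

The user-visible latency of `take()` is near zero in the common case; generation cost is paid during idle time instead.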
“Natural speech reveals the semantic maps that tile human cerebral cortex”
“The meaning of language is represented in regions of the cerebral cortex collectively known as the ‘semantic system’. However, little of the semantic system has been mapped comprehensively, and the semantic selectivity of most regions is unknown. Here we systematically map semantic selectivity across the cortex using voxel-wise modelling of functional MRI (fMRI) data collected while subjects listened to hours of narrative stories. We show that the semantic system is organized into intricate patterns that seem to be consistent across individuals. We then use a novel generative model to create a detailed semantic atlas. Our results suggest that most areas within the semantic system represent information about specific semantic domains, or groups of related concepts, and our atlas shows which domains are represented in each area. This study demonstrates that data-driven methods—commonplace in studies of human neuroanatomy and functional connectivity—provide a powerful and efficient means for mapping functional representations in the brain.” by Alexander G. Huth, Wendy A. de Heer, Thomas L. Griffiths, Frédéric E. Theunissen and Jack L. Gallant.
This paper elucidates the neural encoding aspects of the Homo sapiens brain. Hiveware augments this biological semantic mapping through the use of the reified SGML map. This architecture stands in stark contrast to the brain-within-a-computer and robotics developments, which seek to replicate the brain using the von Neumann digital computer or neural networks.
hiveware explanation by topic
What is an anonymous mirror?
A Hiveware mirror is a running Hiveware app that is an exact copy of your current running app. It is an exact copy because each time you make a change in your current app, the same change is made to its mirror app. Traditionally, this activity has been called backup, but traditional backup has several flaws: it is file/folder-based, and it has to send enormous quantities of your data across the insecure Internet to somewhere else. You lose ownership of your data. A file does not represent the structure of your data. Consequently, retrieving a backup file may let you recover your currently lost data, but it may also erroneously back-date previous changes, giving you a mixed, erroneous copy of your data. What makes a copy anonymous is that the Hiveware algorithm chooses the mirror device from among all available devices without your, or a potential hacker's, knowledge of where the mirror is geographically located. What was once an Internet liability, the copying of files, becomes a security strength.
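The anonymous-mirror behavior can be sketched in two parts: an unpredictable choice of mirror device, and the replay of every change onto that mirror. The device names and the key/value change format are assumptions for illustration.

```python
# Sketch: pick a mirror device unpredictably, then replay every change
# onto it so the mirror stays an exact copy of the running app's state.
import secrets

class MirrorSet:
    def __init__(self, devices):
        # secrets.choice gives an unpredictable pick, so neither the
        # owner nor an attacker can know in advance where the mirror is.
        self.mirror_device = secrets.choice(devices)
        self.primary, self.mirror = {}, {}

    def apply_change(self, key, value):
        self.primary[key] = value
        self.mirror[key] = value        # each edit is replayed on the mirror

devices = ["home-nas", "cousin-laptop", "office-desktop"]
m = MirrorSet(devices)
m.apply_change("chapter-1", "draft text")
m.apply_change("chapter-1", "final text")
assert m.mirror == m.primary            # the mirror is always an exact copy
assert m.mirror_device in devices       # its location was chosen for us
```

Because the mirror tracks changes rather than restoring whole files, there is no stale snapshot to back-date your data with, which is the contrast the paragraph above draws against file-based backup.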
Three Steps to become a Hiveware® for <your expertise area> owner and entrepreneur:
Pick out a domain area that you have experience in and an entrepreneurial interest in.