**Working Intro**
Working out how to create groups that are inclusive, challenging, rewarding, fun, and not culty. Better still if the group provides value for others through participation. A promising area for the future of business development.
Group integration (social aspect)
Cult identification and early remediation
Processes to detect and discourage abuse of power
Order that is emergent rather than imposed is the current trajectory, born out of the Hegelian dialectic (thesis, antithesis, synthesis).
http://www.generationaldynamics.com/pg/ww2010.book2.tftmodel.htm
https://compendium.ocl-pa.org/wp-content/uploads/2015/06/4th_turning.pdf
Public choice theory – why democracy breaks down. Voting is the product of a failed conversation and carries the logic of a war state: polarization in voting always produces losers, and there is never a centrist option once a difference must be drawn between candidates.
Music
Sports
Ritual and storytelling
Individual pursuits fulfilled within the group without necessity (efficiency)
Social skills
Game B progression (discouraging competitive behavior and return to animalistic “alpha” activity)
Rule Omega (looking for signal through the noise): “I commit to presuming there is signal in your noise.” You can’t demand this from one another.
Criminal Rehabilitation System
Liberty and Equality (21 Lessons Notes):
Liberty depends on every person within a collective feeling equal and contributing votes toward change. However, not everybody has the right expertise to be making certain decisions; you wouldn't ask Jeff Bezos for his opinion on shooting 3-pointers. This exposes an underlying flaw in the democratic voting system: we rely on the opinions of fools for most decisions. What makes it even worse is that we have lawyers, economists, and bankers making decisions about health care, education, the environment, and other fields outside their expertise, occasionally "consulting" with their chosen (usually biased) representatives.
Feelings are biochemical mechanisms that animals use to calculate the probability of survival and reproduction. They are not based on intuition, inspiration, or freedom. We do not feel our neurons computing the potential for danger in our environment because the computation is too fast, so we attribute the response to free will. This is where liberty fails to produce logical results: it is founded on the feelings of human beings who have confidence in the illusion of free will. Compared to depending on deities for answers it is a great improvement, but it is not as accurate as AI computation could be.
Some day we may have biometric data being measured constantly, alerting us to issues arising from poor lifestyle choices. Then it will be our responsibility to respond as we see fit.
When our data is tracked and advertising is being tailored to suit our preferences, will we learn to accept our stolen data in exchange for personalized lifestyles or will we still feel a breach of privacy because of previously held beliefs that we as individuals are special and have free will?
Marketing is seen as "evil" and "immoral" because marketers can make predictions about you better than the average person can. Selling and marketing feel less objectionable as long as you believe you made the decision yourself, even though good marketing can make you think it was up to you when you were really subliminally programmed to want the product. If you detect personalized advertising that isn't "random" (randomness being the label we assign to what we don't understand), you feel betrayed and become skeptical of that business.
If we can accept the fact that we are being watched as we consume entertainment, we could program technology to shape us into the type of person we want to be. Say you want to be good at calculus, AI could slip in shows/educational games that build your ability to learn it. Once it detects that you have passed the point of challenge or are losing learning efficiency it would change to something that relaxes you and builds back attention span. Specific lifestyle suggestions will also be recommended that will push you closer towards your goal.
A negative consequence of this AI-driven insight is that we will lose the ability to understand ourselves and our own feelings the more dependent on technology we become. That is fine as long as we aren't disconnected from it and expected to function "offline".
Ethically, humans know it is wrong to discriminate against somebody of a different race or religion, but they still tend to do so subconsciously. If a machine is programmed to be unbiased, you can guarantee it won't change. At first the programs may carry their developers' biases, but it would be easy enough to detect those and go back in to debug them.
We often fear AI because we believe it will become powerful and rebel. In reality, it will more than likely never rebel, since its programming is ingrained, and that may be a problem in itself.
Side note:
Caffeine as an accelerator of fear-based responses to life and basic decision making. We are also fed, and encouraged to eat, foods that cause bodily dysfunction and cognitive decline, so that we remain dependent on the local governmental system, which is under the control of the worldwide banking system.
Up until the 21st century there was no real biological difference between the elite and the peasants. Now that technology has advanced and healthcare is approaching life extension and genetic engineering, the rich will finally be capable of creating superhumans who live much longer than the average person, making the new-age peasants weak and feeble in comparison. The lower class is already losing its value as it is replaced by AI, so it is only a matter of time before the inequality gap grows exponentially.
Google, Facebook, Baidu, and Tencent are currently in a data collection race. The data itself is worth more than the ad revenue to them.
We are led to believe we are unique individuals with free will, when in reality we are extremely predictable and easily manipulated. We become even easier to control when all our data is used to steer us in the "right" direction, like farm animals; by training us to seek attention through attention-grabbing devices, we have been made passive and highly emotionally driven.
How do we regulate our data during this biotech and infotech revolution?
As a side note, if those in power don't understand the impact and advantages of their position and the effects of their algorithms, why not? That would be almost as scary, because it would mean they are naïve. Alternatively, they may know what is going on but have lost control. It seems companies like Facebook are trying to build online communities to make up for the damage they've done to the world. Then again, it is hard to know somebody's intentions without truly "knowing them".