Becoming Cosmically Informed and Cosmically Aligned
People often read about the worthy successor, or generally axiologically cosmic ideas, and ask: “Okay, Dan, but how do I actually DO something about this?” As it turns out there…
Like it or not, we humans share the fate of all forms (individuals, species, substrates): To transform or be destroyed. I suspect we have 15 to 40 years before…
It’s no wonder all the money is flooding into AGI. It will be no mystery when even more of the money in the world is hurled into explicit building or…
Richard Dawkins’ concept of the “extended phenotype”: Genes influence not only an organism’s physical traits (the traditional phenotype) but also its environment and the behavior of other organisms, extending the…
Today, “life” is synonymous with biology. But relatively soon, cyborg entities and AGIs may be able to extend the boundaries of what “life” means. If the…
Imagine a future where a Worthy Successor AGI exists. This is an entity that can continuously expand potentia, is presumably sentient, and is already mostly concerned with achieving goals beyond…
There are people who imagine a future in the year 4000 which is nearly identical to 2025 but with robot butlers, travel to… These people believe, understandably, that humans do…
In 2025, it is no longer intellectually honest to completely shun the idea of artificial general intelligence (AGI) or AGI risk. Yet still, in Dec 2024 (the time of this…
AI alignment typically implies anthropocentric goals: “Ensuring that AGI, no matter how powerful, will serve the interests and intentions of humans, remaining always under our control.” – or – “Ensuring…
Assuming AGI is achievable (and many, many of its former detractors believe it is) – what should be its purpose? I argue that the great (and ultimately, only) moral aim…