State of AI at the dawn of 2021 - A critical review #7
Comments
Thank you for writing the blog post. I was finally able to read it from my backlog, and I enjoyed it.
Hi Daniel,
Glad you enjoyed reading the post. You raise some interesting points. We see the nature of NARS as a tool user in the same way that humans are. Just as we enhance our own 'capabilities' with software 'prostheses', NARS can surely do the same. So the question becomes: where does one draw the boundary between SELF and other? It appears more obvious in our own (human) case, as there is a physical separation between SELF and tools, but this is already starting to blur and will become ever harder to separate as we embed further technology into ourselves.
One could think of NARS as a hybrid solution, but this is not, in my opinion, the correct way to look at NARS. The 'core' of NARS is based on a single unified principle of cognition. This forms the basis for the development of SELF within NARS and is quite separate from, say, plugin software tools such as YOLO. Just as it will become more difficult to draw a line between ourselves and future technologies, so it will be with NARS. Yet at the heart of the system there will remain a core that forms the basis for SELF.
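To make the 'plugin tool' picture concrete, here is a minimal sketch of how output from a vision tool such as YOLO could be handed to the NARS core as ordinary Narsese events. This is an illustration only: the detector output format and the `detections_to_narsese` helper are hypothetical, not part of any NARS release; only the Narsese statement format follows OpenNARS conventions.

```python
# Hypothetical glue code: turn labeled detections from a vision tool
# into Narsese event statements that a NARS reasoner can consume.

def detections_to_narsese(detections):
    """Map labeled detections, e.g. [('person', 0.93)], to Narsese events."""
    statements = []
    for i, (label, score) in enumerate(detections):
        # '<{obj_i} --> [label]>' asserts that an instance has a property;
        # ':|:' marks the statement as a present-tense event;
        # '%frequency;confidence%' carries the detector score as evidence
        # (mapping score to frequency is a design choice, not a NARS rule).
        statements.append(f"<{{obj_{i}}} --> [{label}]>. :|: %{score:.2f};0.90%")
    return statements

if __name__ == "__main__":
    for s in detections_to_narsese([("person", 0.93), ("dog", 0.71)]):
        print(s)  # e.g. <{obj_0} --> [person]>. :|: %0.93;0.90%
```

The point of the sketch is the separation of concerns: the detector is an interchangeable peripheral, while the core only ever sees Narsese evidence and reasons over it with its single unified mechanism.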
In summary, I agree that NARS can become more 'capable' given access to a range of software tools. The question that is more difficult to answer is: can NARS become more 'intelligent' with access to additional tools? Clearly, in the case of humans, tool usage doesn't make us more intelligent but rather more capable. In some ways I see this as one of the major issues humanity faces: as a species we are not intelligent enough to handle our developed capabilities. Given the theoretical constraints of NARS, namely AIKR (the Assumption of Insufficient Knowledge and Resources) and a form of bounded rationality, it will be more human-like than a fully rational oracle (if that is even possible). NARS will make similar errors and be prone to the same type of 'thinking' that humans have. The only proof of concept for 'high-level' intelligence that we are aware of is ourselves. The science-fiction idea of AGI, whilst highly entertaining, is currently just that: fiction. Who knows what the future will bring, but right now the pinnacle of intelligence is ourselves, and I have seen nothing to show that intelligence, as we define it, can come about any other way than through an approach similar to our own.
I have no doubt that there will be many who disagree with this position, and it will be interesting to see the different views.
Regards
…On Thu, 1 Apr 2021 at 06:32, Daniel Jue ***@***.***> wrote:
Thank you for writing the blog post. I was finally able to read it from my backlog, and I enjoyed it.
I think it would be interesting to reflect on the notion of sacrifice that we humans have to endure, compared to an AGI, in terms of narrowing our potentiality as children into focused skills in early adulthood: we give up the possibility of being anything so that we can become something. With an AGI's existence and scales of time being so different from ours, should there be any reason for it not to achieve the same performance as a specialized narrow AI, given the same or even less training? For one, an AGI's transfer of generalized knowledge may jump-start its performance on a yet-unseen task, and if the context of tasks is defined well, then a localized set of hyperparameters could add the nuance of improvement needed to match specialized implementations, with fewer training examples. Secondly, more could be done to wield an acquired narrow intelligence (trained or otherwise) as a black-box tool, the way we use tools of all sorts, or to attach it as a sensory prosthesis, as in your YOLOv4 example.
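As a rough illustration of the 'black-box tool' idea above (every name here is hypothetical; this sketches the interface pattern, not anything from NARS or the blog post), an agent could register narrow models behind a uniform callable interface and compose them without access to their internals:

```python
# Hypothetical sketch: narrow models as opaque, attachable tools.
from typing import Any, Callable, Dict

class ToolUser:
    """An agent that wields narrow models as black-box tools."""
    def __init__(self) -> None:
        self._tools: Dict[str, Callable[[Any], Any]] = {}

    def attach(self, name: str, tool: Callable[[Any], Any]) -> None:
        # The tool is opaque: the agent knows only its name and its
        # input/output behaviour, not how it was trained or implemented.
        self._tools[name] = tool

    def use(self, name: str, observation: Any) -> Any:
        return self._tools[name](observation)

agent = ToolUser()
# A stand-in for any pretrained detector, attached as a sensory prosthesis.
agent.attach("detector", lambda image: [("person", 0.93)])
print(agent.use("detector", "frame_0"))  # [('person', 0.93)]
```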
--
*Tony Lofthouse*
Founder, Reasoning Systems Ltd
+44 (0)7342 230929
Feel free to leave comments about the blog post.
Other ways to reach us:
Real-time team chat: #nars IRC channel @ freenode.net, #nars:matrix.org (accessible via Riot.im)
Google discussion group: https://groups.google.com/forum/#!forum/open-nars
Facebook page: https://www.facebook.com/opennars/
Homepage: www.opennars.org
Team website: https://cis.temple.edu/tagit/