Re: Creation of information
Posted: Mon Nov 04, 2019 2:27 pm
If you will all permit me to redirect the conversation a bit, in the hope of refocusing on the issue Nils is not seeing (to the extent I understand his point of view).

DBowling wrote: ↑Mon Nov 04, 2019 3:40 am
I don't see how you can defend the premise that information produced by the computer program can't be traced back to the source of the program and algorithms that produced it.

Nils wrote: ↑Mon Nov 04, 2019 1:17 am
I don't "arbitrarily choose to ignore the intelligence-generated algorithms". I just notice that they don't contain the piece of information we are talking about, which implies that it isn't possible to trace the information to them. I don't understand how you can defend the idea that you can trace something back to a place where you know it isn't.

DBowling wrote: ↑Sun Nov 03, 2019 3:52 pm
Absolutely...

Nils wrote: ↑Sun Nov 03, 2019 2:17 pm
You think that the piece of information "LRRL" should be traced back to processes that don't contain the information "LRRL".
You just chose to arbitrarily stop at the program execution, even though the program execution itself represents the algorithms and design of an intelligent mind.
Especially when the information that is generated is output from a process that is driven by algorithms developed by an intelligent mind.
You can't arbitrarily choose to ignore the intelligence-generated algorithms that are the critical component of the program and then claim that the information cannot be traced back to those algorithms.
The input that the computer processes may be unknown to the programmer.
But the algorithms and program that process the input to generate the new information were designed by the programmer.
To me, you are trying to defend a position that is obviously fallacious.
The algorithms that process the input to generate the information can be traced back to an intelligent mind.
This shouldn't be difficult to understand.
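Before I add my own two points, let me try to make the disagreement above concrete. Below is a minimal sketch of the kind of program I take the two of you to be arguing about. It is entirely my own illustration, not Nils's actual program: I am assuming "LRRL" stands for a sequence of left/right choices, and the maze, the function names, and the numbers are all hypothetical. What it shows is simply that the literal string "LRRL" appears nowhere in the source code, yet whatever string the program outputs is fixed entirely by the programmer's algorithm together with the input handed to it.

[code]
# My own illustration (hypothetical maze and names), not Nils's program.
# The programmer writes only the search algorithm; the maze can come from
# anywhere and be unknown to the programmer. "LRRL" never appears in the
# source, yet the output is determined by algorithm + input.

from itertools import product

def solves(path, maze):
    """Does this sequence of Left/Right choices lead from start to exit?"""
    pos = maze["start"]
    for turn in path:
        pos = maze["moves"].get((pos, turn))
        if pos is None:
            return False
    return pos == maze["exit"]

def find_path(maze, max_len=8):
    """Brute-force search written by the programmer; what it returns
    depends entirely on the maze it is handed at run time."""
    for n in range(1, max_len + 1):
        for candidate in product("LR", repeat=n):
            path = "".join(candidate)
            if solves(path, maze):
                return path
    return None

# Example input the programmer never saw while writing the code above.
maze = {
    "start": 0,
    "exit": 4,
    "moves": {(0, "L"): 1, (1, "R"): 2, (2, "R"): 3, (3, "L"): 4},
}

print(find_path(maze))  # prints "LRRL" for this particular maze
[/code]

Run against this particular maze it prints "LRRL"; hand the same untouched algorithm a different maze and it prints a different string. That, as I read it, is the crux you two keep circling: the specific output is new to the programmer, but the procedure that produced it is not.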
Nils, it seems to me there are two facets of your computer-algorithm scenario that you're not considering: one that DB and I alluded to before, and another I would also like you to weigh.
- The first is what happens when random mutations hit the algorithm itself at indeterminate intervals: every so often, a single bit in the program is flipped. Please describe what you think will happen after N such mutations. (A toy simulation of this is sketched below, after my second point.)
- The second has to do with human replication of the algorithm. Any algorithm, no matter how simple or complex, can be executed by a human mind. Granted, it might take far more time, perhaps centuries, and it would be prone to errors, but that is beside the point. The point is that since an intelligent mind built the algorithm, an intelligent mind can follow the same instructions, with pen and paper say, until it arrives at the same solution; the computer merely does it much faster, which is precisely what it was designed to do (see the second sketch below). So my question to you is not whether a computer can follow a set of instructions faster than a human mind can; of course it can. Rather: can a computer write an algorithm to solve a given problem without having been given the set of instructions in the first place?
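On my first point, here is a rough toy model, again entirely my own construction (the little double(x) function and the trial count are hypothetical): take the text of a tiny algorithm, flip one random bit at a time, and check whether the mutated program still runs and still gives the right answer.

[code]
# Toy model of random mutation of an algorithm (my own construction).
# Flip single bits in the text of a tiny program and count how many
# mutants still run and still give the right answer.

import random

SOURCE = (
    "def double(x):\n"
    "    return x + x\n"
)

def mutate(src):
    """Flip one random bit in the UTF-8 bytes of the source text."""
    data = bytearray(src, "utf-8")
    i = random.randrange(len(data))
    data[i] ^= 1 << random.randrange(8)
    return data.decode("utf-8", errors="replace")

def still_works(src):
    """Does the mutated source still compile, still define double(),
    and still turn 21 into 42?"""
    env = {}
    try:
        exec(src, env)
        return env["double"](21) == 42
    except Exception:
        return False

random.seed(0)
TRIALS = 1000
survivors = sum(still_works(mutate(SOURCE)) for _ in range(TRIALS))
print(f"{survivors} of {TRIALS} single-bit mutants still work")
[/code]

I deliberately don't quote a number: run it, see what fraction of single-bit mutants survive, and then imagine chaining N of them one after another.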
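And on my second point, here is the same kind of check written out so that every elementary step is printed, which is exactly what following the instructions with pen and paper would look like. Again the maze is the made-up one from my earlier sketch; nothing here is beyond a patient person with a lookup table, only slower.

[code]
# Checking one candidate path the way a person would on paper
# (same hypothetical maze as in my earlier sketch).

maze_moves = {(0, "L"): 1, (1, "R"): 2, (2, "R"): 3, (3, "L"): 4}
START, EXIT_CELL = 0, 4

def check_by_hand(path):
    """One table lookup per turn, writing down the new position each time."""
    pos = START
    for step, turn in enumerate(path, start=1):
        nxt = maze_moves.get((pos, turn))
        print(f"step {step}: at {pos}, turn {turn} -> "
              f"{'dead end' if nxt is None else nxt}")
        if nxt is None:
            return False
        pos = nxt
    print(f"finished at {pos}; the exit is {EXIT_CELL}")
    return pos == EXIT_CELL

print(check_by_hand("LRRL"))  # every line above could be done by hand
[/code]

The computer's only advantage is speed; the instructions themselves, and the mind that wrote them, are doing all the work. That is why the closing question above is the one that matters to me.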