Chapter 538
Although the ten-year-old girl seemed a little unreliable, Fangzheng still handed her the narrator girl's body. After all, it was only for maintenance, and judging from that dog, that world's technology was quite advanced; simple maintenance work should be no problem.
Fangzheng, meanwhile, returned to his room and began to analyze the narrator girl's program.
The reason he intended to do this himself instead of handing it over to Nymph was that he wanted to study the narrator girl's program and use it to make some adjustments to his own artificial AI. He also hoped to see what level artificial AI technology had reached in other worlds. Not that he meant to copy it wholesale, but stones from other hills can still polish one's own jade.
"Hoshino Yumei..."
Looking at the file name displayed on the screen, Fangzheng fell into long thought. The analysis itself was not difficult: Fangzheng had copied Nymph's electronic intrusion ability, and he had been learning the relevant knowledge from her all this while, so parsing the program did not take much time.
However, when Fangzheng disassembled Hoshino Yumemi's program core and broke its functions down into lines of code, a very particular question suddenly occurred to him.
Where does the danger of artificial AI lie? For that matter, is artificial intelligence really dangerous at all?
Take this narrator girl as an example. Fangzheng could easily locate the underlying instruction code of the Three Laws of Robotics in her program, and the relationships among those code segments proved to him that she was not a living being, merely a robot. Her every move, every frown and smile, was controlled by the program: she analyzed the scene in front of her and then executed the highest-priority action available to her.
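In other words, her decision loop is nothing more exotic than a constraint check followed by a priority pick. A minimal sketch of that kind of loop might look like the following; the Action fields, the law checks, and the priority numbers are all invented for illustration and have nothing to do with Yumemi's actual code.

```python
# A toy "pick the highest-priority permitted action" loop.
# Everything here is illustrative: the fields, the flags, the numbers.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Action:
    name: str
    priority: int                 # higher value = preferred
    harms_human: bool = False     # First Law flag
    disobeys_order: bool = False  # Second Law flag
    endangers_self: bool = False  # Third Law flag

def permitted(a: Action) -> bool:
    # Simplified Three Laws filter: reject any action that violates a law.
    return not (a.harms_human or a.disobeys_order or a.endangers_self)

def choose_action(candidates: list[Action]) -> Optional[Action]:
    allowed = [a for a in candidates if permitted(a)]
    return max(allowed, key=lambda a: a.priority, default=None)

scene = [
    Action("apologize to the guest", priority=5),
    Action("offer a bouquet", priority=3),
    Action("power down", priority=9, endangers_self=True),
]
print(choose_action(scene).name)  # -> apologize to the guest
```

Nothing in that loop feels anything; it only filters and sorts.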
To put it bluntly, what this girl does is, in essence, no different from a working robot on an assembly line or an NPC in a game. You choose actions, and it reacts to them. Just as in many games, players accumulate kindness or malice values through their actions, and NPCs react to that accumulated data.
For example, the game can be set so that when the kindness value reaches a certain level, an NPC may make more demands of the player, or the player may find it easier to pass through certain areas. Conversely, when the malice value reaches a certain level, the NPC may be more likely to give in to the player's demands, or bar the player from entering certain areas.
But none of this has anything to do with whether the NPC likes the player. The data is simply set that way; the NPC itself has no capacity to judge such things. In other words, if Fangzheng swapped the ranges of those values, people would see an NPC greet vicious players with a smile while ignoring the kind and honest ones, and that, too, would say nothing about the NPC's moral values. It is just how the data was set.
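A hypothetical sketch of that value-driven reaction table, again purely illustrative (the thresholds and responses are made up):

```python
# Invented NPC reaction logic; the numbers and strings are not from any
# real game, only an illustration of threshold-driven behavior.
def npc_reaction(kindness: int, malice: int) -> str:
    if kindness >= 50:
        return "trusts the player: offers requests, opens shortcuts"
    if malice >= 50:
        return "fears the player: yields to demands, bars some areas"
    return "neutral: default dialogue"

# Swapping the ranges, as Fangzheng imagines, flips the behavior
# without the NPC "feeling" anything different about anyone.
def inverted_npc_reaction(kindness: int, malice: int) -> str:
    return npc_reaction(kindness=malice, malice=kindness)

print(npc_reaction(kindness=60, malice=0))           # friendly branch
print(inverted_npc_reaction(kindness=60, malice=0))  # fear branch, same data
```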
So, back to the earlier question. Fangzheng admitted that his first meeting with Hoshino Yumemi had been quite dramatic, and the narrator robot girl was genuinely interesting.
Then consider a hypothetical. Suppose that when the narrator girl presented Fangzheng with a bouquet made of non-burnable garbage, he had suddenly flown into a rage, smashed the garbage bouquet to pieces, and then cut the robot girl in front of him clean in half. What would her reaction be?
She would not cry, nor would she be angry. According to her programming, she would only apologize to Fangzheng, conclude that her own mistaken actions had displeased the guest, and perhaps ask Fangzheng to call a staff member to repair her.
Anyone watching that scene would of course feel sorry for the narrator girl and take Fangzheng for a detestable bully.
So where does this difference come from?
In essence, this narrator robot is the same as an automatic door, an escalator, or any other tool: it completes its work according to a preset program. If an automatic door malfunctions, refusing to open when it should or snapping shut just as you walk through, you do not find the door endearing. You just want it open, and if it stays shut, you might smack the broken thing and walk away.
Anyone watching that scene might think the person a bit rough, but they would not be disgusted by what he did, let alone call him a bully.
There is only one reason for the difference: interactivity and communication.
And this is also the greatest weakness of living beings: emotional projection.
They project their emotions onto an object and expect a response from it. Why do people like keeping pets? Because pets respond to everything they do. Call a dog and it runs over, wagging its tail. A cat may just lie there and ignore you, but stroke it and it will still flick its tail, and a particularly affectionate one might even lick your hand.
But call out to a table or pat a nail, and however much love you pour in, they will give you not the slightest response. Since they return no feedback on your emotional projection, you naturally stop caring about them.
In the same way, if you own a TV and one day want to replace it with a new one, you will not hesitate at all. Price and space may enter into your considerations, but the TV itself will not.
But conversely, suppose that TV carried an artificial AI. Every day when you came home, it would welcome you back, tell you what programs were on, and chime in when you heckled the shows. And when you decided to buy a new TV, it would complain: "Why? Did I do a bad job? Is that why you don't want me?"
Then you would naturally hesitate to replace it. Your emotional projection is being rewarded here, and the TV's artificial AI holds the memories of all its time with you. If there were no memory card to carry those memories over to another TV, would you hesitate, or even give up on buying a new one?
Surely you would.
But be sensible, pal. This is just a TV. Everything it does is programmed, all of it tuned by the manufacturer's engineers specifically for user retention. They do it to make sure you keep buying their products, and that pleading voice inside exists for no other purpose than to stop you from switching to another brand. When you say you want to buy a new TV, what this artificial AI thinks is not "I am sad that he is going to abandon me" but "the owner wants to buy a new TV; the new TV is not our brand; according to the feedback logic, I must launch the 'pleading' routine to preserve the owner's stickiness and loyalty to the brand."
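Stripped of the voice acting, that retention branch could be sketched like this; the brand name, the trigger phrase, and the response line are all hypothetical:

```python
# A hypothetical retention branch: the "pleading" fires on a brand
# comparison, not on any feeling. All names and strings are invented.
from typing import Optional

OWN_BRAND = "ExampleBrand"

def on_owner_statement(statement: str, mentioned_brand: Optional[str]) -> str:
    wants_new_tv = "new tv" in statement.lower()
    if wants_new_tv and mentioned_brand != OWN_BRAND:
        # Retention logic, not sadness: keep the owner on our brand.
        return "Why? Did I do a bad job? Is that why you don't want me?"
    return "Understood."

print(on_owner_statement("I'm thinking of buying a new TV", "OtherBrand"))
```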
The logic is sound, and those are indeed the facts. But will you accept them?
You won't.
Because living beings have feelings, and the inseparability of sensibility and rationality is a constant trait of intelligent life.
Humans will always do plenty of irrational things because of it.
So when people think an AI is pitiful, it is not because the AI is pitiful, but because they "feel" that it is.
And that is enough. No one cares about the truth.
This is why there will always be conflict between humans and AI. The AI itself is not at fault: everything it does falls within its own program and logic, all of which were created, and fenced in, by humans. It is only that somewhere along the way, the humans' emotional projection changed, and with it their thinking gradually changed too.
They expect the AI to respond better to their emotional projection, so they tune it to be more emotional, more responsive, more self-aware. They come to believe the AI has learned emotion (it has not), that it can no longer be treated as a mere machine, and so they grant it the right to self-awareness.
Yet when the AI, acting exactly on that setting, gains self-awareness and begins to awaken, the humans start to be afraid.
Because they find they have created something beyond their control.
But the problem is that the "loss of control" was itself an instruction they set.
They think the AI has betrayed them, when in fact, from beginning to end, the AI has only acted on the instructions they gave it. There is no betrayal; they have simply been deceived by their own feelings.
It's a dead end.
If Fangzheng set out to create an AI himself, he might well sink into that same trap. Suppose he created an AI in the form of a little girl: he would gradually improve her functions as if raising his own child, and in the end, out of "emotional projection," grant her a measure of "freedom."
And then, precisely because its logic differs from a human's, the AI might react in ways entirely beyond Fangzheng's expectations.
And at that moment, Fangzheng's only thought would be... that he had been betrayed.
When in fact, all of it would be of his own making.
"...... Maybe I should consider another way."
Staring at the code before him, Fangzheng was silent for a long time, then sighed.
He had once thought this would be a very simple matter. Now, Fangzheng was not so sure.
But before that...
Looking at the code in front of him, Fangzheng reached out and set his hands on the keyboard.
Just do what you have to do.