The Flat Earth Society

Flat Earth Discussion Boards => Flat Earth Investigations => Topic started by: Dr David Thork on June 24, 2018, 04:11:32 PM

Title: Google AI
Post by: Dr David Thork on June 24, 2018, 04:11:32 PM
Welp, brand new forum, more flexibility to look at all kinds of conspiracy theories etc., so I'll kick things off.

https://www.chess.com/news/view/google-s-alphazero-destroys-stockfish-in-100-game-match

So Google's AI has been coming on in leaps and bounds. They basically have a neural network: give the thing the rules of whatever it is doing and let it work out how to do it. In the example above, Google told their AI the rules of chess. And that's it. Just the rules. They didn't teach it any openings or tactics; they taught it absolutely nothing else. Four hours of self-play later, not only could it destroy any human on earth, it also stuffed the world's best chess program in a 100-game match by winning 28, drawing 72 and losing none.

One of the games I have seen is very interesting. Google wins as black (harder to do). Twice during the game, white offered a draw by threefold repetition (repeating the same position three times). In both cases, Google's AI chose to weaken its own position rather than accept the draw, picking an alternative move. This was completely unexpected by programmers, who expected it to only ever want to strengthen its position in a game. Stockfish (the rival machine) ends up resigning about 83 moves in ... note most people didn't even know Stockfish could resign; no human has forced it to do that. Also, Stockfish calculates around 70 million positions per second, while Google's machine was only looking at about 80,000 positions per second. It seems it just understood chess better than anyone ever has. Google has only released 10 of the 100 games to the public, but it is already clear it doesn't play chess like any human, or any computer for that matter. It really does have its own way of doing it.
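As an aside, the "draw by repetition" mechanic is simple enough to sketch. A minimal illustration in plain Python (positions here are opaque placeholder strings; a real engine would hash the full board state, including side to move and castling rights):

```python
from collections import Counter

def repetition_draw_claimable(position_history):
    """True if any position has occurred three or more times.

    position_history: a list of hashable position snapshots. In real
    chess the snapshot must capture side to move, castling rights and
    the en-passant square, not just where the pieces sit.
    """
    return any(n >= 3 for n in Counter(position_history).values())

# Toy example: shuffling a piece back and forth revisits "posA" 3 times.
history = ["posA", "posB", "posA", "posB", "posA"]
print(repetition_draw_claimable(history))  # True
```

Stockfish shuffled like this to offer the draw; AlphaZero declined by playing a move that broke the cycle.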

Google's DeepMind has also used the same approach to dominate the game of Go; a separate lab, OpenAI, has used similar self-play techniques on Dota 2.

The thing is, I wonder what else Google uses this for, and I'd be interested in your opinions. For example, tax avoidance would be an obvious choice. The rules are known (the law); let the algorithm find ways of being creatively efficient at avoiding tax and milking subsidies.
Serving adverts more likely to make you buy things is another obvious choice.
Altering political opinion by pushing certain view points to sway elections is another I can see google doing, using the algorithm to be as persuasive as possible.

Google keep talking about curing cancer and other altruistic things like this ... but I don't see them using this power for good.

What do you think google will do with AI, and how can the public defend themselves against it?
Title: Re: Google AI
Post by: Dr David Thork on June 24, 2018, 04:40:07 PM
I managed to find a commentary on the Google win as black that has been released, for those not wanting to look through a list of moves.

https://www.youtube.com/watch?v=0g9SlVdv1PY
Title: Re: Google AI
Post by: Rama Set on June 24, 2018, 09:30:27 PM
Wrong forum?
Title: Re: Google AI
Post by: Dr David Thork on June 24, 2018, 10:59:38 PM
Is it?

My understanding is that you can look at a wider variety of topics in here, but are still subject to upper-fora rules. To that end, it is you who made the error, with a low-content off-topic post.

It's a new thing; we'll see how it plays out. The idea is to allow a broader array of topics, with a stringent set of forum rules to keep things civil and productive. We wanted to encourage more debate, not just keep rehashing the same old tired arguments about gravity and sunsets in the upper fora. It's an experiment. If it works and we all enjoy it, we'll keep it. If it fails, we'll can it, Google Glass style. Read the debate club threads in S&C.

Google is a company pushing round-earth propaganda, so discussion about its credibility in other areas would be valid ... as far as I understand. So, do you have anything to say about Google, AI or chess programs?
Title: Re: Google AI
Post by: Rama Set on June 25, 2018, 12:27:04 PM
Yeah my bad. I thought that it still had to be FE related.

I don't think the Google AI dominated go, but it certainly won against the world's top player. Another AI, not developed by Google, defeated a group of humans in a poker game (https://www.wired.com/2017/01/mystery-ai-just-crushed-best-human-players-poker/). What made this interesting is that, unlike chess or go, poker has hidden information and a theory-of-mind component in betting. The AI adapted to both of these elements and learned to call bluffs.
Title: Re: Google AI
Post by: Dr David Thork on June 25, 2018, 12:38:38 PM
Yeah my bad. I thought that it still had to be FE related.

I don't think the Google AI dominated go, but it certainly won against the world's top player. Another AI, not developed by Google, defeated a group of humans in a poker game (https://www.wired.com/2017/01/mystery-ai-just-crushed-best-human-players-poker/). What made this interesting is that, unlike chess or go, poker has hidden information and a theory-of-mind component in betting. The AI adapted to both of these elements and learned to call bluffs.
I've actually written poker algorithms before. I wrote one for Texas Hold'em based on the probability of winning any hand given what's in my hand, what's on the table, what others could conceivably be holding, what I might draw on the river, what they might get, etc. (odds for each stage). The problem is, whilst it worked and could say "you have a 45% chance of winning this hand over the other 4 people left in", I could never wrap my head around the maths of how much to bid to lure maximum money from people, or how to stop them folding when my algorithm predicted, say, a 97% chance of winning and so went all in. I could get it to work out whether the pot money vs the next stake was worth it to 'see' my opponent's cards, but should my algorithm raise by $4, $6, $7? I had no idea how to sort that. And it seemed how much you bid is actually more important than the cards you hold ... if you hold average cards, you need to stay in but not lose too much.

I have recently been looking at TensorFlow (from a hobbyist point of view), and it might be able to solve these problems for me and, as you say, be able to call a bluff ... which I wasn't even close to getting the maths right for. I could only say: odds in my favour, bet; not in my favour, don't bet ... binary ... and that won't beat a top player even if I know the odds. I did have position ('the gun') in my probability; the algorithm I wrote knew where on the table it was seated and calculated the odds dependent on its seat when asked to bid ... an instantaneous set of odds.
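For what it's worth, the "is the next stake worth seeing?" part does have a closed form: pot odds. A minimal sketch with hypothetical numbers (it deliberately ignores implied odds, fold equity and opponent modelling, which is exactly the part that needs something like ML):

```python
def call_ev(win_prob, pot, cost_to_call):
    """Expected value of calling: win the current pot with probability
    win_prob, otherwise lose the amount of the call. Future betting
    rounds are ignored."""
    return win_prob * pot - (1 - win_prob) * cost_to_call

def breakeven_prob(pot, cost_to_call):
    """Minimum win probability at which calling breaks even."""
    return cost_to_call / (pot + cost_to_call)

# Hypothetical spot: $20 pot, $5 to call.
print(breakeven_prob(20, 5))   # 0.2 -> need to win more than 20% of the time
print(call_ev(0.45, 20, 5))    # 6.25 -> at 45% this call is clearly +EV
```

Sizing the raise is the genuinely hard part, as the post says: it depends on how opponents react to the bet, not just on the cards.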

But I won't be using tensorflow for this ... gambling sites are already onto this and now actively hunt down signs of machine learning. That window has passed.  :)


Machine learning would be the ultimate answer to earth's shape. Not even Tom Bishop would argue, because it is based on observable science. You don't give ML any assumptions. You just feed it data and it iterates repeatedly until it finds the answer. The problem with ML in today's form is that whilst we'd end up knowing what shape the earth is, we'd have no idea how the machine came to that conclusion; we'd only know it is right. Much like we have no idea how Google's AI plays chess. It just does it.
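"Feed it data and let it iterate until it finds the answer" can be shown in miniature with the simplest possible learner: gradient descent recovering one unknown coefficient from synthetic observations. A toy sketch (pure Python; the data and model are invented for illustration, and a real attempt would use something like TensorFlow on actual measurements):

```python
# Synthetic observations generated from y = a * x^2 with a = 0.5,
# the "truth" the learner is never told.
data = [(x, 0.5 * x * x) for x in range(1, 6)]

a = 0.0        # initial guess
lr = 0.001     # learning rate
for _ in range(5000):
    # gradient of sum((a*x^2 - y)^2) with respect to a
    grad = sum(2 * (a * x * x - y) * x * x for x, y in data)
    a -= lr * grad

print(round(a, 4))  # converges to 0.5
```

The machine ends up with the right coefficient, but nothing in the loop explains *why* 0.5 is right, which is the "great ignorance" worry in a nutshell.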

I have a theory that ML will actually cause a 'great ignorance'. Let's say you got ML to start predicting the weather. It would just look at all the data from ocean buoys, airport reports, temperature, pressure, visibility, dew point, etc., and it would work out the weather ... and it would be far more accurate than anything we have today. Maybe we'd end up with a 30-day forecast. People at the weather service would abandon trying to predict the weather; the machine does it better, but no one knows how it does it. So you'd have a meteorological office filled with people who could write machine learning code, and no one who actually knew how to predict the weather from the data themselves, as no one is employed to do that. The science would grind to a halt. There is no point learning a solved problem; weather isn't predicted that way any more.
Spread this across multiple industries such as medical cures, accountancy, logistics, etc. ... no one would have any knowledge or skill whatsoever. But I guess this is why people think AI will kill jobs for billions of people. We'll all be dumb, unskilled and unneeded.
Title: Re: Google AI
Post by: Rama Set on June 25, 2018, 01:04:06 PM
Yeah my bad. I thought that it still had to be FE related.

I don't think the Google AI dominated go, but it certainly won against the world's top player. Another AI, not developed by Google, defeated a group of humans in a poker game (https://www.wired.com/2017/01/mystery-ai-just-crushed-best-human-players-poker/). What made this interesting is that, unlike chess or go, poker has hidden information and a theory-of-mind component in betting. The AI adapted to both of these elements and learned to call bluffs.
I've actually written poker algorithms before. I wrote one for Texas Hold'em based on the probability of winning any hand given what's in my hand, what's on the table, what others could conceivably be holding, what I might draw on the river, what they might get, etc. (odds for each stage). The problem is, whilst it worked and could say "you have a 45% chance of winning this hand over the other 4 people left in", I could never wrap my head around the maths of how much to bid to lure maximum money from people, or how to stop them folding when my algorithm predicted, say, a 97% chance of winning and so went all in. I could get it to work out whether the pot money vs the next stake was worth it to 'see' my opponent's cards, but should my algorithm raise by $4, $6, $7? I had no idea how to sort that. And it seemed how much you bid is actually more important than the cards you hold ... if you hold average cards, you need to stay in but not lose too much.

I have recently been looking at TensorFlow (from a hobbyist point of view), and it might be able to solve these problems for me and, as you say, be able to call a bluff ... which I wasn't even close to getting the maths right for. I could only say: odds in my favour, bet; not in my favour, don't bet ... binary ... and that won't beat a top player even if I know the odds. I did have position ('the gun') in my probability; the algorithm I wrote knew where on the table it was seated and calculated the odds dependent on its seat when asked to bid ... an instantaneous set of odds.

But I won't be using tensorflow for this ... gambling sites are already onto this and now actively hunt down signs of machine learning. That window has passed.  :)


Machine learning would be the ultimate answer to earth's shape. Not even Tom Bishop would argue, because it is based on observable science. You don't give ML any assumptions. You just feed it data and it iterates repeatedly until it finds the answer. The problem with ML in today's form is that whilst we'd end up knowing what shape the earth is, we'd have no idea how the machine came to that conclusion; we'd only know it is right. Much like we have no idea how Google's AI plays chess. It just does it.

There recently was a formal debate between an AI and humans (https://www.theguardian.com/technology/2018/jun/18/artificial-intelligence-ibm-debate-project-debater), and the AI provided substantive data to back up its conclusions.  If you borrowed elements from this you could also get data on how the AI came to the conclusion.
Title: Re: Google AI
Post by: Dr David Thork on June 25, 2018, 01:06:17 PM
There recently was a formal debate between an AI and humans (https://www.theguardian.com/technology/2018/jun/18/artificial-intelligence-ibm-debate-project-debater), and the AI provided substantive data to back up its conclusions.  If you borrowed elements from this you could also get data on how the AI came to the conclusion.
I hadn't seen that. It's an obvious thing to want to know how the answer was arrived at. That needs building into all AI.

I'll bet that is how they'll make money. ML frameworks like TensorFlow are free, so you can find the answer to anything you want. But if you want to know how the machine arrived at that answer, so you can understand it too, that's the premium module. That's what I'd do.  ;)
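Crude versions of that "how did it decide" module already exist in the open, e.g. permutation importance: shuffle one input feature and measure how much the model's accuracy drops. A minimal pure-Python sketch against a made-up model (the model, rows and labels here are all hypothetical):

```python
import random

def permutation_importance(model, rows, labels, feature_idx, trials=20):
    """Average drop in accuracy when one feature column is shuffled.
    model is any callable row -> prediction; a larger drop means the
    model leaned on that feature more."""
    def accuracy(rs):
        return sum(model(r) == y for r, y in zip(rs, labels)) / len(labels)

    base = accuracy(rows)
    drops = []
    for _ in range(trials):
        column = [r[feature_idx] for r in rows]
        random.shuffle(column)
        shuffled = [r[:feature_idx] + (v,) + r[feature_idx + 1:]
                    for r, v in zip(rows, column)]
        drops.append(base - accuracy(shuffled))
    return sum(drops) / trials

# Made-up model that only ever looks at feature 0.
def model(row):
    return row[0] > 5

rows = [(x, random.random()) for x in range(10)]
labels = [r[0] > 5 for r in rows]
print(permutation_importance(model, rows, labels, 0))  # clearly positive
print(permutation_importance(model, rows, labels, 1))  # 0.0, it's ignored
```

It tells you *which* inputs mattered, not the full reasoning, but it's a start on the transparency the post is asking for.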

I don't look forward to the first AI bot joining this forum to debate earth's shape.  :(
Title: Re: Google AI
Post by: Rama Set on June 25, 2018, 06:22:54 PM
There recently was a formal debate between an AI and humans (https://www.theguardian.com/technology/2018/jun/18/artificial-intelligence-ibm-debate-project-debater), and the AI provided substantive data to back up its conclusions.  If you borrowed elements from this you could also get data on how the AI came to the conclusion.
I hadn't seen that. It's an obvious thing to want to know how the answer was arrived at. That needs building into all AI.

I'll bet that is how they'll make money. ML frameworks like TensorFlow are free, so you can find the answer to anything you want. But if you want to know how the machine arrived at that answer, so you can understand it too, that's the premium module. That's what I'd do.  ;)

I don't look forward to the first AI bot joining this forum to debate earth's shape.  :(

Tom's been here for years  ??? Jokes aside, AI is one of the things that makes me nervous about the future, not because of Skynet, but because it encourages humans to reduce their agency over their own affairs.
Title: Re: Google AI
Post by: douglips on June 26, 2018, 03:30:17 AM

Machine learning would be the ultimate answer to earth's shape. Not even Tom Bishop would argue, because it is based on observable science. You don't give ML any assumptions. You just feed it data and it iterates repeatedly until it finds the answer. The problem with ML in today's form is that whilst we'd end up knowing what shape the earth is, we'd have no idea how the machine came to that conclusion; we'd only know it is right. Much like we have no idea how Google's AI plays chess. It just does it.


Why would you think this? When humans look at the evidence and show proof that the Earth is round, you discount the evidence. When AI comes along, are you going to withhold from it pictures from space? Will you tell it that there are no power lines over Lake Pontchartrain? That the sun doesn't move across the sky at a constant rate of 15 degrees per hour?

I see no reason why flat Earth adherents couldn't either insist on omitting large portions of relevant evidence, or simply dismiss outright any conclusions that differ from their preconceived flat Earth model.
Title: Re: Google AI
Post by: Dr David Thork on June 26, 2018, 01:38:31 PM
You'd just need to agree on a dataset. Something everyone knows to be true and can prove themselves. Then let the computer iterate away, extrapolate, and turn it into a defined shape.
Title: Re: Google AI
Post by: Max_Almond on June 26, 2018, 01:49:01 PM
There recently was a formal debate between an AI and humans (https://www.theguardian.com/technology/2018/jun/18/artificial-intelligence-ibm-debate-project-debater), and the AI provided substantive data to back up its conclusions.  If you borrowed elements from this you could also get data on how the AI came to the conclusion.

Substantive data, you say?

Well that proves that -

Nah, no need to finish that sentence. ;)
Title: Re: Google AI
Post by: Curious Squirrel on June 26, 2018, 01:53:41 PM
You'd just need to agree on a dataset. Something everyone knows to be true and can prove themselves. Then let the computer iterate away, extrapolate, and turn it into a defined shape.
Seeing as we can't even agree on that here on the forums, this seems like one of those 'easier said than done' things to me. It would be fascinating to see what one might turn into if it was just set loose on these forums for a while, though. What conclusions it might come to, with as little in the way of biases as possible.
Title: Re: Google AI
Post by: totallackey on June 26, 2018, 02:00:11 PM
This simply cements the reality that this forum and the other FE site are mostly AI bots.
Title: Re: Google AI
Post by: Rama Set on June 26, 2018, 03:05:24 PM
There recently was a formal debate between an AI and humans (https://www.theguardian.com/technology/2018/jun/18/artificial-intelligence-ibm-debate-project-debater), and the AI provided substantive data to back up its conclusions.  If you borrowed elements from this you could also get data on how the AI came to the conclusion.

Substantive data, you say?

Well that proves that -

Nah, no need to finish that sentence. ;)

I wasn't trying to prove anything. What do you mean?
Title: Re: Google AI
Post by: Round Eyes on June 26, 2018, 04:10:45 PM
You'd just need to agree on a dataset. Something everyone knows to be true and can prove themselves. Then let the computer iterate away, extrapolate, and turn it into a defined shape.

It should be possible to input all known flights into this system, and it could then draw a true map by triangulation from the data set?
Title: Re: Google AI
Post by: Dr David Thork on June 26, 2018, 08:15:38 PM
You'd just need to agree on a dataset. Something everyone knows to be true and can prove themselves. Then let the computer iterate away, extrapolate, and turn it into a defined shape.

It should be possible to input all known flights into this system, and it could then draw a true map by triangulation from the data set?
No, aircraft don't fly direct. They go via waypoints and beacons, they go round in circles in holding patterns, wind affects them, and temperature, pressure, number of passengers and fuel also affect cruising speed, as does the centre of gravity, which is load-dependent (where the passengers sit). This would be a horrible way to work it out.
Title: Re: Google AI
Post by: Rama Set on June 26, 2018, 09:09:24 PM
You'd just need to agree on a dataset. Something everyone knows to be true and can prove themselves. Then let the computer iterate away, extrapolate, and turn it into a defined shape.

It should be possible to input all known flights into this system, and it could then draw a true map by triangulation from the data set?
No, aircraft don't fly direct. They go via waypoints and beacons, they go round in circles in holding patterns, wind affects them, and temperature, pressure, number of passengers and fuel also affect cruising speed, as does the centre of gravity, which is load-dependent (where the passengers sit). This would be a horrible way to work it out.

If the AI had all that data, it should eventually be able to model the space the aircraft fly in. With all those variables it seems like it would be pretty definitive too, as there are likely very few solutions that are both intelligible and fit the data.
Title: Re: Google AI
Post by: douglips on June 27, 2018, 05:38:09 AM
You'd just need to agree on a dataset. Something everyone knows to be true and can prove themselves. Then let the computer iterate away, extrapolate, and turn it into a defined shape.

Humans did that, starting 2000 years ago. Why do you think an AI would come to a different conclusion in, what, a month, than humans have come to in thousands of years? Computers are good at solving problems faster than humans, but when humans have a multi-millennia head start, how could it come to a different conclusion?

You will need to fight tooth and nail to exclude from the AI's consideration all the evidence you wish to discount in our human discussions. It's the exact same problem this forum has been hashing out. AI isn't magic; it can't possibly solve this problem any better than humans already have.
Title: Re: Google AI
Post by: Tumeni on June 27, 2018, 01:53:39 PM
... aircraft don't fly direct. They go via waypoints and beacons, they go round in circles in holding patterns, wind affects them, and temperature, pressure, number of passengers and fuel also affect cruising speed, as does the centre of gravity, which is load-dependent (where the passengers sit).

(slightly off-topic, I know, but ...)

... why don't you present this argument to anyone who claims the ISS is really a high-altitude plane? All of this is sure proof that the ISS, with its regular flight path, absolutely perfect orbit timing, etc., CANNOT be a plane.
Title: Re: Google AI
Post by: Dr David Thork on June 27, 2018, 06:49:48 PM
... aircraft don't fly direct. They go via waypoints and beacons, they go round in circles in holding patterns, wind affects them, and temperature, pressure, number of passengers and fuel also affect cruising speed, as does the centre of gravity, which is load-dependent (where the passengers sit).

(slightly off-topic, I know, but ...)

... why don't you present this argument to anyone who claims the ISS is really a high-altitude plane? All of this is sure proof that the ISS, with its regular flight path, absolutely perfect orbit timing, etc., CANNOT be a plane.
It can't be a passenger plane like a 737. But high-altitude military planes don't take off from commercial airports; they don't follow departure routes or arrival plates, and they aren't made to wait in holds. When you see a jumbo jet full of passengers leave an airport like this ...

https://www.youtube.com/watch?v=lsr_WFmhHow

then come back to me.
Title: Re: Google AI
Post by: Tumeni on June 27, 2018, 07:11:53 PM
When you have some proof that the ISS is any kind of military plane, then come back to me.

From my personal observations, it cannot be a plane. Planes don't behave like that.
Title: Re: Google AI
Post by: Dr David Thork on June 27, 2018, 08:01:04 PM
When you have some proof that the ISS is any kind of military plane, then come back to me.

From my personal observations, it cannot be a plane. Planes don't behave like that.
How do you know how secret military planes behave? No one knew how the SR-71 behaved until AFTER it had been decommissioned. Same with the Hope Diamond.

https://www.youtube.com/watch?v=cUf9xrn6SeM
Title: Re: Google AI
Post by: Tumeni on June 29, 2018, 10:14:07 AM
When you have some proof that the ISS is any kind of military plane, then come back to me.

From my personal observations, it cannot be a plane. Planes don't behave like that.
How do you know how secret military planes behave?

I've seen the ISS, on more than one occasion, cross my sky twice in the same evening. It did so exactly as predicted for me by in-the-sky.org, it went into Earth's shadow as predicted by the app, and went in the same direction each time. It shows no deviation from wind, weather or such, it shows no vapour, propellant or other trail behind it.

I know that planes cannot cross my sky at one time, then cross my sky again in the same direction, 90 mins later, without doing one of two things
A- going around our planet in 90mins at approx 17k mph (Mach 16+), or
B- changing direction to go back to the starting point in my sky.

They're not putting on a display specifically for me, they don't know where I am when observing, so we have to rule out option B.   Else, how would they know when to turn?

Also, other observers are seeing the ISS go in the same direction, nobody sees it change direction, nobody sees it go East - West, so we can further rule out B from that.

Again, how would the pilot know when to turn, such that they were out of sight for the observer / all observers, and able to turn back to the starting point?

The only sensible option is A.

You're just speculating - maybe a plane could do this, maybe that. Have you any proof that a plane is actually doing this?
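Option A's figure is a quick sanity check, assuming the conventional values (mean Earth radius about 6,371 km, ISS altitude about 400 km, orbital period about 92 minutes):

```python
import math

earth_radius_km = 6371   # conventional mean radius
altitude_km = 400        # approximate ISS altitude
period_min = 92          # approximate orbital period

circumference_km = 2 * math.pi * (earth_radius_km + altitude_km)
speed_kmh = circumference_km / (period_min / 60)
speed_mph = speed_kmh / 1.609344

print(round(circumference_km))  # about 42,500 km per orbit
print(round(speed_mph))         # about 17,200 mph, i.e. "approx 17k mph"
```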
Title: Re: Google AI
Post by: Dr David Thork on June 29, 2018, 02:10:57 PM
I've seen the ISS, on more than one occasion, cross my sky twice in the same evening.
Who says you saw the same vehicle? You saw TWO instances of a machine pass overhead. Why do you make the immediate inference they must be the same machine? If I saw a train go past me on the platform at my local train station, and then another identical one go past 10 mins later in the same direction, I wouldn't leap to the conclusion it was the same train. I'd assume it was two trains run by the same company with the same paint job on them.

It did so exactly as predicted for me by in-the-sky.org,
My regional train company does that with trains.

it went into Earth's shadow as predicted by the app, and went in the same direction each time. It shows no deviation from wind, weather or such, it shows no vapour, propellant or other trail behind it.
Again, how do you know what type of propulsion secret military aircraft use?

I know that planes cannot cross my sky at one time, then cross my sky again in the same direction, 90 mins later, without doing one of two things
A- going around our planet in 90mins at approx 17k mph (Mach 16+), or
B- changing direction to go back to the starting point in my sky.
Or C, its not the exact same aircraft.

They're not putting on a display specifically for me, they don't know where I am when observing, so we have to rule out option B.   Else, how would they know when to turn?
Why would they turn? Fly straight, get from edge to edge. Fly a different route home, pretending to be any other 'satellite' on a different 'orbit' (route) going a different direction.

Also, other observers are seeing the ISS go in the same direction, nobody sees it change direction, nobody sees it go East - West, so we can further rule out B from that.
How do you even know what you are seeing? If I projected a hologram onto a glass-like firmament, you'd see whatever shape I decided to put there. This is even easier than multiple vehicles. I just point a powerful light source at the firmament and you'll see whatever I show you. And I can turn the light off whenever I want a 'shadow of the earth'.

Again, how would the pilot know when to turn, such that they were out of sight for the observer / all observers, and able to turn back to the starting point?
Pilot? We live in an age of UAVs. Why the hell would I want a pilot? He needs life support, oxygen, warmth, pressurisation, instrumentation, knobs and dials, ejection facilities ... he's a pain in the backside. I'm going to replace him with a Pentium processor.

The only sensible option is A.
Or C ... multiple UAVs ... or better yet, D ... holographic projection.

You're just speculating - maybe a plane could do this, maybe that. Have you any proof that a plane is actually doing this?
You're just speculating that you are being told the truth.
Title: Re: Google AI
Post by: Curious Squirrel on June 29, 2018, 02:58:44 PM
You're just speculating - maybe a plane could do this, maybe that. Have you any proof that a plane is actually doing this?
You're just speculating that you are being told the truth.
The information given concurs with the information seen/experienced personally. So either A. They're executing the deception so perfectly, that one can't find a flaw in it. Or B. They're actually doing it.

Granted, whichever side you choose to go with, one can certainly claim it's entirely speculation. But at least my personal experience agrees with the information presented. At that point, what reason is there to be seriously in favour of option A? Other than deciding that they can't be trusted for any reason whatsoever, or similar ideology, I suppose. Actually curious here. Disregarding anything else, if all of a person's experiences with said object exactly reflect as much of the publicly given information as can be personally verified (arc speed across the sky, shape, timing), then what logical reason is there to seriously doubt the rest?
Title: Re: Google AI
Post by: Dr David Thork on June 29, 2018, 03:18:00 PM
About 6 months ago, I went to the mall. You'll never guess who was there. Santa!
Let's examine the evidence presented.

I saw him with my own eyes.
He was giving out presents.
The mall was advertising meetings with Santa.
He had a sleigh.
There were elves.

Are you trying to tell me Santa was in on it, all the elves were in on it, the parents of all the children were in on it, the mall owners were in on it and Santa's sponsor Vodafone was in on it too? All just to fool me and the children? What possible motivation could there be for this?

Let's follow your line of thinking ...

The information given concurs with the information seen/experienced personally.
Saw him with my own eyes. Red suit, beard, giving out presents. Check.

So either A. They're executing the deception so perfectly, that one can't find a flaw in it. Or B. They're actually doing it.
Agreed. A or B.

Granted, whichever side you choose to go with one can certainly state/claim it's entirely speculation.
Erm, ok. I can only speculate it isn't Santa. You're losing me here, but OK.

But at least my personal experience agrees with the information presented. At that point, what reason is there to be seriously in favor of option A?
You tell me. Why do parents lie to children, mall workers lie to children, sponsors lie to children, and people dressed up like idiots lie to children? More interesting question: if you didn't know their exact motivation for doing this ... does it make it any more probable that I met Santa? And are the motivations of all the actors (the parents and Vodafone, for example) the same? Why would they collude against me and the children?

Other than deciding that they can't be trusted for any reason whatsoever, or similar ideology I suppose.
So children should trust their parents, trust the mall, trust the sponsors and trust the idiots in the green and red suits ... just like you trust the government.

Actually curious here. Disregarding anything else, if all of a persons experiences with said object exactly reflect as much of the publicly given information as can be personally verified then what logical reason is there to be in serious doubt of the rest?
Children across the world get to meet Santa in a mall ... they all experience it ... it has huge publicity ... talk of Santa happens on TV, radio and in books; it's part of the zeitgeist.

What logical reason should I doubt the existence of Santa? Or am I just another conspiracy theory nut?
Title: Re: Google AI
Post by: Curious Squirrel on June 29, 2018, 04:16:04 PM
About 6 months ago, I went to the mall. You'll never guess who was there. Santa!
Let's examine the evidence presented.

I saw him with my own eyes.
He was giving out presents.
The mall was advertising meetings with Santa.
He had a sleigh.
There were elves.

Are you trying to tell me Santa was in on it, all the elves were in on it, the parents of all the children were in on it, the mall owners were in on it and Santa's sponsor Vodafone was in on it too? All just to fool me and the children? What possible motivation could there be for this?

Let's follow your line of thinking ...

The information given concurs with the information seen/experienced personally.
Saw him with my own eyes. Red suit, beard, giving out presents. Check.

So either A. They're executing the deception so perfectly, that one can't find a flaw in it. Or B. They're actually doing it.
Agreed. A or B.

Granted, whichever side you choose to go with one can certainly state/claim it's entirely speculation.
Erm, ok. I can only speculate it isn't Santa. You're losing me here, but OK.

But at least my personal experience agrees with the information presented. At that point, what reason is there to be seriously in favor of option A?
You tell me. Why do parents lie to children, mall workers lie to children, sponsors lie to children, and people dressed up like idiots lie to children? More interesting question: if you didn't know their exact motivation for doing this ... does it make it any more probable that I met Santa? And are the motivations of all the actors (the parents and Vodafone, for example) the same? Why would they collude against me and the children?

Other than deciding that they can't be trusted for any reason whatsoever, or similar ideology I suppose.
So children should trust their parents, trust the mall, trust the sponsors and trust the idiots in the green and red suits ... just like you trust the government.

Actually curious here. Disregarding anything else, if all of a person's experiences with said object exactly reflect as much of the publicly given information as can be personally verified, then what logical reason is there to be in serious doubt of the rest?
Children across the world get to meet Santa in a mall ... they all experience it ... it has huge publicity ... talk of Santa happens on TV, radio, books, it's part of the zeitgeist.

What logical reason is there for me to doubt the existence of Santa? Or am I just another conspiracy theory nut?
Yet you're missing an important piece of this one that you CAN personally experience/test (or not experience as the case may be). Presents appearing under the tree with no one having purchased them, labeled as 'From: Santa'. You've made a poor analogy because of it. Now, do you have an actual personal experience or test that can be done that sheds doubt upon the story of the ISS? Or just poorly crafted attempts to make my reasoning look 'bad' for some reason instead of answering the question?
Title: Re: Google AI
Post by: Dr David Thork on June 29, 2018, 04:53:34 PM
Yet you're missing an important piece of this one that you CAN personally experience/test (or not experience as the case may be). Presents appearing under the tree with no one having purchased them, labeled as 'From: Santa'.
And this happens. Ask any 4 year old. That's the point of the Santa thought experiment.

Assume you are 4 years old. You know only what other 4 year olds know. You aren't privy to adult information (analogous to government info). There are people who want to lie to you. In fact, your own parents are doing it. And they do it for two reasons.

1) Their own perverse pleasure because they think your ignorance is adorable ... I'm guessing governments can empathise with this
2) Because they think it makes your life more enjoyable not to know the truth and they know better than you ... I'm guessing governments can empathise with this too.

Now, what happens when you learn the truth? Do you get as many xmas presents as you get older? Do you get taken to get free gifts in the mall? Do your parents continue to make as much effort for xmas? Do you benefit from knowing the truth, or are you actually punished for it? What does a government do when you find out the truth about something ... do they reward you?

The Santa thought experiment is perfect because you've been red-pilled. You are the other side of the conspiracy. You've seen both sides. Do you go around telling small children that Santa doesn't exist ... or do you become complicit in the lie? And would you prefer to be 40 years old and still believe in Santa? It means more presents. There is no upside to knowing. The mentally handicapped still get trips to sit on Santa's knee. They still get those presents. It only stops once you know the truth.

So, how did you break free of the Santa delusion? Maybe an older sibling told you? Maybe a friend at school? And did you believe the first person who told you? It is doubtful. But after a while you became suspicious. You dug more and more ... and eventually, despite almost everyone telling you Santa exists ... those little voices of doubt ate away until you put enough of the pieces together and chose to no longer believe the fantastical stories you grew up with. Welcome to TFES. We're your older sibling. A man delivering presents all over the world on a sleigh is every bit as ridiculous as a man in a tin foil space ship walking on the moon, or a machine travelling at 17,500 mph full of scientists.


Or just poorly crafted attempts to make my reasoning look 'bad' for some reason instead of answering the question?
There is no point in me whispering "Santa doesn't exist" if you are 4 years old. It will only make you cry. But as you become more sceptical, say aged maybe 7 or 8 ... then you are ready to listen. Right now you are behaving like a 4 year old. Stop crying and start investigating.
Title: Re: Google AI
Post by: douglips on June 29, 2018, 08:56:42 PM

How do you even know what you are seeing? If I projected a hologram onto a glass-like firmament, you'd see whatever shape I decided to put there. This is even easier than multiple vehicles. I just point a powerful light source at the firmament and you'll see whatever I show you. And I can turn the light off whenever I want a 'shadow of the earth'.


I'm curious how tall you are willing to build this tower of ad hoc fallacies.

How can you use lights to project a shadow on the face of the moon?

http://www.amateurastrophotography.com/how-to-see-the-iss-transit/4593536074
Title: Re: Google AI
Post by: Dr David Thork on June 29, 2018, 09:42:56 PM
Very possible.

Quote from: http://www.printmag.com/article/moonstruck/
I was reminded of this ten years ago when articles started coming out about how a Coca-Cola executive named Steve Koonin had conceived a plan to use NASA laser technology to shoot colored beams into space in order to form the Coke logo on the lunar surface just in time for the Times Square Ball to drop. Shot down by the FAA, who pointed out that the lasers just might cut airplanes in half, Koonin reluctantly shelved the idea.

It is called moonvertising and it is illegal. But look who has the technology .... Not just any old laser company.


How is this
(http://wwwcdn.printmag.com/wp-content/uploads/3.-rolling-rock.jpg)
harder than
(http://www.amateurastrophotography.com/communities/8/004/013/518/158//images/4631190500_476x351.jpg)

That tech is 20 years old. Coke were trying in 1999 to ping lasers off the moon.

Title: Re: Google AI
Post by: Curious Squirrel on June 29, 2018, 10:32:58 PM
Yet you're missing an important piece of this one that you CAN personally experience/test (or not experience as the case may be). Presents appearing under the tree with no one having purchased them, labeled as 'From: Santa'.
And this happens. Ask any 4 year old. That's the point of the Santa thought experiment.

Assume you are 4 years old. You know only what other 4 year olds know. You aren't privy to adult information (analogous to government info). There are people who want to lie to you. In fact, your own parents are doing it. And they do it for two reasons.

1) Their own perverse pleasure because they think your ignorance is adorable ... I'm guessing governments can empathise with this
2) Because they think it makes your life more enjoyable not to know the truth and they know better than you ... I'm guessing governments can empathise with this too.

Now, what happens when you learn the truth? Do you get as many xmas presents as you get older? Do you get taken to get free gifts in the mall? Do your parents continue to make as much effort for xmas? Do you benefit from knowing the truth, or are you actually punished for it? What does a government do when you find out the truth about something ... do they reward you?

The Santa thought experiment is perfect because you've been red-pilled. You are the other side of the conspiracy. You've seen both sides. Do you go around telling small children that Santa doesn't exist ... or do you become complicit in the lie? And would you prefer to be 40 years old and still believe in Santa? It means more presents. There is no upside to knowing. The mentally handicapped still get trips to sit on Santa's knee. They still get those presents. It only stops once you know the truth.

So, how did you break free of the Santa delusion? Maybe an older sibling told you? Maybe a friend at school? And did you believe the first person who told you? It is doubtful. But after a while you became suspicious. You dug more and more ... and eventually, despite almost everyone telling you Santa exists ... those little voices of doubt ate away until you put enough of the pieces together and chose to no longer believe the fantastical stories you grew up with. Welcome to TFES. We're your older sibling. A man delivering presents all over the world on a sleigh is every bit as ridiculous as a man in a tin foil space ship walking on the moon, or a machine travelling at 17,500 mph full of scientists.


Or just poorly crafted attempts to make my reasoning look 'bad' for some reason instead of answering the question?
There is no point in me whispering "Santa doesn't exist" if you are 4 years old. It will only make you cry. But as you become more sceptical, say aged maybe 7 or 8 ... then you are ready to listen. Right now you are behaving like a 4 year old. Stop crying and start investigating.
Yet you haven't presented a single bit of personally verifiable evidence to check on for this. I've done my investigating. I've explored the options. You don't have facts and images of any sort on your side (that I have ever seen); all you have are ad hoc ideas on how something *could* maybe work. We've brought up the solar power plane before. Why do any of us know about this if it's being used to fake the ISS? You have yet to present a valid reason beyond, essentially, 'it's to strengthen the conspiraceh!' From where I sit, there is none. To be honest, the condescension involved in your tale doesn't help your case either. But I doubt either of us is going to get anywhere; it's just a bit of a shame you have little more to offer past the tired old 'look around and see!' that's been the refrain on these things for years. If I hadn't done my investigating, would I be attempting to discuss such a thing in such an out of the way place as the FE forums for so long? Doubtful. But I suppose that's neither here nor there.
Title: Re: Google AI
Post by: douglips on June 30, 2018, 12:15:42 AM
Very possible.

Quote from: http://www.printmag.com/article/moonstruck/
I was reminded of this ten years ago when articles started coming out about how a Coca-Cola executive named Steve Koonin had conceived a plan to use NASA laser technology to shoot colored beams into space in order to form the Coke logo on the lunar surface just in time for the Times Square Ball to drop. Shot down by the FAA, who pointed out that the lasers just might cut airplanes in half, Koonin reluctantly shelved the idea.

It is called moonvertising and it is illegal. But look who has the technology .... Not just any old laser company.


How is this
(http://wwwcdn.printmag.com/wp-content/uploads/3.-rolling-rock.jpg)
harder than
(http://www.amateurastrophotography.com/communities/8/004/013/518/158//images/4631190500_476x351.jpg)

That tech is 20 years old. Coke were trying in 1999 to ping lasers off the moon.

How it's harder is that there is no such thing as a black laser. I know there is a thing called a blacklight, but you know that doesn't actually shine darkness, right? What technology are you aware of that shines a beam of darkness?

Title: Re: Google AI
Post by: Dr David Thork on June 30, 2018, 01:21:13 AM
How do you know they aren't drawing the rest of the moon and leaving a little dark patch?

https://www.youtube.com/watch?v=4CCJxF7nIcM

Again you are also crying at me like a 4 year old. Use google, investigate the possibilities yourself. Stop asking me but what about this, then what about this, then what about this. This forum board is not for me to answer flat earth questions. And this thread was supposed to be a discourse on AI before it got perverted by people complaining about earth's shape again.
Title: Re: Google AI
Post by: douglips on June 30, 2018, 06:52:10 AM
I do apologise, you're right - I did forget the forum this is in

On the original topic, I maintain that it will be impossible for AI to solve this problem, because you will either restrict the input to the AI to the point its conclusions are useless, or it will come to the same conclusion that humans have come to after examining the same evidence.
Title: Re: Google AI
Post by: Tumeni on June 30, 2018, 10:26:19 AM
How do you know they aren't drawing the rest of the moon and leaving a little dark patch?

... because there's nothing to draw on, if that's the case. Come on, you've just suggested that the ISS was projected ONTO something, now you're suggesting the ISS is a gap in the projection. If that's the case, the projection has to be onto a Moon-sized object, else you cannot project.


Use google, investigate the possibilities yourself. Stop asking me but what about this

... but people are only responding to your theories which have no basis in fact, and which don't actually fit AT ALL with the personal observations of those discussing them with you

, then what about this, then what about this. This forum board is not for me to answer flat earth questions. And this thread was supposed to be a discourse on AI before it got perverted by people complaining about earth's shape again.

This forum is for discussion.

You're the one who suggested that a plane would be apparent by its responses to wind and weather, etc., but persists in the suggestion that the ISS could be a plane ... again, this does not fit with the observations or data. And I DID say I knew the ISS was slightly off-topic when I replied at #19.....

New thread coming, then, since you insist this one remain on AI.
Title: Re: Google AI
Post by: Round Eyes on July 06, 2018, 01:25:04 AM
... aircraft don't fly direct. They go via waypoints and beacons, they go round in circles in holding patterns, wind affects them; temp, pressure, and the number of passengers and fuel also affect cruising speed, as does the centre of gravity, which is load dependent (where the passengers sit).

(slightly off-topic, I know, but ...)

.... why don't you present this argument to anyone who claims the ISS is really a high-altitude plane? For all of this is sure proof that the ISS, with its regular flight path, absolutely perfect orbit timing, etc., CANNOT be a plane.
You can't see the difference between high-altitude planes whose sole purpose is to provide GPS data and planes used to transport passengers. You lost that debate fair and square; move on now.