A couple of CEOs (of an electric car company and a bank) recently announced things like "There's an 80% chance that we're living in a computer simulation." I'm not even going to quote the story exactly or reread it, because it's nonsense! These are non-scientists making these claims, so it's no surprise that the claims are pseudoscience at best.
The problem with this type of claim is that it's not based on any evidence, but on the logically unsound conclusion that if something observed has similarities with something known, then the observed thing must be the same as the known thing. By the same reasoning: atoms can be modelled as spheres orbiting other spheres, just like star systems and galaxies, therefore there is an 80% chance that star systems are the atoms of some larger world, because they're similar. A turtle's back looks a bit like the Earth, therefore there's an 80% chance that we live on a giant turtle. Humankind is one of the only creative things we know of, therefore everything whose origin we don't understand must have been created by a being that looks like us (i.e. that we were created in its image). And: measurements of the universe have some quantum properties, similar to computers (which we understand better), therefore the entire universe must be a computer. The computer simulation hypothesis is not so different from the god delusion, or from the plethora of crackpot theories that link any marginally similar phenomena ("my EtherParticles theory explains gravity, because gravity restricts movement away from mass, just like trying to move through a dense soup of ether particles is predicted by me to restrict movement," and so on).
If the universe is so certainly a computer simulation in some extra-universal world, then what is that world? Why would that world exist "in reality" if ours is so certainly simulated? Wouldn't it mean that it's nearly certain that that world is also a simulation in another world, and so on ad absurdum? And where is the evidence of any of that? There is exactly as much evidence of a turtle that the universe sits upon, as there is of a computer running us.
We must be careful to speak of what the evidence says, and not confuse that with what we imagine it to mean. Extra-universal turtles, universe simulators, and alternate realities where the laws of physics are anything we can imagine are all flights of fantasy. If you have a fantastic idea and want to claim it is real, you must find a way to test it. If a test tells you that the universe is similar to a computer simulation, that doesn't mean it is one. You must show that it can't be anything other than a simulation if you want to be certain that it is. And "I can't imagine anything else it could be" is not nearly adequate reasoning. In science, unknowns stay unknowns until there is testable theory to say otherwise. Ruling out everything but what we think we understand is unscientific, and outdated by a few centuries.
What test have these CEOs proposed that could indicate that the universe is a simulation? How would such a test rule out that it could be anything else?
Tuesday, September 20, 2016
Wednesday, January 20, 2016
Cheating on the Turing Test
Continuing from an earlier post...
If a system merely mimics a human, but does so consistently, it may be called intelligent, because it demonstrates intelligent behaviour. You don't need to crack it open and see if it's actually really intelligent or just behaving so, just as we can't crack open a human to see if it's really intelligent or just behaving so.
Suppose that you have a machine with a human inside, and all the machine does is copy the human's behaviour. It behaves as a human, intelligently. The machine without the human is not intelligent, but the whole system is. For example, an old telephone with a human on one end can pass the Turing test, but the telephone on its own can't.
What happens if you have a machine that brainlessly copies or transmits a human's behaviour, but is first separated from the human before it demonstrates that behaviour? Such a thing might not pass a Turing test, but it might be made to behave as a human for as long as necessary, and could be made without intelligence at all, just a behaviour copier.
That would be a poor demonstration of artificial intelligence, and I think it's similar to what today's Turing test candidates are doing. The best Turing candidates that I'm aware of essentially access huge databases of existing human responses, and derive their responses from that. It would be like a machine with thousands of humans in it, brainlessly selecting from the humans' responses. Of course, to do that with AI it needs to be at least clever or sophisticated. But still, the behaviours the AI is demonstrating were copied from a human. They're human behaviours, with the human separated from the copying machine.
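As a concrete caricature of such a candidate, here is a minimal sketch of a purely retrieval-based responder that copies stored human behaviour rather than generating any of its own. The corpus and the `reply` function are invented for illustration; real systems are far more sophisticated, but the principle is the same.

```python
# A minimal "behaviour copier": a chatbot that never generates language of
# its own, only retrieves the canned human reply whose prompt best matches
# the input. The tiny corpus stands in for a huge database of human responses.
from difflib import SequenceMatcher

CORPUS = {
    "hello": "Hi there! How are you?",
    "what is your name": "I'd rather not say.",
    "do you like music": "I love jazz, actually.",
}

def reply(prompt: str) -> str:
    """Return the stored human reply whose prompt is most similar to the input."""
    best = max(
        CORPUS,
        key=lambda p: SequenceMatcher(None, p, prompt.lower()).ratio(),
    )
    return CORPUS[best]
```

Everything such a system ever says was said by a human first; the selection step may be clever, but the behaviour on display is copied, not produced.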
Therefore it would be pointless to say such a machine reliably acted as a human. It merely transmitted the actions of humans. I do not think that beating the Turing test that way has anything to do with machine intelligence.
On the other hand, whatever argument can be made against such a machine, can probably be made against a human. Humans are literally human-copying machines, and there's no way to say that it's impossible for a human to go through life without an original thought. One might be able to merely copy what has already been done. If one complains of a machine, "that's not enough to demonstrate true intelligence", the same can be said of a human.
Sunday, January 10, 2016
Conjecture: Atoms are not entities
Edit, 2.5 months later: Sometimes I don't care if I sound like a crackpot, other times I read what I wrote and cringe. I'm like a split-personality of crackpot and anti-crackpot... the latter says that writing like in this post can be fairly useless because too much of it is vague and over-general to the point that it does not effectively communicate an idea. It merely presents an idea and then rambles around it.
tl;dr: I disagree with the statements "Matter is made up of particles; matter is made up of waves." Instead I think "Matter has properties of particles and properties of waves." The same goes for light. The distinction is 1) having those properties doesn't mean it is waves and particles, and 2) it need not have those properties all the time, in every meaningful way.
I still like the idea but the following post is content-free.
As of today, I do not believe in the existence of atoms apart from their measured properties. Specifically, I think that matter will exhibit particular properties when measured on a quantum scale, but not otherwise.
To avoid this degenerating into a purely philosophical idea, such as "nothing exists when it is not measured to exist", which probably can't be falsified, I'll qualify the idea. I think that matter can be measured to behave not as particles in certain cases, such as in macroscopic observations (everyday human interaction with most matter) [edit: this is an example of a uselessly vague idea. The macroscopic behavior of large bits of matter is consistent with it being made of particles, and there's no point to asking "yeah but what if it's not?", and no test, at least none that I've identified], interaction with light as a wave (a glass lens bends light as though it has smooth homogeneous surfaces rather than individual particles), and the behaviour of Bose-Einstein condensates (the "particles" of the matter seem to take up the entire space of the matter, and are I think not distinguishable from each other as particles).
I think that the mainstream view of this would be that matter exists as particles, that it always is made up of particles, and that those particles exhibit different behaviours depending on how they're observed. My view is that the particles are emergent and only show up as a consequence of the measurement, and are not actually there otherwise.
I've long figured this is true for light: it isn't made up of particles, but is merely quantized when measured. It doesn't "exist both as a wave and a particle"---its existence is best described in terms of conserved quantities, stuff that's always there no matter how you measure it, such as its energy; wavelike and particle-like nature is not conserved---it merely has measurable particular properties specific to certain measurements. For example, when measuring "where" some quantity of light energy is, it will be quantized into individual particular locations, but that doesn't make it necessary that the energy moved as those photons between the places where it is measured, and certainly not that "it moves as a particle through both slits of a double-slit experiment at the same time," which is something that is not measured, and is true only if the particle-like nature of light is persistent rather than emergent from measurement. I believe the particle nature of light is not persistent between measurements, and I now believe the same is true of matter.
I don't know enough to make any claims, but I think that this alternative view could be made compatible with mainstream quantum mechanics, and might let other sciences more easily harmonize with quantum mechanics if they were forced to adopt it. Roughly, any 'weirdness' of quantum mechanics is not due to inherent properties of things and reality, but just quirks of how reality may be measured [edit: this is an example of over-generalizing an idea to justify a belief. The belief does not follow logically, it's just what I want the idea to mean]. If the particle nature of matter displays weird properties when measured one way vs. another, such nature and weirdness are not aspects of the matter independent of the measurements.
Friday, June 26, 2015
Interstellar Is a Terrible Movie. Matthew McConaughey is terrible.
Famous physicist Kip Thorne is a producer on Interstellar, and worked to ensure that nothing in the film disobeys accepted laws of science, and among other things that the black hole visuals were based on sciencey equations. That's great, but there were not nearly enough awe-inspiring scenes of science to save the film.
Kip Thorne probably knows more about relativity than I know about anything. In his book The Science of Interstellar, he writes
I suggested [...] two guidelines for the science of Interstellar:
1. Nothing in the film will violate firmly established laws of physics, or our firmly established knowledge of the universe.
2. Speculations (often wild) about ill-understood physical laws and the universe will spring from real science, from ideas that at least some “respectable” scientists regard as possible.
However, there is a great divide between what can be predicted by physical laws, and what silly speculations technically avoid disobeying them. Interstellar does nothing to separate what is science from what is fantasy that might not yet be known to be impossible. I don't think Interstellar could be called a science movie. It's not even nearly plausible science fiction. At best it is science fantasy that is speculatively not proven to be completely impossible according to "some 'respectable' scientists".
SPOILER WARNING...
That's what made the film not great. Here are some of the things that made it bad:
- The whole "average farmer/world's greatest pilot gets to fly the space shuttle and save the planet" trope. Why go from "even his kids' teachers don't respect him" to "he's flying manoeuvres that none of the scientists or flight computers knew were possible," over the length of the film? Why does there have to be a single space cowboy superhero who outdoes everyone else in existence and is constantly impressing everyone (and us!) by doing the implausible? In a real scientific movie, such as Apollo 13, great feats were pulled off by teams of cowboys and engineers, with no super-human individuals, but achieving superhuman greatness by all working together as a sum of parts. Why do we need the unlikely "every human is useless except the chosen one" crap?
- Matthew McConaughey. Props to the filmmakers for hiring someone with such a disabling speech impediment, but anyone else would have been a better choice.
- Square robots. Someone really liked the shape of the 2001: A Space Odyssey monoliths, but for robots and TV panel displays they are entirely inconvenient. The robots are awkward and ungainly. Worse, where the filmmakers could have shown off how a well-designed robot might adapt to handle different situations, they instead show off how a poorly designed robot might be (implausibly!) forced to do so, and in doing so become a superhero too, eg. conveniently forming a self-propelled paddle wheel out of its inconvenient big-metal-box components.
- "Dudes, let's surf this gravitational wave!" The science of black holes and junk is fascinating on its own. It doesn't have to be turned into an adventure sport to hold our interest.
Overall I give it a 4/10 thumbs up and would recommend watching it for the visuals and the rare moments of interesting science. If you are not able to easily switch your brain off for the rest of the film to enjoy it despite the dumbness, then I'd avoid this film because it might cause brain damage.
Tuesday, June 10, 2014
Turing Test
The Turing test is an historical milestone goal in artificial intelligence, whereby a machine that passes is able to converse with someone and be indistinguishable from a human. In lay and pop science it is viewed by many as the single defining achievement of AI, but is seen as a distraction by many in the field. An example argument against the Turing test is that human communication isn't the only aspect of intelligence that exists, and reasoning and awareness without modern language could still indicate intelligence. An example argument in favour of it is that a system's intelligence is something that is measured in terms of its behaviour, and if its behaviour is indistinguishable from a known system defined as intelligent, then it is by definition measurably intelligent.
In my opinion, the Turing test isn't a test of intelligence at all, but of the ability to mimic intelligence. For that reason alone I think it does more harm than good, steering AI work toward robust scripted responses instead of problem solving, cognition, thinking---the "hard" AI.
We're still in the infancy of AI, and it hasn't progressed as quickly as we once imagined (think of HAL 9000, envisioned for the year 2001). Imagine if, before airplanes were invented, someone simply declared that the pinnacle of aircraft design would be a person flying between New York and Paris. That is arbitrary and does not directly evaluate design. The Turing test is like this. It has probably endured because we haven't developed a truly intelligent system yet, and don't even know what one will end up looking like. A test of intelligence will evolve along with the technology, and we're just not there yet.
What might be a better test of artificial intelligence? I think that a more interesting milestone will be reached when an AI, instead of convincing a human that it is a person, is able to convince itself that it is. Surely a system that can think it is intelligent, is?
But then there is the problem that the easier it is to trick a system, the less indicative of intelligence it must be. We could not simply write a program that mimics the belief of introspective intelligence. And then again, how do we evaluate whether any system is mimicking belief, or truly believes? How do we do this when we do not even understand the process in humans? How can we be sure that our own thoughts are not just the product of patterns, of mimicking past thought processes? In that sense, mimicking a person well enough might be a sufficient test of intelligence. If a being (human or machine) convincingly argues that it is thinking or is conscious, and we're unable to probe it to tell if it is just saying so, thoughtlessly producing some programmed output, or is genuinely reasoning, how can we know?
The Turing test evaluates ability to display intelligent behaviour. Another important goal would be to solve a problem (but not a programmed one, or one of a class it is designed to solve. So I suppose the AI would need to figure out how to solve a new problem, and so evolve or rewrite itself, or at least build knowledge and ability). Another is to have self-awareness and feelings.
But the test... how do you test these things? Say an AI passes the Turing test and behaves like a human when probed. How then can one be convinced that it is thinking, having original thoughts, and not just producing them but... thinking... them... and how do we know that humans are really doing anything special anyway? We have our internal experience of thought... How can we prove that, or how do we internally know it's more than boring repetition of patterns?...
If we can't speak for certain about these things, I think we are not yet ready to define an ultimate test of what a true AI would be.
Thursday, January 10, 2013
Evolution will treat hostility with hostility
Here's an idea that might apply to all three of humans vs. nature, humans vs. humans, and disease vs. humans: A system in which one group negatively affects the survival of another group is not stable, even if the hostile group attempts to keep it stable. A hostile entity must either completely eradicate another, or the other will evolve to disrupt the system. This would predict that humans cannot indefinitely harm nature without nature putting a stop to it, and that murderous tyrants cannot maintain power over oppressed people, and that since we try to kill all germs, superbugs will evolve to kill us.
Why? First, assume that group A has a negative influence on the survival of group B, with the intention of controlling B's survival rather than wiping it out. Any evolved behavior in group B that circumvents death by group A is then an evolutionary advantage. On the surface, behavior that lets B "get along" with A is an advantage, but unless it is effective enough to disrupt the system (making A no longer a negative influence on B's survival, and the system therefore unstable), it is not good enough to prevent A's influence. It's not good enough for B to change its behavior to adapt to A, because A can also adapt: if A's hereditary advantage is to oppress or control B, A may evolve to maintain that control. This is what should happen while the system is evolving and stable. For example, if new superbugs evolve ways to survive disinfection, we will look for new ways to kill them. So unless the system becomes symbiotic, merely finding a way to put up with A is an insufficient evolutionary advantage for B. A better advantage is to disrupt A, and disruptive evolved behaviors may be the only way for B to ensure its survival. It either dies by group A, or it stops group A.
This means that superbugs aren't busy evolving a way to avoid being killed by us, they must be evolving a way to kill us, because only the group that does so will survive. A disease that can take us down will be more successful than a disease that can survive as we look for new ways to kill it.
As per the other examples, it would mean that humans cannot be sustainably harmful to nature without either destroying it completely or inducing evolution that is harmful to humans. It also suggests that murderers are never really safe. In a stable system, neither group can be trying to kill the other, because only then is there no evolutionary advantage to killing the other first, before they kill you.
The hypothesis assumes that such system-disrupting evolved behaviors are always possible, and likely enough to rely on one happening eventually, but I think it's true of the examples given at least.
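The argument can be caricatured numerically. Below is a toy sketch with entirely made-up parameters (the `simulate` function and all its numbers are invented for illustration, not evidence for the hypothesis): an "evasive" strain of B dodges only A's current kill method, which A counters each generation, while a "disruptive" strain degrades A's ability to kill at all.

```python
# Toy model: group A kills a fraction of group B each generation.
# Evasive B strains only dodge the current method, so A's effectiveness
# keeps pace; disruptive B strains erode A's effectiveness itself.
def simulate(disruptive: bool, generations: int = 60) -> float:
    """Return group B's population after some generations (made-up parameters)."""
    b = 1000.0          # group B population
    a_strength = 0.5    # fraction of B that group A kills per generation
    for _ in range(generations):
        b *= 1.2 * (1.0 - a_strength)  # B reproduces, then suffers A's kills
        if disruptive:
            a_strength *= 0.9          # disruptive behaviour weakens A
        else:
            # A adapts and keeps (or slightly improves) its kill rate
            a_strength = min(0.6, a_strength * 1.02)
    return b
```

In this toy setup the merely evasive population dwindles toward zero, while the disruptive one eventually outgrows the pressure entirely, which is the shape of the claim above: putting up with A is not a stable strategy, stopping A is.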
Thursday, October 11, 2012
Wikipedia Is a Terrible Reference to Cite
Wikipedia is viewed by many[citation needed] to be an inferior reference, because anyone can edit its pages. I disagree that it is, and find that much relevant information is expertly written, and the fact that it can be corrected by anyone may sometimes improve its reliability.
Referenced information can change through later edits, and that is a problem. If a paper is influential enough to induce changes in an applicable wiki, the paper may end up referencing itself, which we all know can cause pretty serious spacetime anomalies. However, these issues can be resolved by referencing a specific dated version of a wiki page.
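For the curious, MediaWiki exposes such dated versions as permanent links keyed by a revision id (the `oldid` URL parameter). A small sketch of building one; the title and revision number below are placeholders, not a real citation:

```python
# Build a permanent link to one specific revision of an English Wikipedia
# page, so the cited text can't change out from under the citation.
from urllib.parse import urlencode

def permalink(title: str, oldid: int) -> str:
    """Permanent URL for a single dated revision of an English Wikipedia page."""
    query = urlencode({"title": title, "oldid": oldid})
    return f"https://en.wikipedia.org/w/index.php?{query}"

print(permalink("Turing test", 123456789))
# https://en.wikipedia.org/w/index.php?title=Turing+test&oldid=123456789
```

You can get the revision id for any version from the page's history tab, or via the "Permanent link" tool in the sidebar.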
So it's settled. Citing wikipedia is no problem. I decided to do so and before finishing the paper, found that my first reference no longer existed. That is a problem!
It turned out that the entire topic that I'd referenced was deleted, because it "appears to be original research and has no relevant citations". Unfortunately, old versions of any deleted pages are not publicly visible, in case they contain plagiarized material. The irony of course is that if the page is correctly deleted because it is original material, it is incorrectly hidden because it might not be! In this case, the information must be removed from public sight because it might be both original and copied.
It must be an indication of unreliability if your wikipedia reference ends up deleted. If the wiki is well-cited, it might be better to copy the citations from the wiki rather than reference the wiki itself. If it is not well-cited, it might be better to include the "original research" in your paper.
This is an example of the perverse results of the law of unintended consequences: pages are purged from view to prevent copyright infringement, making it preferable to copy information from a wiki page rather than properly cite it.
Wikipedia seems to be trying to avoid being a citable reference.