Contextuality: Conceptual Issues, Operational Signatures, and Applications
The purpose of this talk is twofold: first, to acquaint the wider community working mostly on Bell-Kochen-Specker contextuality with recent work on Spekkens' contextuality that quantitatively demonstrates the sense in which Bell-Kochen-Specker contextuality is subsumed within Spekkens' approach; and second, to argue that one can test for contextuality without appealing to a notion of sharpness, which can needlessly restrict the scope of operational theories considered as candidate explanations of experimental data.
In order to perform foundational experiments testing the correctness of quantum mechanics, one requires data analysis tools that do not assume quantum theory. We introduce a quantum-free tomography technique that fits experimental data to a set of states and measurement effects in a generalized probabilistic theory (GPT). (This is in contrast to quantum tomography, which fits data to sets of density operators and POVM elements.) We perform an experiment on the polarization degree of freedom of single photons, and find GPT descriptions of the states and measurements in our experiment.
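The core idea of such a fit can be illustrated by a rank-constrained factorisation of the matrix of measured frequencies. The sketch below is a minimal illustration in Python, not the authors' actual analysis pipeline: it assumes the data are arranged in a matrix D whose entry (i, j) is the observed frequency of effect j on preparation i, and it omits the additional constraints (e.g. that fitted probabilities lie in [0, 1]) that a full GPT fit would impose. The function name fit_gpt and the synthetic data are illustrative assumptions.

```python
import numpy as np

def fit_gpt(D, rank):
    """Hypothetical helper: rank-constrained factorisation D ~ S @ E.
    Rows of S play the role of GPT state vectors; columns of E play
    the role of GPT effect vectors."""
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    S = U[:, :rank] * s[:rank]   # GPT states, one per row
    E = Vt[:rank, :]             # GPT effects, one per column
    return S, E

# Synthetic example: noisy frequencies generated from a rank-4 model
# (a qubit-like GPT has 4-dimensional state and effect vectors).
rng = np.random.default_rng(0)
true_S = rng.random((50, 4))
true_E = rng.random((4, 30))
D = np.clip(true_S @ true_E / 4 + 0.01 * rng.standard_normal((50, 30)), 0.0, 1.0)

S, E = fit_gpt(D, rank=4)
print("max reconstruction error:", np.abs(S @ E - D).max())
```

The choice of rank is the point at which no quantum assumption enters: one asks only how many GPT dimensions are needed to reproduce the observed frequencies, rather than positing density operators and POVM elements from the outset.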
This talk concerns constraints on any model that reproduces the qubit stabilizer sub-theory. We show that the minimum number of classical bits required to specify the state of an n-qubit system must scale as ~n(n-3)/2 in any model that does not contradict the predictions of the quantum stabilizer sub-theory. The Gottesman-Knill algorithm, which is a strong simulation algorithm, in fact comes very close to this bound, scaling as ~n(2n+1).
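For concreteness, the two quoted scalings can be compared numerically. The short Python sketch below (with illustrative function names, not from the talk) evaluates both expressions; the Gottesman-Knill count n(2n+1) corresponds to storing n stabilizer generators, each as 2n bits for its X and Z parts plus one sign bit.

```python
def lower_bound_bits(n):
    """Lower bound quoted above for any model reproducing the stabilizer sub-theory."""
    return n * (n - 3) // 2

def gottesman_knill_bits(n):
    """Gottesman-Knill storage: n stabilizer generators x (2n + 1) bits each."""
    return n * (2 * n + 1)

for n in (10, 100, 1000):
    print(f"n = {n:5d}   lower bound ~ {lower_bound_bits(n):8d} bits   "
          f"Gottesman-Knill ~ {gottesman_knill_bits(n):8d} bits")
```

Both expressions grow quadratically in n, which is the sense in which the Gottesman-Knill representation sits close to the lower bound.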