# Program of the Symposium

## Program

- Plenary Session [9:00 – 10:30]
- Plenary 1 [11:00 – 12:30]
- Plenary 2 [11:00 – 12:30]
- Plenary 3 [11:00 – 12:30]
- Astroparticle Physics
- Cosmology
- High Energy Physics
- Quantum Gravity
- Mathematical Physics
- Physics Education
- Epistemology and Philosophy
- Frontiers of Fundamental Physics

## Schedule

You don't need to print this webpage: pdf files of the program (for printing or reading on screen) are available in the Download zone.

The rooms mentioned in the program are located on this map of the campus (pdf file). This map will be distributed at the beginning of the Symposium.

I will share some personal thoughts prompted by the recent experimental confirmations of the standard model of particle physics and of the so-called concordance model of cosmology.

In the past 20 years, the Standard Model (SM) of elementary particles and their interactions has provided an unfailing and remarkably accurate description of all experiments, with and without high-energy accelerators, establishing that we understand the physics of the very small up to energy scales of 100 GeV. The Large Hadron Collider at CERN, and its experiments, were conceived to probe the physics of the next frontier, that of the TeV energy scale. True to their charge, the experiments have delivered hundreds of significant and often beautiful measurements, along with the discovery of what looks like the first fundamental scalar particle. The triumph of the Standard Model is complete, especially since no new signal has emerged from the intense searches for “new physics” — yet. The field is now at a crossroads: the existence of a Higgs boson opens a new set of questions, while the evidence, both direct and indirect, that physics beyond the SM does exist is still strong and convincing. The talk will present a broad-brush picture of how Run 1 of the LHC has shaped the field of High Energy Physics, and why expectations are still so very high.

Much effort has been devoted to the study of weak-scale particles, e.g. supersymmetric neutralinos, which have a relic abundance from thermal equilibrium in the early universe matching that of the dark matter. This does not, however, provide any connection to the comparable abundance of *asymmetric* baryons, which must have a non-thermal origin. “Dark baryons” from a hidden sector with a similar asymmetry and a mass of ~GeV would naturally provide the dark matter. Low-threshold direct detection experiments are required to find such particles, while monojet searches at colliders provide a complementary probe.
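For orientation, the thermal-relic abundance motivating such weak-scale candidates is set, to a good approximation, by the standard freeze-out relation (a textbook estimate, not a result of the talk):

```latex
\Omega_\chi h^2 \;\simeq\; \frac{3\times 10^{-27}\,\mathrm{cm^3\,s^{-1}}}{\langle \sigma_A v \rangle}
```

A weak-scale annihilation cross section, \(\langle \sigma_A v\rangle \sim 3\times 10^{-26}\,\mathrm{cm^3\,s^{-1}}\), gives \(\Omega_\chi h^2 \sim 0.1\), which is why thermal relics naturally match the observed dark matter density.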

The search for the nature of the dark sector relies on the combination of multiple techniques and probes, from both spectroscopic and photometric data. This matches well with the fact that some probes are intrinsically 3D (like RSD) and some 2D (like WL), but to get the best constraints we need to combine all of them. We show how, using angular cross-correlations, we can recover the full 3D galaxy clustering information, including BAO and RSD, in spectroscopic surveys. This allows the combination of spectroscopic and photometric galaxy surveys, including photo-z error calibration and the addition of WL. We show some applications of these ideas to current data and simulations, and show how overlapping surveys result in both better constraints and better understanding of systematic errors.

A critical assessment of the observed large scale structure will be presented, starting from the Local Group of galaxies within 5 Mpc and moving outwards. Traditional and new probes will be shown to support the standard paradigm of structure formation, but not without raising a few eyebrows. Mild tweaks will be discussed, as well as potential constraints on alternative theories of gravity.

Measuring distances to supernovae (or more precisely to the “type Ia” subclass) has allowed us to measure the distance-redshift relation beyond linear order for the first time in 1998. These first results already showed strong evidence for an accelerated cosmological expansion at the present epoch. This has been confirmed by several other cosmological probes, but what causes this accelerated expansion remains elusive. Our ignorance is commonly parametrized using the “equation of state of dark energy”, where dark energy refers to the fluid one can postulate to source the acceleration. I will present how supernova measurements have evolved since the discovery, review the latest dark energy constraints, and discuss the future of the probe.
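As a reminder of what “beyond linear order” means here, the luminosity distance admits the standard low-redshift cosmographic expansion (with \(q_0\) the deceleration parameter):

```latex
d_L(z) \;=\; \frac{cz}{H_0}\left[\,1 \;+\; \frac{1}{2}\left(1 - q_0\right)z \;+\; \mathcal{O}(z^2)\right]
```

Accelerated expansion corresponds to \(q_0 < 0\); the equation of state mentioned above is the ratio \(w = p/\rho\) of the dark-energy fluid, with \(w = -1\) for a cosmological constant.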

I will show that Noncommutative Geometry provides an appealing framework for the unification of all fundamental interactions, including gravity.

This talk will be an introduction to deformation quantization based on geometrical considerations. I will show how the notion of the star-product emerges from a formula for the multiplication of matrices. I will then give a definition of deformation quantization in the context of Poisson manifolds and conclude with general results about this notion. The talk is meant for a general audience.

General Relativity, in particular when applied to the central mass problem, contains a singularity: the Schwarzschild singularity, which leads to the prediction of black holes.
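For context, the Schwarzschild line element outside a central mass \(M\) reads (with \(d\Omega^2\) the unit-sphere metric):

```latex
ds^2 \;=\; -\left(1-\frac{2GM}{rc^2}\right)c^2\,dt^2 \;+\; \left(1-\frac{2GM}{rc^2}\right)^{-1}dr^2 \;+\; r^2\,d\Omega^2
```

Here \(r = 2GM/c^2\) is a coordinate singularity (the horizon), while \(r = 0\) is a genuine curvature singularity; it is singularities of the latter kind that motivate the concept described in this abstract.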

Our new concept is that no theory should contain singularities. Applied to General Relativity, it leads us to Pseudo-Complex General Relativity; it will also be applied to quantum electrodynamics. Black holes disappear and a new “Weltbild” of the cosmos emerges.

After 40 years of active research, the question of the fate of information that falls into a black hole is still open [1]. In this talk I will discuss recent results [2] that allow us to compute the entanglement entropy production in black hole evaporation. In particular, I will present a study of the information release in a model that takes into account the loop quantum gravity resolution of the black hole singularity [3]. The analysis of this phenomenon provides new insights into the entanglement structure of space-time during and after the complete evaporation of the black hole.

*References*

[1] S.W. Hawking, *Breakdown of predictability in gravitational collapse*, Phys. Rev. D **14** (1976) 2460.

[2] E. Bianchi and M. Smerlak, *Entanglement entropy and negative-energy fluxes in two-dimensional spacetimes*, (2014), arXiv:1404.0602

[3] C. Rovelli and F. Vidotto, *Planck stars*, (2014), arXiv:1401.6562

We review the main dynamical constraints on the shape of the gravitational field in the Milky Way and on its corresponding dark matter halo. We review observational constraints in the central parts (cored or cuspy distribution), in the solar neighbourhood (local dark matter density) and in the outer parts (virial mass), and also point out various caveats, systematic effects and common misconceptions.

Direct dark matter searches are experiments looking for the energetic recoils produced when Weakly Interacting Massive Particles (WIMPs) from our galactic halo scatter off nuclei in a terrestrial target. The principles of this type of search are described, and the status and results of the leading experiments in the field are presented, along with their prospects for the coming years.
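As a rough guide (standard two-body kinematics, not specific to any one experiment), the nuclear recoil energy deposited by a WIMP of mass \(m_\chi\) scattering off a nucleus of mass \(m_N\) at velocity \(v\) is:

```latex
E_R \;=\; \frac{\mu^2 v^2}{m_N}\,\left(1-\cos\theta^{*}\right), \qquad \mu \;=\; \frac{m_\chi m_N}{m_\chi + m_N}
```

with \(\theta^{*}\) the centre-of-mass scattering angle. For halo velocities \(v \sim 10^{-3}c\) and weak-scale masses this gives recoils of order a few to tens of keV, which is why low energy thresholds are essential for these experiments.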

An excess of gamma rays at GeV energies has been detected in the Fermi-LAT data. This signal comes from a narrow region around the Galactic Center and has been interpreted as possible evidence for light (30 GeV) dark matter particles. Focussing on the prompt gamma-ray emission, previous work found that the best fit to the data corresponds to annihilations proceeding into b quarks, for a particular dark matter profile. In this talk, I will show that this is not the only possible annihilation set-up. More specifically, I will show how including the contributions to the gamma-ray spectrum from inverse Compton scattering and bremsstrahlung, from electrons produced in dark matter annihilations and undergoing diffusion through the Galactic magnetic field, drastically modifies the spectrum, in particular for leptonic final states.
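For reference, the prompt gamma-ray flux from self-conjugate dark matter annihilating in the halo takes the standard generic form (notation assumed here, not taken from the talk):

```latex
\frac{d\Phi_\gamma}{dE\,d\Omega} \;=\; \frac{\langle\sigma v\rangle}{8\pi\, m_\chi^2}\,\frac{dN_\gamma}{dE}\,\int_{\mathrm{l.o.s.}} \rho^2\!\left(r(\ell)\right) d\ell
```

where \(\rho\) is the dark matter density along the line of sight and \(dN_\gamma/dE\) the photon spectrum per annihilation. The inverse Compton and bremsstrahlung contributions discussed in the talk add to this prompt term and depend on electron propagation, which is why they can reshape the spectrum so strongly.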

The presentation will focus on the latest results regarding the identification and study of the first star-forming galaxies in the universe, and the constraints these studies place on reionization.

The cosmic dark ages are one of the major frontiers in cosmology. So far, most models of reionization have considered the ultraviolet radiation from massive stars formed in the first galaxies. I will show that high-energy sources of X-rays and relativistic jets determined the early thermal history of the universe and kept it ionized over large volumes of space until the first billion years of the universe, when feedback from super-massive black holes (AGN) took over. High-energy sources from compact stellar objects had a direct impact on the properties of the faintest galaxies at high redshifts, on the smallest dwarf galaxies in the local universe, and on the cold dark matter model of the universe.

Spatial modulations in the distribution of observed luminosities (computed using redshifts) of galaxies from the SDSS Data Release probe the cosmic peculiar velocity field [1]. Allowing for luminosity evolution, the luminosity function, determined via a spline-based estimator, is well represented by a Schechter form. Bulk flows and higher velocity moments in two redshift bins agree with the predictions of the ΛCDM model, as obtained from mock galaxy catalogs designed to match the observations. Assuming a ΛCDM model, we estimate the amplitude of the linear matter power spectrum, where the low accuracy is due to the limited number of galaxies. While the low-redshift bin is robust against coherent photometric uncertainties, the bias of results from the second bin is consistent with the magnitude tilt reported by the SDSS collaboration. The systematics are expected to have a significantly lower impact in future datasets with larger sky coverage and better photometric calibration.
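For reference, the Schechter form mentioned above is (with \(\phi^{*}\), \(L^{*}\), \(\alpha\) the normalization, characteristic luminosity, and faint-end slope):

```latex
\phi(L)\,dL \;=\; \phi^{*} \left(\frac{L}{L^{*}}\right)^{\alpha} e^{-L/L^{*}}\, \frac{dL}{L^{*}}
```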

*References*

[1] M. Feix, A. Nusser, and E. Branchini, *Tracing the cosmic velocity field at from galaxy luminosities in the SDSS DR7*, ArXiv e-prints (2014) [arXiv:1405.6710].

This talk is based on [1]. The growth index of cosmological perturbations is one of the most powerful probes of the nature of Dark Energy (DE), the mysterious mechanism driving the late-epoch acceleration of the universe. Unlike classical geometrical observables, such as distances, which only probe the background sector of a cosmological model, this observable provides insight into first-order dynamical effects, and it is therefore a key test for the modified gravity scenarios often invoked to explain away the DE issue.

In this talk I will first show how the whole information about the growth rate history of linear cosmic structures can be precisely encoded into a small set of growth index parameters whose amplitude can be analytically predicted by theory. Then I will go on to demonstrate how these parameters naturally define a space where theoretical predictions can be compared against data in a model-independent way. Finally [2], by exploiting the Effective Field Theory of dark energy, a formalism which describes virtually all gravitational theories containing a single scalar degree of freedom in addition to the metric, I will explore which alternative cosmological scenarios are not in conflict with current growth data.
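For concreteness, the growth index \(\gamma\) is conventionally defined through the growth rate of linear matter perturbations \(\delta\):

```latex
f(a) \;\equiv\; \frac{d\ln \delta}{d\ln a} \;\simeq\; \left[\Omega_m(a)\right]^{\gamma}, \qquad \gamma \simeq 0.55 \ \ \text{for } \Lambda\mathrm{CDM}
```

Modified-gravity scenarios generically shift \(\gamma\) away from the ΛCDM value, which is what makes it a discriminating, background-independent probe.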

*References*

[1] H. Steigerwald, J. Bel, C. Marinoni, *Probing non-standard gravity with the growth index: a background independent analysis*, (accepted by JCAP) [arXiv:1403.0898].

[2] F. Piazza, H. Steigerwald, C. Marinoni, *Phenomenology of dark energy: exploring the space of theories with future redshift surveys*, (accepted by JCAP) [arXiv:1312.6111]

In previous editions of the Frontiers of Fundamental Physics Symposia [1,2,3,4,5] we have presented our numerical computations of Cosmic Microwave Background (CMB) anisotropies at high multipoles ℓ. We have adapted our algorithm to calculate such anisotropies with different N-body codes: Particle-Mesh (PM), and linear and parallel Adaptive-Particle-Particle-Particle-Mesh (AP3M) Hydra codes. In this way we have been able to compute the weak lensing, Rees-Sciama (RS) and Sunyaev-Zel'dovich contributions to the CMB anisotropy. The use of the parallel AP3M code enables more accurate computations [6,7]. In 2006, we computed the RS effect using a PM N-body code [8]. In this work, we present improvements in the computation of the RS contribution using the parallel Hydra code. We also make some remarks on the coupling of contributions at high ℓ.

*References*

[1] Fullana, M.J., Saez D.: Making Maps of the Rees-Sciama Effect. Sidharth, B.G., Honsell, F., de Angelis, A. (eds). Frontiers of Fundamental Physics: Proceedings of the Sixth International Symposium “Frontiers of Fundamental and Computational Physics”. Udine, Italy, September 26-29, 2004. Springer, The Netherlands, 115-122 (2006)

[2] Fullana, M.J., Saez, D.: Status of CMB Radiation. Sidharth, B.G., Alfonso-Faus, A., Fullana i Alfonso, M.J. (eds.). Frontiers of Fundamental Physics: Eighth International Symposium FFP8. Madrid, Spain, 17-19 October, 2006. AIP Conference Proceedings **905**, New York, USA,13-22 (2007)

[3] Fullana, M.J., Arnau, J.V., Saez, D.: Weak Lensing on the CMB: Estimations Based on AP3M Simulations. Sidharth, B.G., Honsell, F., Mansutti, O., Sreenivasan, K.R., Angelis, A. de (eds.) Frontiers of Fundamental and Computational Physics: 9th International Symposium. Udine and Trieste, Italy, January 7-9, 2008. AIP Conference Proceedings **1018**, New York, USA, 80-85 (2008)

[4] Fullana, M.J., Arnau, J.V., Thacker R.J., Couchman H.M.P., Sáez D.: Observations and simulations of the CMB temperature anisotropy at very small angular scales. Kouneiher, J., Barbachoux, C., Masson, T., Vey, D. (eds.) Frontiers of Fundamental Physics: The Eleventh International Symposium. Paris, France, July 6-9, 2010. AIP Conference Proceedings **1446**, 252-260 (2012)

[5] Fullana, M.J., Arnau, J.V., Thacker R.J., Couchman H.M.P., Sáez D.: CMB anisotropy computations using Hydra gas code. Sidharth, B.G., Michelini, M., Santi, L. (eds.) Frontiers of Fundamental Physics and Physics Education Research: 12th International Symposium. Udine, Italy, November 21-23, 2011. Springer Proceedings in Physics **145**, Switzerland, 189-196 (2014)

[6] Fullana, M.J., Arnau, J.V., Thacker R.J., Couchman H.M.P., Sáez D.: Estimating small angular scale Cosmic Microwave Background anisotropy with high-resolution N-body simulations: weak lensing. Astrophys. J. **712**, 367-379 (2010)

[7] Fullana, M.J., Arnau, J.V., Thacker R.J., Couchman H.M.P., Sáez D.: A New Numerical Approach to Estimate the Sunyaev-Zel'dovich Effect. Garcia-Parrado, A. et al. (eds.) Progress in Mathematical Relativity, Gravitation and Cosmology. Guimarães, Portugal, September 3-7, 2012. Springer, The Netherlands, 277-282 (2014)

[8] Puchades, N., Fullana, M.J., Arnau, J.V., Sáez, D.: On the Rees-Sciama effect: maps and statistics. Mon. Not. R. Astron. Soc. **370**, 1849-1858 (2006)

The prospects for precision measurements of the properties of the Z, W, Higgs and top particles will be reviewed. The machines considered will be the LHC and its upgrade, the future e+e- colliders, both linear and circular, and the 100 TeV Future Hadron Collider. The importance of improvements in some other experimental inputs and in the theoretical calculations will be addressed.

The electroweak precision measurements offer a very powerful handle on the underlying mechanism of electroweak symmetry breaking and on the nature of the Higgs boson. In this talk, I will present current constraints on new physics beyond the Standard Model from the electroweak precision fit, and discuss the interplay between the electroweak precision fit and the Higgs measurements at the Tevatron and LHC.

I will review recent progress in the field of higher-order predictions at hadron colliders, with a focus on the LHC. The inclusion of higher-order corrections, in particular those related to QCD, is crucial in order to obtain the accurate and reliable predictions needed both to validate the Standard Model of fundamental interactions and to search for as-yet-unknown particles, but it leads to a huge growth in computational complexity. Recent work has made it possible to compute the first subleading order (Next-to-Leading Order, NLO) corrections in a fully automated manner for any process, hiding all the computational complexity from the user. As for higher orders (Next-to-Next-to-Leading Order, NNLO, and beyond), the most relevant processes for LHC physics in the SM have been covered at NNLO in the last few years, with huge efforts from different groups.

The electroweak production of a Z-boson in association with dijets, which includes the vector boson fusion process, is an important background to the vector boson fusion production of a Higgs boson in association with dijets. Both ATLAS [1] and CMS [2,3] have measured this electroweak production process. In this talk the methods and results of extracting the electroweak component of the cross section are presented. Detector-corrected distributions of hadronic jets are also presented and show sensitivity to the electroweak production process. These distributions can be used to probe different approaches to generating both QCD and electroweak events.

*References*

[1] ATLAS Collaboration, *Measurement of the electroweak production of dijets in association with a Z-boson and distributions sensitive to vector boson fusion in proton-proton collisions at* *TeV using the ATLAS detector*, JHEP 1404 (2014) 031 [arXiv:1401.7610].

[2] CMS Collaboration, *Measurement of the hadronic activity in events with a Z and two jets and extraction of the cross section for the electroweak production of a Z with two jets in pp collisions at* *TeV*, JHEP 1310 (2013) 062 [arXiv:1305.7389].

[3] CMS Collaboration, *Measurement of pure electroweak production of a Z boson in association with two forward/backward jets in proton-proton collisions at 8 TeV*, CMS-PAS-FSQ-12-035.

Scattering amplitudes are among the most natural, and important, observables of any field theory in an asymptotically flat space-time, including any theory of quantum gravity. Recently, gravitational scattering amplitudes have been shown to possess remarkably compact expressions which are unexpected from the perspective of traditional Lagrangian-based perturbation theory (cf. [1]). We will discuss how some of these formulae arise from working with an action principle not on space-time, but rather in *twistor space*, an auxiliary complex three-manifold [2]. This narrative exploits an on-shell equivalence between conformal gravity and general relativity in asymptotically de Sitter manifolds [3], and also leads to new expressions for analogues of scattering amplitudes in space-times with a positive cosmological constant.

*References*

[1] F. Cachazo and D. Skinner, *Gravity from Rational Curves in Twistor Space*, Phys. Rev. Lett. **110** (2013) 161301 [arXiv:1207.0741].

[2] T. Adamo and L. Mason, *Conformal and Einstein gravity from twistor actions*, Class. Quant. Grav. **31** (2014) 045014 [arXiv:1307.5043].

[3] J. Maldacena, *Einstein gravity from conformal gravity*, (2011) [arXiv:1105.5632].

After a brief review of unusual features of light-front canonical formulations of field theories, I present an analysis of the canonical structure of the first-order formulation of general relativity on a lightlike foliation. It appears to be quite different from the usual spacelike case, leading, for instance, to the presence of tertiary constraints. I also discuss the issue of zero modes and argue that there might be some hidden correspondence with two-dimensional theories.

The spinfoam framework defines transition amplitudes for spin network states of loop quantum gravity. I will review their recent reformulation in terms of spinorial variables, which allows these amplitudes to be seen as coherent state path integrals [1,2,3]. This clarifies their geometrical meaning and aids in the identification of symmetries: recursion relations for 3nj symbols are turned into Hamiltonian constraints satisfied by the spinfoam amplitudes [4,5]. This applies in particular to the derivation of modified FRW equations for quantum cosmology [6].

*References*

[1] L Freidel and S Speziale, *From twistors to twisted geometries*, Phys. Rev. D **82**, 084041 (2010) arXiv:1006.0199 [gr-qc]

[2] E F Borja, L Freidel, I Garay and E R Livine, *U(N) tools for Loop Quantum Gravity: The Return of the Spinor*, Class. Quant. Grav. **28**, 055005 (2011) arXiv:1010.5451 [gr-qc]

[3] M Dupuis and E R Livine, *Holomorphic Simplicity Constraints for 4d Riemannian Spinfoam Models*, J. Phys. Conf. Ser. **360**, 012046 (2012) arXiv:1111.1125 [gr-qc]

[4] V Bonzom, E R Livine and S Speziale, *Recurrence relations for spin foam vertices*, Class. Quant. Grav. **27**, 125002 (2010) arXiv:0911.2204 [gr-qc]

[5] V Bonzom and E R Livine, *Generating Functions for Coherent Intertwiners*, Class. Quant. Grav. **30**, 055018 (2013) arXiv:1205.5677 [gr-qc]

[6] E R Livine and M Martin-Benito, *Classical Setting and Effective Dynamics for Spinfoam Cosmology*, Class. Quant. Grav. **30**, 035006 (2013) arXiv:1111.2867 [gr-qc]

Spinors have a wide range of applications, from quantum mechanics to particle physics, quantum information and general relativity [1]. In this talk, I will argue that they are also useful for discretized gravity, and present a version of first-order Regge calculus with spinors as the fundamental configuration variables [2]. The underlying action describes a mechanical system with finitely many degrees of freedom; the system has a Hamiltonian and local gauge symmetries. I will derive the resulting quantum theory, and explain the relation to loop quantum gravity [3].

*References*

[1] R. Penrose and W. Rindler, *Spinors and Space-Time, Two-Spinor Calculus and Relativistic Fields*, vol. 1. Cambridge University Press, Cambridge, 1984.

[2] W. M. Wieland, *Hamiltonian spinfoam gravity*, Class. Quantum Grav. 31 (2014) 025002, arXiv:1301.5859.

[3] C. Rovelli, *Quantum Gravity*, Cambridge University Press, Cambridge, 2008.

Noncommutative Geometry, i.e., the spectral data of generalized spaces, provides a fruitful approach to the standard model of fundamental interactions. This is done via the spectral action, which is a function of the Dirac operator. It is a regularized trace, cut off at a scale. This cutoff is the point at which all gauge interactions are equally strong, and it may represent a phase transition of the theory to a pregeometric phase. I will discuss the role of this field theory cutoff, and the symmetries and structure of space-time that one can infer from the spectral action.
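The spectral action referred to above is the regularized trace of Chamseddine and Connes:

```latex
S \;=\; \mathrm{Tr}\, f\!\left(\frac{D}{\Lambda}\right)
```

where \(D\) is the Dirac operator, \(\Lambda\) the cutoff scale, and \(f\) a positive cutoff function; an asymptotic expansion in \(\Lambda\) reproduces the gravitational and gauge parts of the bosonic action.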


Richard Borcherds recently proposed [1] an elegant geometric version of renormalized perturbative quantum field theory in curved spacetimes, where Lagrangians are sections of a Hopf algebra bundle over a smooth manifold. Borcherds' framework provides an algebraic interpretation of causal perturbation theory (Stueckelberg, Bogoliubov, Epstein, Glaser…). We present a noncommutative version of Borcherds' approach. Although the normal product is now the tensor product, this version can still be equivalent to standard quantum field theory if the Hopf algebra fiber is graded cocommutative [2]. The relation with almost-commutative geometry [3] is discussed.

*References*

[1] R. E. Borcherds, *Renormalization and quantum field theory*, Algebra & Number Theory 5 (2011) 627.

[2] Ch. Brouder and F. Patras, *Nonlocal, noncommutative diagrammatics and the linked cluster theorems*, J. Math. Chem. 50 (2012) 552.

[3] A. Connes and M. Marcolli, *Noncommutative Geometry, Quantum Fields and Motives*, American Mathematical Society, Providence, USA. (2008).

Motivated by the introduction of causality in noncommutative geometry, we define the notion of *isocone*. An isocone is a closed convex cone in a C*-algebra, containing the unit, which separates the states and is stable under non-decreasing continuous functional calculus.
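In symbols, writing the C*-algebra as \(A\) (a notational assumption), the prose definition above can be summarized as: an isocone is a subset \(I \subseteq A\) such that

```latex
\begin{aligned}
&I \text{ is a closed convex cone with } 1 \in I,\\
&\varphi(a) = \psi(a)\ \ \forall a \in I \ \Longrightarrow\ \varphi = \psi \quad \text{(separation of states)},\\
&a = a^{*} \in I,\ \ f:\mathbb{R}\to\mathbb{R} \text{ continuous non-decreasing} \ \Longrightarrow\ f(a) \in I.
\end{aligned}
```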

We show that our definition is physically well-motivated and, when the C*-algebra is commutative, corresponds exactly to the structure of non-decreasing real functions on a (compact) topological ordered set satisfying a natural compatibility condition between the topology and the partial order [1].

We also give the complete classification of isocones in finite dimensional algebras, corresponding to finite noncommutative ordered spaces, and give some examples in infinite dimension [2].

Finally we show that the existence of an isocone on an almost commutative algebra of the kind which appears in the NCG formulation of the Standard Model forces the causal order relation to disappear in the neighbourhood of every point [3]. Although the scale at which causality disappears is left unspecified by this very general mathematical result, it depends only on the noncommutativity of the algebra, hence on particle physics, and might thus be expected to be much larger than the Planck scale and to possibly leave an observable imprint on the cosmic microwave background.

*References*

[1] F. Besnard, *A noncommutative view on geometry and order*, J. Geom. Phys. **59** (2009) 861–875 [arXiv:0804.3551].

[2] F. Besnard, *Noncommutative ordered spaces: examples and counterexamples*, submitted to J. Geom. Phys. [arXiv:1312.2442].

[3] N. Bizi, F. Besnard, work in progress

In general, education, R&D and working life cooperation should form a solid and interactive whole that is able to respond to dynamic and ever-changing expectations. Embedding pedagogical knowledge in innovation activities may offer a long-desired theoretical basis for developing knowledge-based competitiveness in the cooperation between working life and education. Here, the cornerstones of innovation pedagogy are interdisciplinary operations, R&D, curricula and internationalisation in addition to entrepreneurship and service activities.

The key elements here are innovative learning and teaching methods, which can be interlinked with the surrounding working life and innovations by physical products, services and processes. Acting together in an interrelated, interactive and innovative environment, these elements operate within the circle of continuous improvement. In such settings, learning and teaching methods are developed more expediently, working life operations and competitiveness are enhanced and new innovations are created.

The overall aim of innovation pedagogy is to contribute to the development of students’ innovation competencies.

Innovation competencies refer to knowledge, skills and attitudes needed for the innovation activities to be successful.

*What is the state of teaching in higher education? Can we provide evidence of the efficacy of our teaching practices? Are we really satisfied with the results we achieve? Do we actually contribute to developing creativity, initiative, autonomy, risk assessment and risk taking, and collaboration, and do we actually prepare our students to solve complex, often multidisciplinary problems in innovative ways? Can we do better with our limited resources? Is our investment in teaching sufficiently acknowledged and valued? Are the QS rankings more or less relevant than the Shanghai ones, those of the Times, or those of Leiden?*

We frequently hear such questions uttered by disillusioned and demotivated academics. We are, all of us, well aware of the expectations and needs of society, but we often feel that we cannot respond satisfactorily. It is tempting to blame our students (“who aren’t any more like they used to be”), the shortcomings of secondary education, the pressures of competitive research, or even simply “the system”, i.e. the others. Shouldn’t we ask ourselves what we, as academics, could (or should) do to remedy the sorry state of affairs we so often deplore?

Solutions do exist! But we need to question our habits, our opinions, our intuitions, our traditions, our certainties. Also, if we are to change our teaching practices, we must do so based not on fads or on personal biases, but on validated results of well-run scientific experiments.

The talk will provide the author's answers to the following questions:

— What is efficacy in higher education?

— Is higher education efficacious?

— Why should we try to achieve better efficacy in higher education?

— How can we achieve better efficacy in higher education?

— Which conditions should be met in order to achieve better efficacy in higher education?

In France, the teaching of physics at university level generally follows the classical pattern of lectures / tutorials / student labs. Within this framework, we have developed in our university new teaching units based on a project approach, following a “learning by doing” philosophy: a one-semester unit focusing on outreach projects (such as street art as a way to engage the public, a board game to discover astrophysics, and so on), and a week-long student lab using Arduino technology that lets the students choose and build their own experimental setup. The outreach unit has been running for two years; the latter unit will be tested next year. We will present and discuss these units and how they are perceived by the students, who are not used to this approach. We will also present new ways to talk about physics outside the university to engage a larger public, developed in collaboration with designers and graphic and web professionals (see www.physicsreimagined.com).

Results from research indicate that many students emerge from traditional undergraduate science courses without having developed a functional understanding of important basic concepts. These findings have motivated changes in undergraduate courses in the sciences, mathematics, and engineering. Many newer strategies are described as “interactive” in that students are expected to engage in discussions with each other, and with professors, during class time. Many strategies also involve paying attention to the ideas and beliefs that students hold upon entering the classroom, ideas and beliefs that they have developed during previous formal instruction and through their everyday experience with the natural world. Evaluating the effectiveness of these strategies requires a careful assessment of the goals of instruction and attention to a number of variables, many of which cannot be controlled. Examples will be used to illustrate the process of designing and evaluating instructional approaches and materials. The context will be physics, but analogies can be made to other disciplines.

We provide an epistemological analysis of the developments of relativistic cosmology from 1917 to 1965, based on the seminal articles by Einstein, de Sitter and Friedmann, with a special focus on Georges Lemaître, the true father of Big Bang theory. In particular, we resolve the controversy about the so-called Hubble's law, which was first presented in an article by Lemaître published in French in 1927 but disappeared from its English translation of 1931. It appears that most of the ingredients of the present-day standard cosmological model, such as the acceleration of the expansion due to a repulsive dark energy, the interpretation of the cosmological constant as vacuum energy, and the possible non-trivial topology of space, had been anticipated by Lemaître, although his papers remain desperately unquoted.

There are symptoms which indicate a decline of our scientific culture. First, our society is drowning in huge amounts of knowledge. Most of it concerns research of little importance for advancing our world view, or produces no advances in the fundamentals of pure science. Instead, we invent countless technical applications or investigate secondary details. Second, in the few fields where important aspects of unsolved questions have arisen, powerful groups of administrators of science control the flow of information. They have inherent biases resulting in a preference for consensus truths rather than objective discussion within a scientific methodology. This process gives few guarantees that we are obtaining solid new truths about nature. Finally, should the current scientific process continue the way it is, individual creativity is condemned to disappear. Indeed, truly creative scientists are being replaced by large corporations of administrators and politicians of science specialised in finding ways of obtaining money from States for megaprojects with increasing costs and diminishing returns.

In 1938, long before the work of Dicke, Brans and Jordan on scalar-tensor theory, Einstein and Bergmann introduced the modern viewpoint, in which a four-dimensional theory that coincides with Einstein-Maxwell theory at long distances is derived from a five-dimensional theory with complete symmetry among all five dimensions. But then they drew back, modifying the theory in a way that spoiled the five-dimensional symmetry and looks contrived to modern readers. The reason was that the more symmetric version of the theory predicts the existence of a new long-range field (a massless scalar field), a prediction which Einstein and Bergmann refused to admit.

We know today that, with their similar, but slightly different predictions for physical phenomena, scalar-tensor theories turned out to be the first significant challenge to Einstein’s theory in over forty years. But as it turned out, standard Einstein theory fared better in these tests than any of the alternatives. In consequence, in the 1970’s, interest in these alternative theories dropped dramatically. However, from the 1980’s on, new discoveries and theories have led to renewed interest in scalar-tensor gravity.

In this lecture I will go back to the Bergmann-Einstein paper, explain why they drew back, and discuss the epistemological status of scalar-tensor-type theories in the light of the latest observations.

The Kerr-Newman black hole solution displays many remarkable relationships with the structure of the Dirac electron. The extremely high spin/mass ratio of the electron corresponds to an over-rotating Kerr geometry, whose singularity has to be regularized, forming a soliton source [1]. A principal question arises: where and how can the Dirac equation be hidden inside the soliton bag-like source of the Kerr-Newman spinning particle? In this talk we show that the soliton represents a domain-wall bubble confining the Higgs field in a false vacuum state. The Kerr theorem creates two massless spinor fields (Weyl spinors) which interact via a Yukawa coupling and turn into a Dirac field, acquiring a mass term from the Higgs field [3].
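The “extremely high spin/mass ratio” can be made quantitative. As a worked estimate (standard constants, not taken from the abstract): for the electron, the Kerr rotation parameter vastly exceeds the gravitational radius,

```latex
a \;=\; \frac{J}{m_e c} \;=\; \frac{\hbar}{2 m_e c} \;\approx\; 1.9\times 10^{-13}\,\mathrm{m}
\qquad \gg \qquad
r_g \;=\; \frac{G m_e}{c^{2}} \;\approx\; 6.8\times 10^{-58}\,\mathrm{m},
```

which is precisely the over-rotating regime: the Kerr singularity is not hidden behind a horizon and must be regularized by a soliton source.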

*References*

[1] A. Burinskii, *Regularized Kerr-Newman Solution as a Gravitating Soliton*, J. Phys. A: Math. Theor. **43**, 392001 (2010) [arXiv:1003.2928].

[2] A. Burinskii, *What Tells Gravity on the Shape and Size of an Electron*, Physics of Particles and Nuclei **45** (1), 202 (2014). DOI:10.1134/S106377961401016X [arXiv:1212.2920].

[3] A. Burinskii, *Kerr-Newman Electron as Spinning Soliton*, in: Proceedings of the “Advanced School on Quantum Foundations and Open Quantum Systems”, to be published in Int. J. Mod. Phys. A.

I review mathematical structures of the De Donder-Weyl (DW) Hamiltonian theory for fields, which does not require a space-time decomposition. These structures (the polysymplectic form, the Poisson-Gerstenhaber brackets of differential forms representing dynamical variables, the bracket form of the DW Hamiltonian field equations, and the fundamental brackets) allow us to formulate a generalization of canonical quantization within the DW framework. Because the structures of DW Hamiltonian theory are intermediate between the Lagrangian level and the canonical Hamiltonian level, requiring neither a splitting into space and time nor the infinite-dimensional spaces of field configurations, the corresponding approach is called precanonical quantization. I show how the quantization of the fundamental brackets leads to a quantum-theoretic formalism where the wave functions and operators are space-time Clifford-algebra valued, and the role of the Schrödinger equation is played by a Dirac-like equation with the operator of the DW Hamiltonian function replacing the mass term. I briefly show how this approach allows one to reproduce a few simple results of quantum field theory, and how standard quantum field theory appears as a limiting case when the elementary volume parameter, which appears in precanonical quantization, tends to zero (more details will be given in the talk at the session on mathematical physics). After this review, I will show how precanonical quantization works for the quantization of gravity in the first-order vielbein and spin-connection formalism. This will include a generalization of DW Hamiltonian theory to the singular case, its quantization, the formulation of the generalized Schrödinger equation for the wave function on the bundle of spin-connection coefficients over space-time, and the definition of the appropriate Hilbert space.
In this framework quantum space-time is described by the transition amplitudes between values of the spin connection at different points of space-time, which can be determined from the equations of the theory. I will also discuss simple applications of these results in the context of quantum cosmology.
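As a schematic illustration (the precise form and conventions are those of refs. [1,3]; the rendering below is a sketch, not a quotation), the Dirac-like equation of precanonical quantization may be written

```latex
i\hbar\kappa\,\gamma^{\mu}\partial_{\mu}\Psi \;=\; \widehat{H}\,\Psi ,
```

where $\Psi$ is the Clifford-algebra-valued wave function, $\widehat{H}$ is the operator of the DW Hamiltonian function, and $\kappa$ has the dimension of inverse spatial volume; the limit in which the elementary volume $1/\kappa$ tends to zero recovers standard quantum field theory.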

*References*

[1] I. Kanatchikov, *De Donder-Weyl Hamiltonian formulation and precanonical quantization of vielbein gravity*, J. Phys.: Conf. Ser. **442** (2013) 012041, arXiv:1302.2610.

[2] I. Kanatchikov, *Precanonical quantization: from foundations to quantum gravity*, in preparation.

[3] I.V. Kanatchikov, *On the precanonical structure of the Schrödinger wave functional*, arXiv:1312.4518; *Precanonical quantization and the Schrödinger wave functional revisited*, arXiv:1112.5801, to appear in Adv. Theor. Math. Phys.

[4] I. Kanatchikov, *On a generalization of the Dirac bracket in the De Donder-Weyl Hamiltonian formalism,* arXiv:0807.3127.

[5] I. Kanatchikov, *Geometric (pre)quantization in the polysymplectic approach to field theory*, hep-th/0112263.

An examination of the literature on the large-scale astrophysical limits on the photon mass inspires a critical attitude and prompts one to question whether these limits are anything more than the outcome of idealized models. This view is supported by Goldhaber and Nieto [1], who state: “Quoted photon-mass limits have at times been overly optimistic in the strengths of their characterizations. This is perhaps due to the temptation to assert too strongly something one `knows' to be true. A look at the summary of the Particle Data Group [...] hints at this.”

We use spacecraft data in the solar wind at 1 AU to estimate an upper limit on the mass of the de Broglie-Proca photon by looking for deviations from Ampère's law. We take advantage of the Cluster spacecraft, which both allow a direct computation from simultaneous four-point measurements of the magnetic field and provide measurements of particle currents. We estimate the upper bound for the mass without using any *ad hoc* model. Finally, we discuss how this limit can be lowered and compare with currently accepted values in the solar wind.
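For reference (a standard result of Proca electrodynamics, not a quotation from the abstract): a nonzero photon mass $m_\gamma$ modifies Ampère's law with a term proportional to the vector potential,

```latex
\nabla\times\mathbf{B} \;=\; \mu_{0}\,\mathbf{j}
\;+\; \frac{1}{c^{2}}\frac{\partial \mathbf{E}}{\partial t}
\;-\; \left(\frac{m_{\gamma} c}{\hbar}\right)^{\!2}\mathbf{A},
```

so that any residual between the four-point estimate of the curl of the magnetic field and the measured particle currents translates into an upper bound on $m_\gamma$.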

*References*

[1] Goldhaber and Nieto, Rev. Mod. Phys. 82, 939 (2010)

[2] A. Retinò, A. D. A. M. Spallicci and A. Vaivads, arXiv:1302.6168v2 [hep-ph] (2014).

It was argued earlier that the Dodson-Zeeman fuzzy topology (FT) represents a possible mathematical basis for quantum space-time structure. Here the quantization formalism related to it will be described.

As an example, the quantization of massive particles is considered, and it is shown that coordinate uncertainty is generic in FT. The fundamental set of FT is a partially ordered set (poset), so that some pairs of its elements, in place of the standard ordering relation, can obey an incomparability relation. For illustration, consider a discrete poset which includes a subset of mutually incomparable elements and a maximal totally ordered subset, with indices growing according to their ordering. If some element is incomparable with all elements of an interval of the ordered subset, it is “smeared” over that interval, which is the analogue of coordinate uncertainty if the ordered subset is regarded as a “coordinate axis”. Analogously, a one-dimensional model Universe corresponds to a poset containing the massive-particle subset and a continuous ordered subset describing one-dimensional Euclidean geometry; a particle element incomparable with all points of an interval possesses a coordinate uncertainty of the order of that interval. To detail these characteristics, a corresponding normalized fuzzy weight is introduced, whose value indicates where on the axis the particle is mainly concentrated. In this framework this corresponds to the formal definitions of a fuzzy point and a fuzzy set.
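A minimal sketch of the fuzzy-weight construction just described (the notation is ours, chosen for illustration): a fuzzy point smeared over the discretized coordinate axis $\{x_1,\dots,x_N\}$ carries a normalized weight

```latex
w_i \ge 0, \qquad \sum_{i=1}^{N} w_i = 1, \qquad
\langle x \rangle \;=\; \sum_{i=1}^{N} x_i\, w_i ,
```

so that the spread of $w_i$ over an interval plays the role of the coordinate uncertainty of the fuzzy point.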

In this approach a massive particle can be described as an evolving fuzzy point. It is shown that the corresponding normalized density evolves according to the flow continuity equation, with a local flow velocity. The independent parameters which characterize the state can be unambiguously mapped to a normalized complex function. Assuming space-time shift invariance, it is proved that the evolution obeys the Schrödinger equation for arbitrary mass; the theory can also be extended to the higher-dimensional case. It is proved also that in the relativistic case the evolution is described by the Dirac equation for spin 1/2. Particle interactions on such a fuzzy manifold are shown to be gauge invariant, and the interactions of fermion multiplets are mediated by Yang-Mills fields.

*References*

[1] Dodson, C.J.T. (1974), *Bull. London Math. Soc.* **6**, 191

[2] Mayburov, S. (2008), *J. Phys. A* **41**, 164071

[3] Bandemer, H. and Gottwald, S. (1995), *Fuzzy Sets, Fuzzy Logic, Fuzzy Methods with Applications* (Wiley, New York)

[4] Mayburov, S. (2012), *Phys. Part. Nucl.* **43**, 465; arXiv:1205.3019 [hep-th]

[5] Mayburov S. (2010) *Int. J. Theor. Phys.* **49**, 3192

Knowledge of the high-velocity tail of the WIMP velocity distribution has a strong impact on the way direct detection (DD) may constrain or discover light WIMPs in the GeV mass range. Recently, there have been important observational efforts to measure the so-called Galactic escape speed at the position of the Earth, for instance the analysis released in late 2013 by the RAVE Collaboration. We revisit the estimate of the astrophysical uncertainties on DD limits and sensitivity from this perspective. We show that a treatment that consistently correlates all relevant parameters (local circular velocity, dark matter density, distance to the Galactic center, escape speed) leads to significantly reduced systematic errors in DD sensitivity. We go beyond the Maxwellian velocity approximation generally used to compute DD limits by means of ergodic distribution functions, in which the previously mentioned correlations are automatic and result from a minimal set of input parameters. Finally, we test our semi-analytical model against cosmological simulations and discuss its relevance. We show that it provides a consistent and simple way to derive meaningful astrophysical uncertainties.
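For concreteness, the Maxwellian approximation referred to above is usually implemented as a truncated distribution; one common smooth form (with $v_0$ the local circular velocity and $v_{\rm esc}$ the escape speed, both assumed here for illustration) is

```latex
f(\mathbf{v}) \;\propto\;
\left[\, e^{-v^{2}/v_0^{2}} \;-\; e^{-v_{\rm esc}^{2}/v_0^{2}} \right]
\Theta\!\left(v_{\rm esc} - v\right),
```

whereas the ergodic distribution functions used by the authors correlate $v_0$, $v_{\rm esc}$, the local density and the distance to the Galactic center automatically.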

The first results of the AMS experiment on board the International Space Station, corresponding to the first two years of data taking, will be presented. A measurement of the positron fraction in primary cosmic rays in the energy range from 0.5 GeV upwards, as well as the combined and individual electron and positron spectra, will be expounded. The main features of these analyses will be explained and some perspectives on the future of these measurements will be given.

We study three high-resolution cosmological hydrodynamical simulations of Milky Way-sized halos, including a comparison with the corresponding DM-only runs, performed with the adaptive mesh refinement code RAMSES. We analyse the stellar and gas distributions and find that one of our simulated galaxies shows interesting Milky Way-like features with regard to several observational tests. Thanks to consistently tuned star formation and supernova feedback, we obtain an extended disk and a flat rotation curve, with the circular velocity and the dark matter density in the solar neighbourhood in agreement with observations. With a careful look at the derivation of the stellar-to-halo mass ratio, we also obtain competitive values for this criterion. Concerning the dark matter distribution, we explicitly show the interaction with the baryons: the dark matter is first contracted by star formation and then cored by feedback processes. In this consistent framework, we also study dark matter detection phenomenology, which will be the subject of subsequent papers.

*References*

[1] P. Mollitor, E. Nezri and R. Teyssier, *Baryonic and dark matter distribution in cosmological simulations of spiral galaxies*, http://arxiv.org/abs/1405.4318

The properties of clusters are known to provide critical information of cosmological relevance. I will discuss the various observational probes of clusters and how physical models of clusters can be built based on simple scaling laws. I will show how the standard predictions of these models within the concordance picture conflict with present-day observations, leading to the *cluster conundrum*: the X-ray temperature distribution of clusters at different redshifts conflicts with standard scaling expectations, and SZ cluster counts as obtained by Planck lead to cosmological parameters in tension with those obtained from the CMB fluctuations. Solutions to evade this problem will be discussed, as well as ways to progress on this issue.
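For orientation, the simple scaling laws alluded to are the self-similar relations; for the mass-temperature relation one commonly writes (a standard form, not taken from the talk)

```latex
k_{B}T \;\propto\; \left[\,M\,E(z)\,\right]^{2/3},
\qquad E(z) \;\equiv\; \frac{H(z)}{H_{0}},
```

so that the observed temperature distribution of clusters at different redshifts directly tests the growth history predicted by the concordance model.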

*References*

[1] Blanchard, A., *Evidence for the fifth element. Astrophysical status of dark energy*, A&A Rev. **18**, 595 (2010)

[2] Blanchard, A. & Douspis, M., *Evidence for new physics from clusters?*, A&A **436**, 411 (2005)

[3] Linder, E. V., *Cosmology with X-ray cluster baryons*, JCAP **4**, 4 (2007)

I will review how, with the caustic technique, we can measure the mass profiles of galaxy clusters out to their infall regions, where dynamical equilibrium does not necessarily hold. I will then show how we can use the caustic technique to directly measure the mass accretion rate of clusters, a key prediction of structure formation models. I will present the first measurements of the accretion rate of clusters, extracted from the CIRS and HeCS catalogues.

Clusters are known to be a powerful probe of cosmology; however, their use is also known to be limited by the degeneracy that exists between their mass and their temperature. Here we show that using the CMB temperature fluctuations (as measured by Planck) in conjunction with the temperature distribution function of local clusters allows one to break this degeneracy for a specified model (such as the concordance LCDM). The consistency of the aforementioned model can then be tested on the baryon fraction and on Sunyaev-Zel'dovich scaling laws.

The latest LHC results on top-quark production are presented. With a focus on the ATLAS and CMS experiments, I will present the top-quark production cross-section in the different channels, as well as the latest results on single-top production, giving the state-of-the-art measurements. I also include differential cross-sections, as well as top-quark pair production in association with W/Z bosons.

The top quark mass is the best known quark mass. This talk will be a quick review of the measurements of the top mass at the four main collider experiments: CDF and DZERO at the Tevatron accelerator located at Fermilab, and ATLAS and CMS at the Large Hadron Collider located at CERN. Besides the results, a brief survey of techniques and future prospects will be given.

An overview of the charge asymmetry in top-antitop quark production at hadron colliders is presented. I discuss the origin of the charge asymmetry in the Standard Model and potential contributions from new physics [1,2], in the light of the enhanced asymmetry observed by the CDF collaboration at the Tevatron. A measurement of the charge asymmetry at the LHC requires tailored observables which cope with the large symmetric background [3].

*References*

[1] M. Bauer, F. Goertz, U. Haisch, T. Pfoh and S. Westhoff, *Top-Quark Forward-Backward Asymmetry in Randall-Sundrum Models Beyond the Leading Order*, JHEP **1011**, 039 (2010) [arXiv:1008.0742 [hep-ph]].

[2] U. Haisch and S. Westhoff, *Massive Color-Octet Bosons: Bounds on Effects in Top-Quark Pair Production*, JHEP **1108**, 088 (2011) [arXiv:1106.0529 [hep-ph]].

[3] S. Berge and S. Westhoff, *Top-Quark Charge Asymmetry Goes Forward: Two New Observables for Hadron Colliders*, JHEP **1307**, 179 (2013) [arXiv:1305.3272 [hep-ph]].

The spin foam model is based on a BF-type action restricted by the simplicity constraint. However, the solutions to the simplicity constraint fall into five different sectors. In [1,2] it was argued that a certain mixing of these sectors, together with the freedom of choosing a tetrad orientation, generates undesired terms in the asymptotics of the Euclidean EPRL model, which can be cured by an additional constraint. Here we show that this is also the case for the Lorentzian model. This is joint work with J. Engle.

*References*

[1] J. Engle, *A spin-foam vertex amplitude with the correct semiclassical limit*, Phys. Lett. B **724** (2013) 333-337.

[2] J. Engle, *A proposed proper EPRL vertex amplitude*, Phys. Rev. D **87** (2012) 084048.

Starting from the guiding principles of spacetime locality and operationalism, we consider a general framework for a probabilistic description of nature. Crucially, no notion of time or metric is assumed, nor any specific physical model. Remarkably, the emerging framework converges with a recently proposed formulation of quantum theory [1], obtained constructively from known quantum physics.

*References*

[1] R. Oeckl, *A positive formalism for quantum theory in the general boundary formulation*, Found. Phys. 43 (2013) 1206–1232 [arXiv:1212.5571].

The non-commutative torus is commonly described as a cocycle quantization of the group algebra of an abelian group. In the first part of the talk I will explain how, using the WBZ transform of solid state physics, finitely generated projective modules over the noncommutative torus can be interpreted as deformations of vector bundles on elliptic curves by the action of a cocycle, provided that the deformation parameter of the NC torus and the modular parameter of the elliptic curve satisfy a non-trivial relation.
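To fix notation (a standard definition, recalled here for the reader): the deformation parameter $\theta$ of the noncommutative torus is the one entering its defining relation, the algebra being generated by two unitaries $U$, $V$ with

```latex
U\,V \;=\; e^{2\pi i \theta}\, V\,U .
```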

I will then discuss the relation between (formal) deformations of vector bundles on the torus and cochain twists based on the Lie algebra of the Heisenberg group.

Based on a joint work with G. Fiore and D. Franco.

Various aspects of fundamental physics – like renormalization in quantum field theory or some considerations about quantum gravity – point towards the existence of a cut-off in either momentum or position space. As a consequence, usual geometrical notions such as points and geodesic distance lose their meaning. We will show how noncommutative geometry provides interesting tools to study the geometry of such “cut-off spaces”, including topological and metric aspects.

The standard “gauge principle” or “gauge argument” produces an exact potential and hence a vanishing field. Weyl [1,2] has his own gauge argument, which is sketchy, archaic and hard to follow; but at least it produces a curved potential and a nonvanishing field. I attempt a reconstruction.
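In modern differential-form notation the first claim is immediate: if the gauge argument only ever produces a pure-gauge (exact) potential, the field strength vanishes identically,

```latex
A \;=\; d\lambda \quad\Longrightarrow\quad F \;=\; dA \;=\; d\,d\lambda \;=\; 0 ,
```

whereas Weyl's version is meant to deliver a potential $A$ that is not exact, hence $F = dA \neq 0$.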

*References*

[1] Weyl, H. (1929), “Elektron und Gravitation”, *Zeitschrift für Physik* **56**, 330-352

[2] Weyl, H. (1929), “Gravitation and the electron”, *The Rice Institute Pamphlet* **16**, 280-295

We propose a group-theoretical interpretation of the fact that the transition from classical to quantum mechanics entails a reduction in the number of observables needed to define a physical state (e.g. from *q and p* to *q or p* in the simplest case). We argue that, in analogy to gauge theories, such a reduction results from the action of a symmetry group. To do so, we propose a philosophically oriented analysis of formal tools coming from symplectic geometry and group representation theory, notably Souriau's *moment map*, the *Marsden-Weinstein symplectic reduction*, the *symplectic “category”* introduced by Weinstein, and the conjecture (proposed by Guillemin and Sternberg) according to which “quantization commutes with reduction”. By considering the case of an abelian Hamiltonian action on a symplectic manifold, we argue that *phase invariance in quantum mechanics and gauge invariance have a common geometric underpinning, namely the symplectic reduction formalism*. This stance points towards a gauge-theoretical interpretation of the Heisenberg indeterminacy principle. We revisit (the extreme cases of) this principle in the light of the difference between the *set-theoretic points* of a phase space and its category-theoretic *symplectic points*.
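The dimension count behind the “reduction in the number of observables” can be recalled explicitly. For a free Hamiltonian action of a group $G$ on a symplectic manifold $(M,\omega)$ with moment map $\mu$, the Marsden-Weinstein reduced space is

```latex
M_{\mathrm{red}} \;=\; \mu^{-1}(0)/G, \qquad
\dim M_{\mathrm{red}} \;=\; \dim M \;-\; 2\dim G ,
```

losing *two* dimensions per symmetry generator: one by restricting to the constraint surface $\mu^{-1}(0)$ (e.g. fixing a momentum), and one by quotienting along the group orbits (e.g. discarding the conjugate coordinate).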

*References*

[1] Abraham, R. and Marsden, J.E.: Foundations of Mechanics (2nd edition). Addison-Wesley Publishing Company (1978).

[2] Bursztyn H., Weinstein, A.: Poisson geometry and Morita equivalence. ArXiv:math/0402347 [math.SG] (2004).

[3] Catren G.: Quantum Foundations in the Light of Gauge Theories. To appear in C. de Ronde, S. Aerts and D. Aerts (eds.), Probing the Meaning of Quantum Mechanics: Physical, Philosophical, and Logical Perspectives, World Scientific (2014).

[4] Catren G., *Can Classical Description of Physical Reality be Considered Complete?* In M. Bitbol, P. Kerszberg and J. Petitot (eds.), *Constituting Objectivity: Transcendental Approaches of Modern Physics*, The Western Ontario Series in the Philosophy of Science, Vol. 74, Springer-Verlag, Berlin, 375-386 (2009).

[5] Catren G., *On Classical and Quantum Objectivity*, Foundations of Physics, 38, 470-487 (2008).

[6] Dirac P.M., *Lectures on Quantum Mechanics*. Dover Publications, New York (1964).

[7] Guillemin V., Sternberg S., *Geometric Quantization and Multiplicities of Group Representations*, Invent. math., **67**, 515-538 (1982).

[8] Henneaux M., Teitelboim, C. *Quantization of gauge systems*, Princeton University Press, New Jersey (1994).

[9] A.A. Kirillov, *Lectures on the Orbit Method*, Graduate Studies in Mathematics, Vol. 64, AMS (2004).

[10] Kostant B., Sternberg S., *Symplectic Reduction, BRS Cohomology and Infinite Dimensional Clifford Algebras*, Annals of Physics 176, 49-113 (1987).

[11] Landsman N.P., *Mathematical Topics Between Classical and Quantum Mechanics*, Springer Monographs in Mathematics, Springer-Verlag, New York (1998).

[12] J.E. Marsden, T.S. Ratiu, *Introduction to Mechanics and Symmetry*, 2nd edition. Springer-Verlag, New York (1999).

[13] J.E. Marsden, A. Weinstein, *Reduction of Symplectic Manifolds with Symmetry*, Reports on Mathematical Physics 5, 121–130 (1974).

[14] J.-M. Souriau, *Structure of Dynamical Systems. A Symplectic View of Physics*, Birkhäuser Boston (1997).

[15] Weinstein A., *Symplectic Geometry*, Bulletin (New Series) of the American Mathematical Society, Vol.5, N1 (1981).

[16] Xu, P.: Classical Intertwiner Space and Quantization. Commun. Math. Phys. 164, 473–488 (1994).

[17] Xu, P.: Morita equivalence of Poisson manifolds. Commun. Math. Phys. 142, 493–509 (1991).

The anticommutation of Fermionic fields raises the problem of simulating the evolution of Fermionic systems by means of commuting quantum systems, say qubits. We tackle [1,2] the issue of retaining the locality of Fermionic operations, considering local Fermionic modes as the elementary systems of an operational probabilistic theory. The locality of Fermionic operations (namely, that operations on systems that are not causally connected must commute) implies the parity superselection rule, which inhibits the superposition of states with an even and an odd number of excitations. As a result the Fermionic theory lacks two distinctive traits of quantum theory: local tomography and monogamy of entanglement.
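The parity superselection rule can be stated compactly. With $N$ the total number operator of the local Fermionic modes and $P=(-1)^{N}$ the parity, admissible states commute with $P$ and are therefore block-diagonal in the parity sectors,

```latex
P \;=\; (-1)^{N}, \qquad
\rho \;=\; \rho_{\mathrm{even}} \oplus \rho_{\mathrm{odd}},
```

so a coherent superposition $\alpha\,|\mathrm{even}\rangle + \beta\,|\mathrm{odd}\rangle$ is forbidden as a physical pure state; this constraint lies behind the failure of local tomography and of entanglement monogamy described above.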

*References*

[1] D'Ariano, Giacomo Mauro, Franco Manessi, Paolo Perinotti, and Alessandro Tosini, *The Feynman problem and Fermionic entanglement: Fermionic theory versus qubit theory*, accepted in Int. J. Mod. Phys. A, arXiv preprint arXiv:1403.2674 (2014).

[2] D'Ariano, Giacomo Mauro, Franco Manessi, Paolo Perinotti, and Alessandro Tosini, *Fermionic computation is non-local tomographic and violates monogamy of entanglement*, arXiv preprint arXiv:1307.7902 (2013).

This talk is based on [1,2]. We propose a very simple reformulation of General Relativity which completely sequesters from gravity *all* of the vacuum energy from the matter sector, including all loop corrections, and renders all contributions from phase transitions automatically small. The idea is to make the dimensional parameters in the matter sector functionals of the 4-volume element of the universe. For them to be nonzero, the universe should be finite in spacetime. If this matter is the Standard Model of particle physics, our mechanism prevents any of its vacuum energy, classical or quantum, from sourcing the curvature of the universe. The mechanism is consistent with the large hierarchy between the Planck scale, the electroweak scale and the curvature scale, and with early universe cosmology, including inflation. Consequences of our proposal are that the vacuum curvature of an old and large universe is not zero but very small and transient, and that the universe will collapse in the future.

*References*

[1] N. Kaloper and A. Padilla, *Sequestering the Standard Model Vacuum Energy*, Phys. Rev. Lett. **112** (2014) 091304 [arXiv:1309.6562 [hep-th]].

[2] N. Kaloper and A. Padilla, *Sequestering the Standard Model Vacuum Energy*, to appear.

Sketched out in 1992, selected by ESA in 1996, launched in 2009, Planck delivered on March 21st its first full-sky maps of the millimetric emission at 9 frequencies, as well as the products which follow from them, and in particular the Planck map of the anisotropies of the Cosmic Microwave Background (CMB). The latter displays minuscule variations, as a function of the observing direction, of the temperature of the fossil radiation around its mean temperature of 2.725 K. I will briefly describe how these high-resolution maps, with a precision of a few parts per million, have been obtained, from the collection to the analysis of the first 500 billion samples of our HFI instrument.

CMB anisotropies reveal the imprint of the primordial fluctuations which initiated the growth of the large-scale structures of the Universe, as transformed by their evolution, in particular during the first 370,000 years, i.e. until the Universe became transparent and the image we record today was formed. The statistical characteristics of these anisotropies allow constraining jointly the physics of the creation of the primordial fluctuations and that of their evolution. They teach us the possible values of the parameters of the models which we confront with data. I will describe Planck estimates of the density of the constituents of the Universe (usual matter, cold dark matter or CDM, dark energy...), and their implications in terms of derived quantities like the expansion rate or the spatial curvature. I will review what we have learnt on the generation of the fluctuations, and will discuss extensions of the standard cosmological model, so-called “Lambda-CDM”, both in terms of non-minimal physical models – multi-field inflation for instance – and of additional constituents – like cosmic strings or a fourth neutrino.

Finally, I will briefly describe other promising results on the matter distribution through which the CMB image travelled on its long 13.7-billion-year trip towards us. I will mention in particular what we can learn about the dark matter distribution, which is detected through its distorting effect on the CMB image by gravitational lensing, and about hot gas, which is revealed by the spectral distortion it induces.

The two main messages from the LHC, after its first phase, are the discovery of the Higgs-like particle and no evidence for any BSM physics. This stunning, continuous success of the SM up to mass scales of order O(1 TeV) is very puzzling. Although, with the discovery of the Higgs particle, the SM is a consistent theory that can be extrapolated up to the Planck scale, it leaves unanswered several well-known experimental and theoretical questions. In particular, the naturalness of the weak scale as the guiding principle for BSM physics is now somewhat challenged. From a historical perspective, the concept of naturalness in particle physics is a crucial issue and it should not be abandoned too quickly. After the lessons from the LHC, supersymmetry still remains the leading candidate for BSM physics. Other BSM scenarios and the near-term experimental prospects for discovering supersymmetric or non-supersymmetric BSM physics will also be briefly reviewed.

Weak gravitational lensing by inhomogeneities along the line of sight alters the shapes, sizes and fluxes of distant sources such as galaxies, and distorts the pattern of continuous fields such as the microwave background radiation. In this talk I will review this relatively young scientific field, with particular emphasis on its power for studying dark energy and modified gravity. I will discuss the challenges it faces, summarize results from recent survey analyses, and finally consider the prospects for the future.

As microwave background photons propagate from the surface of last scattering to our telescopes, they are affected by four distinct processes in the low-redshift universe: gravitational lensing, the thermal Sunyaev-Zel'dovich (tSZ) effect, the kinematic Sunyaev-Zel'dovich (kSZ) effect and the integrated Sachs-Wolfe (ISW) effect. This talk will focus on the kSZ and ISW effects. I will discuss the cross-correlations between the large-scale distribution of galaxies and these two effects and show how current and future measurements can be used to probe the growth rate of structure and gravitational physics on large scales.

After more than fifteen years, the discovery that the Universe is accelerating emerges as one of the turning points in the history of cosmology, as witnessed by the 2011 Nobel Prize in Physics to Perlmutter, Riess and Schmidt. Yet, the origin of the accelerated expansion is a mystery. One possibility is that the Universe is permeated by a “dark energy” producing a kind of gravitational repulsion. Alternatively, perhaps the very equations of General Relativity need to be modified or generalized to higher-dimensional worlds.

Galaxy redshift surveys are one of the experimental pillars that contributed significantly to building this overall scenario, and even larger projects are ongoing or planned with the goal of understanding the nature of cosmic acceleration. In my talk I will review this situation and show how redshift surveys allow us to possibly break the degeneracy between dark energy and modified gravity by measuring both the expansion rate and the growth rate of structures. I will present recent examples, including results from the new VIPERS survey at the ESO Very Large Telescope. I will then discuss status and plans for the ultimate dark-energy experiment, the ESA satellite Euclid, which is due to launch in 2020. Euclid promises to reach percent accuracies on the measurement of cosmological parameters, with unprecedented control of systematic effects.

The theoretical and phenomenological status of neutrino physics is reviewed as well as the cosmological implications for dark matter, inflation and the baryon asymmetry.

The top quark is the only fermion whose mass resides at the electroweak scale. Its role in the SM and in models of new physics, together with its rich phenomenology, provides a unique opportunity for exploring the TeV scale. In this talk I review the status of top-quark measurements after Run 1 of the LHC and the exciting opportunities ahead, in light of the possibility that the top quark is a portal to new physics.

The IceCube project has transformed one cubic kilometer of natural Antarctic ice into a neutrino detector. The instrument detects 100,000 neutrinos per year in the GeV to PeV energy range. Among those, we have recently isolated a flux of high-energy cosmic neutrinos. I will discuss the instrument, the analysis of the data, and the significance of the discovery of cosmic neutrinos.

We exploit the recently found exact solution of the quantum constraints of loop quantum gravity in vacuum with spherical symmetry to analyze a quantum field theory living on the quantum spacetime. The main effect of the quantum background is to lead to field equations that are discrete for the quantum field theory. The Hartle-Hawking, Unruh and Boulware vacua are all recovered with small modifications, but the discrete structure eliminates all infinities associated with physical quantities computed on the vacuum. We also briefly address the issue of Lorentz invariance and the emergence of limitations on the matter content of the theory.

*References*

[1] R. Gambini, J. Pullin, arXiv:1312.3595, to appear in Class. Quant. Grav.

We briefly comment on the parallel between graphene and high-energy fermions and explore the possibility of using the former as a test bed for the latter, much as Reynolds numbers are used in wind-tunnel testing. We also point out that there are parallels to quantum gravity approaches, which indeed provide a novel explanation for effects such as the FQAE.

I will present the current status of the global neutrino data analysis, pointing out its unique role in constraining the two crucial (still) unknown parameters: the CP-violating phase and the octant. In this context, I will discuss the slight overall preference for the first octant and for non-zero CP violation. The (in)stability of such intriguing indications across different combinations of data sets will be considered in detail. Finally, I will discuss the robustness of the 3-flavor analysis in the presence of new light sterile neutrinos, whose existence is suggested by very-short-baseline experiments and cosmological data.

The neutrino oscillation phenomenon is now a fully established manifestation both of the non-trivial mixing in the leptonic sector, characterised by the PMNS matrix (a 3×3 matrix in the three-neutrino scenario), and of the non-degenerate mass spectrum of neutrinos. As a consequence, we know that mixing is part of the phenomenology of both quarks and leptons, including CP violation, although their mixing patterns have so far been measured to be very different and the possible physics behind them remains unknown. Evidence for neutrino CP violation has not yet been observed, due to lack of experimental sensitivity. Also, thanks to the observation of neutrino oscillations, neutrinos are known to be massive. The absolute mass value is still unknown, being much lower than that of all other known fermions. In my talk, I will briefly review the experimental status of our knowledge as of the Neutrino 2014 conference (June 2014, Boston, USA). Improvements expected from foreseen experiments will also be highlighted.

I will review the status of thermal leptogenesis and how this, in combination with low-energy neutrino data, can be used as a powerful tool to probe models of new physics. I will discuss how the possibility of realising independence of the initial conditions (strong thermal leptogenesis) relies on a special right-handed neutrino mass pattern and requires quite special conditions, in particular a deviation of the light neutrino masses from the hierarchical limit that can be tested in the coming years with cosmological observations. Intriguingly, the set of conditions for strong thermal leptogenesis can be realised within SO(10)-inspired leptogenesis, and in this case one obtains a set of definite predictions for future low-energy neutrino measurements. Interestingly, this solution requires a non-vanishing reactor mixing angle, now robustly established in reactor neutrino experiments, and a negative neutrino-oscillation CP-violation invariant, as hinted by recent neutrino oscillation experimental results.

The observation of neutrinoless double beta decay would prove that the neutrino is a Majorana particle, identical to its own antiparticle, and would correspond to a process violating lepton number. In this talk, I will review the various experiments searching for neutrinoless double beta decay.

We find imprints of galactic structures, viz. the 'radio loops' [1], in the derived maps of the cosmic microwave background, even at microwave frequencies where emission by dust dominates [2]. The emission mechanism may be magnetic dipole radiation from dust grains enriched by metallic iron, or from ferrimagnetic molecules [3]. This new foreground is present at high galactic latitudes, in particular in the BICEP2 observational window, and potentially dominates over the expected B-mode polarisation signal due to primordial gravitational waves from inflation.

*References*

[1] P. Mertsch and S. Sarkar, *Loops and spurs: The angular power spectrum of the Galactic synchrotron background*, JCAP **1306** (2013) 041; arXiv:1304.1078 [astro-ph.GA].

[2] H. Liu, P. Mertsch and S. Sarkar, *Fingerprints of Galactic Loop I on the Cosmic Microwave Background*, submitted to Astrophys. J. Lett.; arXiv:1404.1899 [astro-ph.CO].

[3] B.T. Draine and A. Lazarian, *Magnetic dipole microwave emission from dust grains*, Astrophys. J. **512** (1999) 740-754; arXiv:astro-ph/9807009.

We will discuss the role of symmetries in three different areas of large scale structure:

1. how to test the equivalence principle using black holes in the centers of galaxies;

2. how to measure gravitational redshifts using parity-violating signatures in correlation functions;

3. how spontaneously broken symmetries give us fully non-perturbative relations between n-point and (n+1)-point functions.

The galaxy clustering ratio [1,2] is a new clustering statistic that provides access to characteristic parameters of the power spectrum of mass density fluctuations without the need to specify the galaxy biasing function or a model for peculiar velocity distortions. I will demonstrate the method using galaxy simulations as well as real redshift data (from the VIPERS and SDSS surveys) and show that it allows us to constrain the reduced matter density to a precision of nearly 6%. I will argue that this approach will be instrumental in searching for evidence of new physics beyond the standard model of cosmology and in planning future redshift surveys such as Euclid.

*References*

[1] J. Bel, C. Marinoni, 2014, A&A, 563, 36

[2] J. Bel, C. Marinoni, B. Granett et al. (the VIPERS collaboration), 2014, A&A 563, 37

In this talk I will show how local measurements of the Hubble expansion rate are affected by structures like galaxy clusters or voids. In particular, I will present a fully relativistic treatment of this effect, studying how clustering modifies the mean distance (modulus)-redshift relation and its dispersion in a standard ΛCDM universe. Our finding is that cosmic variance (i.e. the effect of local structure) is, for supernova observations at small redshifts, of the same order of magnitude as the current observational errors. This cosmic variance has to be added to the observational uncertainties, and it reduces the tension with the CMB measurement of the Hubble parameter.

*References*

[1] I. Ben-Dayan, R. Durrer, G. Marozzi and D.J. Schwarz, arXiv:1401.7973 [astro-ph.CO].

We constrain several models of the early Universe that predict statistical anisotropy of the cosmic microwave background (CMB) sky. We make use of WMAP9 and Planck maps deconvolved to account for beam asymmetries. As compared to previous releases of CMB data, they do not exhibit the anomalously large quadrupole of statistical anisotropy. This allows us to strengthen the limits on the parameters of models established earlier in the literature. Among others, we discuss constraints on the class of models with a Maxwellian term non-minimally coupled to the inflaton, and on a number of alternatives to inflation, i.e., Galilean genesis and the conformal rolling scenario. The talk is based on the results obtained in the papers [1,2,3].

*References*

[1] S.R. Ramazanov and G. I. Rubtsov, *Statistical anisotropy of CMB as a probe of conformal rolling scenario*, JCAP **1205** (2012) 033, arXiv:1202.4357 [astro-ph.CO].

[2] S. R. Ramazanov and G. I. Rubtsov, *Constraining anisotropic models of the early Universe with WMAP9 data*, Phys. Rev. D **89** (2014) 043517, arXiv:1311.3272 [astro-ph.CO].

[3] G. Rubtsov, S. Ramazanov, F. Urban, M. Thorsrud, *Probing primordial statistical anisotropy with Planck data*, *work in progress*.

Loop quantum cosmology (a symmetry-reduced quantum model of the Universe inspired by loop quantum gravity) extends the inflationary paradigm to the Planck era: the big bang singularity is replaced by a quantum bounce naturally followed by inflation. Testing these models requires computing the amount of cosmological perturbations produced in this quantum background and subsequently deriving their footprints on the cosmic microwave background. I propose to review two theoretical approaches to cosmological perturbations in a quantum background [1,2], making their respective assumptions and methodologies as explicit as possible. I will then show the observational consequences of these treatments, focusing on the specific case of the cosmic microwave background anisotropies as a probe of the primordial Universe [3].

*References*

[1] A. Barrau, M. Bojowald, G. Calcagni, J. Grain & M. Kagan, *Anomaly-free cosmological perturbations in effective canonical quantum gravity*, arXiv:1404.1018.

[2] I. Agullo, A. Ashtekar & W. Nelson, *Extension of the quantum theory of cosmological perturbations to the Planck era*, Phys. Rev. D **87** (2013) 043507.

[3] A. Barrau, T. Cailleteau, J. Grain & J. Mielczarek, *Observational issues in loop quantum cosmology*, Class. Quant. Grav. 31 (2014) 053001.

The matter bounce scenario is an alternative to inflation where scale-invariant perturbations are generated in a dust-dominated contracting space-time. In the context of loop quantum cosmology, the cosmological singularity is generically resolved and replaced by a bounce; furthermore, it is possible to deterministically evolve the perturbations from the contracting branch through the quantum bounce and calculate the form of the perturbations at the onset of the expanding branch. For the matter bounce scenario in loop quantum cosmology, scale-invariant perturbations and a small tensor-to-scalar ratio are predicted [1], in agreement with the latest observations of Planck and BICEP2 [2].

*References*

[1] E. Wilson-Ewing, *The Matter Bounce Scenario in Loop Quantum Cosmology*, JCAP 1303 (2013) 026, arXiv:1211.6269 [gr-qc].

[2] Y.-F. Cai, J. Quintin, E. N. Saridakis and E. Wilson-Ewing, *Nonsingular bouncing cosmologies in light of BICEP2*, arXiv:1404.4364 [astro-ph.CO].

The CMB map provided by the *Planck* project constrains the ratio of tensor-to-scalar perturbations, r, to be small (95% CL). This bound rules out the simplest models of inflation. However, recent data from BICEP2 are in strong tension with this constraint, finding a non-zero value of r that allows these simplest inflationary models to survive. The remarkable fact is that, even though the BICEP2 experiment was conceived to search for evidence of inflation, its data also match theoretical results coming from the matter bounce scenario (the alternative to the inflationary paradigm). More precisely, most bouncing cosmologies do not pass *Planck's* constraints owing to the smallness of their tensor-to-scalar ratio, but with the new BICEP2 data some of them fit the observations well. This is the case for the matter bounce scenario in modified gravity.

This talk is based on [1,2]. Loop quantum cosmology predicts a bounce instead of the big bang. Anisotropies grow in a contracting universe and should therefore not be neglected at the bounce. In this talk I will discuss how the bounce and the subsequent inflation are affected by anisotropies in effective Bianchi-I loop quantum cosmology.

*References*

[1] L. Linsefors and A. Barrau, *Modified Friedmann equation and survey of solutions in effective Bianchi-I loop quantum cosmology*, Class.Quant.Grav. **31** (2014) 015018 [arXiv:1305.4516 [gr-qc]].

[2] L. Linsefors and A. Barrau, *Duration of inflation in effective Bianchi-I loop quantum cosmology*, in preparation.

Classification of finite noncommutative spaces points uniquely to the standard model provided the connection is linear. Relaxing the constraint of linearity allows for the Pati-Salam model of unification of quarks and leptons.

Starting with an algebra, we define a semigroup which extends the group of invertible elements in that algebra. As we will explain, this semigroup describes inner perturbations of noncommutative manifolds, and has applications to gauge theories in physics. We will present some elementary examples of the semigroup associated to matrix algebras, and to (smooth) functions on a manifold. Joint work with Ali Chamseddine and Alain Connes.

*References*

[1] A.H. Chamseddine, A. Connes, and W. D. Van Suijlekom. “Inner fluctuations in noncommutative geometry without the first order condition”, *J. Geom. Phys.* 73 (2013) 222–234.

[2] A.H. Chamseddine, A. Connes, and W. D. van Suijlekom. “Beyond the spectral Standard Model: Emergence of Pati-Salam unification”, *JHEP* 1311 (2013) 132.

Noncommutative geometry à la Connes offers a promising framework for models of fundamental interactions. To guarantee the correct signature, the theory of Lorentzian spectral triples has been developed. I will briefly summarise its main elements and show that it can accommodate a sensible notion of causality, understood as a partial order relation on the space of states on an algebra [1]. For almost-commutative algebras, with a finite-dimensional internal algebra, the space of (pure) states is a simple product of space-time and an internal space. The exploration of causal structures in this context leads to a surprising conclusion [2]: the motion in both space-time and the internal space is restricted by a “finite speed of light” constraint. I will illustrate the latter with two simple toy models.

*References:*

[1] N. Franco, M. Eckstein: *An algebraic formulation of causality for noncommutative geometry*, Classical and Quantum Gravity **30** (2013) 135007.

[2] N. Franco, M. Eckstein: *Exploring the Causal Structures of Almost Commutative Geometries*, SIGMA **10** (2014) 010.

I'll present the status and achievements of *La Main à la Pâte*, an international model for inquiry-based science education in elementary schools, developed in France since 1995 at the initiative of Nobel laureate Georges Charpak and the Académie des Sciences. The pedagogical approach is based on the study of objects of the real world, with science as an inquiry. Emphasis is put on questioning, autonomy, experimenting, and the collective construction of knowledge. An important aspect of our action is the development of international partnerships, with, for instance, websites in six languages. I'll also present the new program *Maisons pour la science*, started in France to ensure the professional development of teachers in science.

We report on a project to introduce interactive learning strategies (ILS) to physics classes at the Université Pierre et Marie Curie (UPMC), one of the leading science universities in France. In Spring 2012, instructors in two large introductory classes, first-year, second-semester mechanics, and second-year introductory E&M, enrolling approximately 500 and 250 students respectively, introduced ILS into some, but not all of the sections of each class. Pre- and post-instruction assessments (FCI and CSEM respectively) were given, along with a series of demographics questions. Since not all lecture or recitation sections in these classes used ILS, we were able to compare the results of the FCI and CSEM between interactive and non-interactive classes taught simultaneously with the same curriculum. We also analyzed final exam results, as well as the results of student and instructor attitude surveys between classes.

Our results show that ILS are effective at improving student learning by all measures used: research-validated concept inventories and final exam scores, on both conceptual and traditional problem-solving questions. Multiple linear regression analysis reveals that interactivity in the classroom is a significant predictor of student learning, showing a similar or stronger relationship with student learning than ascribed characteristics such as parents' education and achieved characteristics such as GPA and hours studied per week. Analysis of student and instructor attitudes shows that both groups believe that ILS improve student learning in the physics classroom and increase student engagement and motivation. All of the instructors who used ILS in this study plan to continue its use.

This work is based on our paper, recently published in Physical Review Special Topics - Physics Education Research, where it was an Editor's Suggestion: http://journals.aps.org/prstper/issues/10/1
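To illustrate the kind of analysis described above, here is a minimal, self-contained sketch using entirely hypothetical scores (not the study's data). It computes Hake's normalized gain from pre/post concept-inventory scores and regresses it on an interactivity indicator; with a single binary regressor, the OLS slope is simply the difference in mean gains between interactive and traditional sections.

```python
# Hypothetical FCI pre/post scores (percent correct); illustrative only.
students = [
    # (pre, post, interactive?)
    (40, 70, 1), (35, 60, 1), (50, 80, 1), (45, 72, 1),
    (40, 52, 0), (35, 45, 0), (50, 62, 0), (45, 55, 0),
]

def normalized_gain(pre, post):
    """Hake's normalized gain: fraction of the possible improvement achieved."""
    return (post - pre) / (100 - pre)

gains = [normalized_gain(pre, post) for pre, post, _ in students]
flags = [x for _, _, x in students]

# Ordinary least squares of gain on the interactivity indicator,
# fitted via the normal equations.
n = len(students)
mean_g = sum(gains) / n
mean_x = sum(flags) / n
slope = sum((x - mean_x) * (g - mean_g) for x, g in zip(flags, gains)) \
        / sum((x - mean_x) ** 2 for x in flags)
intercept = mean_g - slope * mean_x

print(f"mean gain, traditional sections: {intercept:.3f}")
print(f"extra gain attributable to ILS:  {slope:.3f}")
```

In a real study the regression would of course include the other predictors mentioned (parents' education, GPA, hours studied), but the binary-regressor case makes the interpretation of the coefficient transparent.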

The physics of the last century is now included in all EU curricula and, over the last 10 years, has appeared in all secondary textbooks, even if not in an organic way. There are, however, very different positions concerning its introduction: conceptual knots in classical physics are cited to argue for the exclusion of modern physics from secondary school. Discussions concern goals, rationale, contents, instruments and methods, and target students. The different proposals correspond to different goals: the culture of citizens, popularization, guidance, education. Concerning what is considered useful to treat, proposals cover foundations, technologies and applications. Methods include storytelling of the main results, argumentation of crucial problems, and treatment either integrated into the curriculum or as a complementary part of it. Modern physics in secondary school is a challenge: transferring to future generations a culture in which physics is an integrated part, not a marginal one, involves curriculum innovation, teacher education and physics education research in a way that allows students to use physics in moments of organized analysis, in everyday life and in social decisions. This curriculum innovation calls for new formative modalities: it requires a school in which modern physics is presented as a cultural object that the teacher offers to the students not so much to be reproduced but, above all, to be used creatively to face interpretative problems in many contexts. For this purpose it is necessary to revise the contents and methods of school activities. Disciplinary knowledge should NOT be seen as static and definite, but as in progressive and continuous evolution, without splitting the product from its process, with a tight correlation between the many dimensions of knowledge.
Disciplines have to become “maps”: conceptual ones for understanding and organizational ones for guidance in the interpretation of experience. In this perspective, modern physics is an integrated content of curricula. Our research focuses on the building of formal thinking along three directions: 1) learning processes and the role of reasoning in hands-on and minds-on activities to interpret phenomena; 2) object-models as tools to bridge common sense to physics ideas, and the contribution of ICT, focusing on real-time labs and modelling; 3) building a theoretical way of thinking: a path inspired by Dirac's approach to quantum mechanics. From our research in physics education, some proposals for modern physics are: 1) the physics of modern research analysis techniques: resistivity and the Hall effect, Rutherford backscattering spectroscopy, time-resolved resistivity; 2) an explorative approach to superconductivity (a coherent path); 3) discussion of some crucial/transversal concepts in both classical and modern physics: state, measurement, cross section; 4) foundations of theoretical thinking: quantum mechanics.

The academic network HOPE - Horizons of Physics Education - has been launched for three years from October 2013 with the support of the Lifelong Learning Programme of the European Union. The 71 full partners come from 31 LLP-eligible countries of Europe; they comprise 65 academic partners and 6 non-academic partners, including the European Physical Society. The consortium is further enriched by 20 associated partners, including the Institute of Physics, the American Physical Society and GIREP.

With an overall aim of enhancing the impact of physics within Europe and its visibility in society, the network will research and share good practice within four themes: the factors influencing young people to choose to study physics; physics graduates' competences that enable them to contribute to the new needs of the European economy and society; the effectiveness and attractiveness of physics teaching in Europe's university physics departments and their competitiveness in the global student market; strategies for increasing the supply of well-trained physics school teachers and for developing links between university physics departments and the teaching of physics in schools.

The problem of measurement, taken at face value, clearly shows an inconsistency inside the quantum formalism. The phenomenon of decoherence is often presented as a solution to it. A widely debated question is how to decide between two different interpretations. The first considers that the decoherence process actually projects a superposed state onto one of its classically interpretable components, hence doing the same job as the reduction postulate. For the second, decoherence is only a way to show why no macroscopic superposed state can be observed, and so to explain the classical appearance of the macroscopic world, while the quantum entanglement between the system, the apparatus and the environment never disappears. In this case, explaining why only one single definite outcome is observed remains to be done. We will examine arguments for and against both interpretations and will defend a position according to which the outcome that is observed is relative to the observer, in close parallel to the Everett interpretation and sharing some similarities with Rovelli's relational interpretation, while differing on important points.

The last decade has seen a wave of reconstructions and characterizations of quantum theory using the formalism of generalized probability theory. In this talk, we shall outline a novel (operational) approach to characterizing and reconstructing quantum theory which gives primacy to limited information and complementarity rather than the probability structure. In particular, we consider an observer interrogating a system with binary questions and analyze the consequences of (1) a postulate asserting a limit on the (simultaneous) information the observer can acquire about the system, and (2) a postulate asserting the existence of complementarity on the set of possible questions. We explain how the ensuing compatibility and complementarity structure of the binary questions implies many features of qubit quantum theory in an elegant way (e.g. three-dimensionality of the Bloch sphere, the entanglement structure of two qubits, absence of third or higher order interference and many other features). Time permitting, we shall also sketch how this program can be completed to a full reconstruction of quantum theory by adding further ingredients.

Understanding the intrinsic physical meaning of the quantum formalism leads, ontologically, to admitting that one fundamental side of reality is purely contradictory, irreducibly random, objectively indeterminate and intrinsically independent of relativistic space-time. It means also that physical reality has another fundamental side, i.e. space-time, and that the two are irreducible to each other. Quantum reality, discretized by the Planck constant, assumes coherent states and partially ordered variables (commutative or not), while on the other side relativistic space-time, structured by the speed of light, assumes coherent variables and partially ordered states (causally related or not).

From this point of view, fields would be only a derived mixture of these two sides. Quantum reality and relativistic space-time seem, moreover, to be mixed in two kinds of fields that are themselves irreducible to each other. If, on one hand, quantum field theory has already formalized the way quantum reality locates, evolves and interacts on a relativistic background, on the other hand quantum gravity tends to formalize the way smooth, causally ordered space-time emerges on a quantum background. But no unification of the two fundamental sides of reality should be expected at this derived level. Unification could perhaps intervene only upstream, in a pure variation preceding and underlying its double distinction into variables and states of variation, i.e. its quantization or its localization, and their derived mixtures.

Recently, several discussions on the possible observability of 4-vector fields have been published in the literature. Furthermore, several authors have recently claimed the existence of a helicity-0 fundamental field. We re-examine the theory of antisymmetric tensor fields and 4-vector potentials, and study the massless limits as well. A theoretical motivation for this venture comes from the old papers of Ogievetskii and Polubarinov, Hayashi, and Kalb and Ramond. Ogievetskii and Polubarinov proposed the concept of the *notoph*, whose helicity properties are complementary to those of the *photon*. We analyze the quantum field theory taking into account the mass dimensions of the notoph and the photon. It appears to be possible to describe both photon and notoph degrees of freedom on the basis of the modified Bargmann-Wigner formalism for the symmetric second-rank spinor.
Next, we proceed to derive equations for the symmetric tensor of the second rank on the basis of the Bargmann-Wigner formalism in a straightforward way, using the symmetric multispinor of the fourth rank. Owing to serious problems with the interpretation of the results obtained using the standard procedure, we generalize it and obtain spin-2 relativistic equations that are consistent with general relativity. Thus, in effect, we deduce the gravitational field equations from relativistic quantum mechanics. The relations of this theory with scalar-tensor theories of gravitation are discussed. Particular attention is paid to the correct definitions of the energy-momentum tensor and other Noether currents in the electromagnetic theory, the relativistic theory of gravitation, general relativity, and their generalizations. We estimate possible interactions (fermion-notoph, graviton-notoph, photon-notoph) and conclude that they will probably be seen in experiments in the next few years.

In the early seventies, Günaydin and Gürsey discovered quark structure in the split octonions [1]. Using their anti-commuting ladder operators, we show a direct route to a new generator. This generator behaves like electric charge, thereby allowing us to further identify states behaving like the electron and neutrino.

Our proposed electric charge turns out to be proportional to a number operator, consequently illuminating why it is quantized.
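To make the quantization transparent, here is a schematic relation (our illustrative reconstruction, not a formula quoted from the abstract), assuming the three ladder operators α_i obey canonical anticommutation relations:

```latex
Q \;=\; \frac{1}{3}\sum_{i=1}^{3}\alpha_i^{\dagger}\alpha_i ,
\qquad
Q\,|n\rangle \;=\; \frac{n}{3}\,|n\rangle ,
\quad n \in \{0,1,2,3\},
```

so a state built by acting with n distinct raising operators carries charge n/3, and the charge spectrum is automatically quantized in units of 1/3.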

Using only this trio of ladder operators and their conjugates, we construct a pair of *minimal left ideals*, which is shown to transform under the unbroken gauge symmetries as does a full generation of the standard model.

*References*

[1] M. Günaydin, F. Gürsey, *Quark Statistics and Octonions*, Phys. Rev. D, Vol. 9, No. 12 (1974)

A new, linear field theory of light and matter is proposed, sharpening the principle of relativity and adding four new coupled differential equations to the original four of Maxwell. The new theory, though continuous, allows only propagating pure field solutions with an integral angular momentum. These are identified with the photon. Solutions with rest-mass are necessarily charged, with a value close to the elementary charge [1]. They have a double-covering rotation in momentum space with a resulting half-integral spin and statistics, as was discussed on the basis of an earlier non-linear theory [2]. The result is that, just as was the case for de Broglie's concept of the harmony of phases [3] which led to his famous relation and the subsequent development of quantum mechanics, it is a rigorous consideration of the principle of relativity that leads to the quantisation of light and matter.

*References*

[1] Williamson, J.G. & van der Mark, M.B. Is the electron a photon with toroidal topology?, *Ann. Fondation L. de Broglie* **22**, 133 (1997).

[2] Williamson, J.G. Fermions from Bosons and the origin of the exclusion principle, Proceedings of MENDEL 2012.

[3] de Broglie, L. Recherches sur la théorie des quanta, *Ann. Phys.* Ser. 10 **3**, 22 (1925).

In this talk we show how mass can enter Yang-Mills theories through central (pseudo-)extensions of the corresponding gauge group. This mechanism does not involve extra (Higgs) scalar particles and could provide new clues to a better understanding of the nature of the symmetry breaking mechanism. We outline the essential points of this approach.

ANTARES is a neutrino telescope located in the deep sea, 40 km off the southern coast of France. The results of searches for cosmic neutrinos produced in astrophysical sources and from the annihilation of dark matter in the Sun and the Galactic centre are presented.

The European neutrino astronomy community has recently started the construction of KM3NeT, a several-cubic-kilometre neutrino telescope. The R&D for this new research infrastructure is described. Finally, the potential for a measurement of the neutrino mass hierarchy with a densely instrumented detector configuration (ORCA) is discussed.

Neutrinos of astrophysical origin are messengers produced in stars, in explosive phenomena like core-collapse supernovae, in the accretion disks around black holes, or in the Earth's atmosphere. Their fluxes and spectra encode information on their sources, on the environments they traverse, and on key unknown neutrino properties. We will discuss the status of, and key open questions in, our current understanding of neutrino flavour conversion in media and its impact on astrophysical observations. We will describe some of the important recent advances in the field of neutrino astrophysics.

*References*

[1] C. Volpe, *Open issues in neutrino astrophysics*, Annalen Phys. 8-9 (2013) [arXiv:1303.1681].

The last angle of the PMNS mixing matrix has been measured by reactor neutrino experiments. This important result opens the door to the precision era of the neutrino oscillation landscape. In this context, the next generation of reactor experiments at the kiloton scale will significantly improve the measurements of the oscillation parameters and will give an answer on the mass hierarchy in the coming decades. After a brief summary of the latest results, these experiments will be presented together with the technological challenges they face to reach the required sensitivity.
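For context, a standard textbook expression (not taken from the abstract): the reactor antineutrino survival probability that such experiments fit is, to good approximation,

```latex
P(\bar{\nu}_e \to \bar{\nu}_e) \;\simeq\; 1
\;-\; \sin^2 2\theta_{13}\,\sin^2\!\frac{\Delta m^2_{31} L}{4E}
\;-\; \cos^4\theta_{13}\,\sin^2 2\theta_{12}\,\sin^2\!\frac{\Delta m^2_{21} L}{4E},
```

where $L$ is the baseline and $E$ the antineutrino energy. Kiloton-scale experiments aim to resolve the fast $\Delta m^2_{31}$ oscillation pattern precisely enough to distinguish the two mass orderings.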

This talk is based on [1]. The halo occupation distribution (HOD) model is frequently used in surveys to predict the mass of the dark matter haloes from the clustering of galaxies. On the other hand, semi-analytical models of galaxy formation are often used to populate simulations according to physical prescriptions and merger trees. We use the Millennium Simulation with its semi-analytical models of galaxy formation to reconstruct the galaxy bias from the halo bias assuming the HOD model, in order to study the compatibility of the HOD model with these galaxy catalogues. We find that assembly bias affects the reconstructions at low masses.
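As an illustrative sketch only (not the authors' actual pipeline; the parameter values below are invented for the example), the common HOD parametrization gives the mean galaxy occupation of a halo as a smoothed central step function plus a satellite power law:

```python
import math

def mean_occupation(log_m, log_m_min=12.0, sigma=0.2, log_m1=13.3, alpha=1.0):
    """Mean number of galaxies hosted by a halo of mass 10**log_m (Msun/h).

    Hypothetical parameter values, for illustration only.
    """
    # Central galaxy: smoothed step function in log halo mass.
    n_cen = 0.5 * (1.0 + math.erf((log_m - log_m_min) / sigma))
    # Satellites: power law in mass, suppressed when no central is present.
    n_sat = n_cen * (10.0 ** (log_m - log_m1)) ** alpha
    return n_cen + n_sat
```

Fitting the parameters of such a curve to observed galaxy clustering, and comparing against the occupation measured directly in the semi-analytical catalogues, is the kind of consistency check the abstract describes.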

[1] A. Pujol and E. Gaztañaga, *Are the halo occupation predictions consistent with large scale galaxy clustering?*, arXiv:1306.5761, MNRAS accepted.

If parity invariance is broken in the primordial universe, cosmic microwave background cross-correlations that usually vanish become nonzero. Their detection would then constrain the level of parity violation. I propose to present forecasts on the detection of this parameter, obtained by realistically estimating the uncertainties on the spectra via the *pure* pseudospectrum method, whose efficiency has been shown in [1]. I will present the results of this forecast [2] for two typical experimental setups: a small-scale experiment and a large-scale survey. Our results show that no constraints can be put on this parameter in the former case; however, a range of models would be accessible with a future CMB satellite-like mission.

*References*

[1] A. Ferté, J. Grain, R. Stompor and M. Tristram, *Efficiency of pseudo spectrum methods for estimation of the cosmic microwave background B-mode power spectrum*, Phys. Rev. D **88**, 023524 (2013).

[2] A. Ferté and J. Grain, *Detecting chiral gravity with the pure pseudospectrum reconstruction of the cosmic microwave background polarized anisotropies*, Phys. Rev. D (to be published), arXiv:1404.6660.

Almost all cosmologists nowadays accept that the redshift of the galaxies is due to the expansion of the Universe (cosmological redshift), plus some Doppler effect from peculiar motions, but can we be sure of this by means of some other independent cosmological test? In this talk, I will review some recent applications of this search: the angular-size test [1]; the Tolman or surface-brightness test [2], in which we need to assume a very strong evolution of galaxy sizes to fit the data with the standard cosmology; and the Alcock-Paczynski test [3], an evaluation of the ratio of observed angular size to radial/redshift size, which is independent of the evolution of galaxies but is affected by the redshift distortions produced by the peculiar velocities of gravitational infall. This last test shows that the standard cosmology is compatible with the data, but so are some models of a static Universe.
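The Alcock-Paczynski ratio mentioned above can be sketched numerically. A minimal illustration, assuming flat ΛCDM with illustrative parameters (H0 = 70 km/s/Mpc, Ωm = 0.3) and simple trapezoidal integration, not the paper's actual analysis:

```python
import math

C_KM_S = 299792.458  # speed of light, km/s

def hubble(z, h0=70.0, om=0.3):
    """H(z) in km/s/Mpc for flat LambdaCDM (illustrative parameters)."""
    return h0 * math.sqrt(om * (1 + z) ** 3 + (1 - om))

def comoving_distance(z, h0=70.0, om=0.3, steps=2000):
    """Comoving distance in Mpc via trapezoidal integration of c/H."""
    dz = z / steps
    f = [C_KM_S / hubble(i * dz, h0, om) for i in range(steps + 1)]
    return dz * (sum(f) - 0.5 * (f[0] + f[-1]))

def ap_ratio(z, h0=70.0, om=0.3):
    """F(z) = (1+z) * D_A(z) * H(z) / c, with D_A = D_C/(1+z) in flat space."""
    d_a = comoving_distance(z, h0, om) / (1 + z)
    return (1 + z) * d_a * hubble(z, h0, om) / C_KM_S
```

For an intrinsically isotropic object, F(z) = Δz/Δθ = (1+z) D_A(z) H(z)/c; different expansion histories (or a static Universe) predict different F(z), which is what the test compares against data distorted by peculiar velocities.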

*References*

[1] M. Lopez-Corredoira, *Angular-size test on the expansion of the Universe*, Int. J. Mod. Phys. D 19 (2010) 245.

[2] E. J. Lerner, R. Falomo and R. Scarpa, *UV surface brightness of galaxies from the local Universe to z ~ 5*, Int. J. Mod. Phys. D 23 (2014) id. 1450058.

[3] M. Lopez-Corredoira, *Alcock-Paczynski cosmological test*, Astrophys. J. 781 (2014) 96.

This talk is based on [1]. An ultralight free scalar field with an extremely small mass is a viable dark matter candidate that can help resolve some of the issues of cold dark matter on sub-galactic scales. We consider the gravitational field of a galactic halo composed of such dark matter. The pressure of the scalar field oscillates in time, which induces oscillations of the gravitational potential with a frequency in the nanohertz range. We find that for a range of dark matter masses, the scalar-field dark matter signal is comparable to the expected stochastic gravitational-wave signal and can be detected by the planned SKA pulsar timing array experiment.
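A hedged sketch of the mechanism just described, using the standard result for a free massive scalar in natural units (the fiducial mass $m \sim 10^{-22}\,\mathrm{eV}$ is inserted as an illustrative assumption): a field oscillating as $\phi(t) = \phi_0 \cos(mt)$ has pressure

```latex
p \;=\; \tfrac{1}{2}\dot{\phi}^2 - \tfrac{1}{2}m^2\phi^2
\;=\; -\tfrac{1}{2}\,m^2\phi_0^2\,\cos(2mt),
```

so the induced gravitational potential oscillates at twice the field frequency,

```latex
f \;=\; \frac{m}{\pi\hbar} \;\approx\; 5\times 10^{-8}\,\mathrm{Hz}
\left(\frac{m}{10^{-22}\,\mathrm{eV}}\right),
```

i.e. in the nanohertz band probed by pulsar timing arrays.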

*References*

[1] A. Khmelnitsky and V. Rubakov, “Pulsar timing signal from ultralight scalar dark matter,” JCAP **1402** (2014) 019 [arXiv:1309.5888 [astro-ph.CO]].

Black holes are probably among the most fascinating objects populating our universe. Their characteristic features, encompassing spacetime singularities, event horizons, and black hole thermodynamics, provide a rich testing ground for quantum gravity ideas. In this talk, we review the status of black holes within Weinberg's asymptotic safety program [1].

The resulting quantum-improved Schwarzschild black hole is discussed in detail, and the effective geometry is compared to recent findings within Loop Quantum Gravity [2]. Moreover, the quantum singularity emerging upon the inclusion of a cosmological constant [3] is elucidated and linked to the phenomenon of a dynamical dimensional reduction of spacetime.

*References*

[1] B. Koch and F. Saueressig, *Black holes within Asymptotic Safety*, Int. J. Mod. Phys. A **29** (2014) 1430011, arXiv:1401.4452.

[2] C. Rovelli and F. Vidotto, arXiv:1401.6562.

[3] B. Koch and F. Saueressig, *Structural aspects of asymptotically safe black holes*, Class. Quant. Grav. **31** (2014) 015006, arXiv:1306.1546.

We analyze the relationship between the entanglement (or geometric) entropy and the statistical mechanical entropy of horizon degrees of freedom when described in the framework of isolated horizons in loop quantum gravity. We show that, once the relevant degrees of freedom are identified, the two notions coincide. The key ingredient linking them is the structure of quantum geometry at the Planck scale implied by loop quantum gravity, in which correlations between the inside and outside of the black hole are mediated by eigenstates of the horizon area operator.

Recovering the Bekenstein-Hawking formula for black hole entropy is a crucial test for any candidate quantum theory of gravity. The approach followed in Loop Quantum Gravity leads successfully to the famous formula, up to a choice of the real Barbero-Immirzi parameter, plus logarithmic corrections in agreement with other approaches. However, it has recently been shown in [1] that proceeding to an analytic continuation of the dimension of the Hilbert space of the Chern-Simons theory (related to the degrees of freedom of the horizon) leads directly to the exact Bekenstein-Hawking formula (without recovering the logarithmic corrections). Working with the self-dual Ashtekar connection therefore seems to give the right result directly. Such a result underlines the status of the complex Ashtekar variables as the right connection to use in Loop Quantum Gravity, with additional works pointing in the same direction [2, 3, 4, 5, 6]. Yet a rigorous construction of this analytic continuation was still missing. This is precisely what was done in a recent article, and it is the subject of this talk. I will first focus on the construction of the analytic continuation. Then I will detail the main result, that is, the Bekenstein-Hawking formula and its logarithmic corrections in the context of Self-Dual Loop Quantum Gravity.
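For reference, the formulas the talk refers to (the $-3/2$ coefficient of the logarithmic term is the value commonly quoted in the LQG literature, stated here as an assumption rather than taken from the abstract):

```latex
S_{\mathrm{BH}} \;=\; \frac{A}{4\ell_P^2},
\qquad
S \;=\; \frac{A}{4\ell_P^2} \;-\; \frac{3}{2}\ln\!\frac{A}{\ell_P^2} \;+\; \mathcal{O}(1),
```

with $A$ the horizon area and $\ell_P$ the Planck length.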

*References*

[1] E. Frodden, M. Geiller, K. Noui and A. Perez, *Black hole entropy from complex Ashtekar variables*, (2012), arXiv:1212.4060 [gr-qc].

[2] J. Samuel, *Is Barbero’s Hamiltonian formulation a gauge theory of Lorentzian gravity?*, Class. Quant. Grav. 17 L141 (2000), arXiv:gr-qc/0005095.

[3] S. Alexandrov, *On choice of connection in loop quantum gravity*, Phys. Rev. D 65 024011 (2002), arXiv:gr-qc/0107071.

In this talk, we discuss the passive influence of quantum matter fluctuations on the expansion parameter of a congruence of timelike geodesics in a semiclassical regime. In particular, we shall see that the perturbations of this parameter can be considered to be elements of the algebra of matter fields at every perturbative order. Hence, once a quantum state for matter is chosen, it is possible to explicitly evaluate the amplitude of the geometric fluctuations. After introducing the formalism necessary to treat such problems, we estimate the approximate probability of a geodesic collapse in flat spacetime due to these fluctuations. Starting from this, some estimates of the spacetime uncertainty relations will be given.

*References*

[1] N. Drago and N. Pinamonti, *Influence of quantum matter fluctuations on geodesic deviation*, preprint (February 2014), arXiv:1402.4265 [math-ph].

[2] N. Pinamonti and D. Siemssen, *Global Existence of Solutions of the Semiclassical Einstein Equation in Cosmological Spacetime*, Commun. Math. Phys., accepted for publication (2014), arXiv:1309.6303 [math-ph].

[3] N. Pinamonti, *“On the initial conditions and solutions of the semiclassical Einstein equations in a cosmological scenario”*, Commun. Math. Phys. **305** (2011) 563-604.

Alain Connes' noncommutative geometry allows one to unify the classical Yang-Mills-Higgs theory and general relativity in a single geometrical framework, that of so-called almost-commutative geometries. This unification implies restrictions on the couplings of the Standard Model at a given cut-off energy, which reduce the degrees of freedom compared to the classical Standard Model.

I will give an introduction to the basic ideas of almost-commutative model building and present models beyond the Standard Model that may be phenomenologically interesting. These models include extensions of the fermionic and the gauge sector as well as extensions of the scalar sector.

A new type of general physics laboratory course, in which students learn physics principles by designing and building a device, has been attempted at POSTECH in Korea since 2010. The traditional general physics laboratory course, which is almost standardized all over the world, is based on confirming and verifying in the real world the principles taught in class. The design-and-build physics lab (DBL), normally offered in the second semester, encourages students to learn and realize for themselves how physics principles operate in a real product by working on a project as a team, in agreement with the spirit of the STEAM educational system. In this paper, our experience with this new teaching approach in Korea over the past few years will be presented.

Is it possible to improve our teaching without becoming experts in didactics? At the physics group of the Université de Savoie, we have experimented with a few techniques and options, with the help of experts in pedagogy. Among others, we will describe the use of the online teaching platform WIMS [1], which allows randomized exercises, offers a large choice of interaction methods, and includes a formal engine; the use of videos of physics courses to be viewed online; experiments in tutorial classes; and the use of audience response devices (clickers) in connection with active learning. We are also experimenting with efficiency-measuring tools (concept inventories) [2]. Finally, in the near future we will start experimenting with a flipped classroom.

*References*

[1] One of the WIMS servers: http://wims.unice.fr

[2] D. Hestenes, M. Wells, G. Swackhamer, Force Concept Inventory, The Physics Teacher, Vol. 30, March 1992, 141-158

Research in quantum gravity has grown rapidly in recent years, intersecting with conceptual and philosophical issues that have a long history. In this paper I analyze the conceptual basis on which Loop Quantum Gravity has grown, the way it deals with some classical problems of the philosophy of science, and the main methodological and philosophical assumptions on which it is based. In particular, I emphasize the importance that atomism (in the broadest sense) and relationalism have had in the construction of the theory.

*References*

[1] F. Vidotto, *Atomism and Relationalism as guiding principles for Quantum Gravity*, arXiv:1309.1403 [physics.hist-ph].

[2] C. Rovelli and F. Vidotto, *Covariant Loop Quantum Gravity. An elementary introduction to Quantum Gravity and Spinfoam Theory*, Cambridge University Press, Cambridge U.K. (2014).

From an ontological point of view, single-particle quantum mechanics makes a lot more sense than the multiparticle case. After all, in the case of a single particle you can claim that the wave function is NOT a "probability amplitude" but, rather, simply a "classical field"; it simply HAPPENS that the probability of collapse coincides with the square of its absolute value. In the case of many particles, on the other hand, you cannot make this argument: after all, a classical field lives in ordinary space, NOT in configuration space! Motivated by this, I attempt to reduce multiparticle quantum mechanics to the single-particle case by the following trick. First, I introduce an extra dimension. Then I design various "classical" objects in such a way that we have different "classical" configurations on different hyperplanes. Finally, I introduce one SINGLE quantum particle, referred to as the "pointer"; the "pointer" takes the classical configuration from the hyperplane it resides on and makes it "come true". Thus, instead of quantizing configurations, all we have to do is quantize the location of the pointer, which reduces multiparticle QM to single-particle QM. This approach is also extended to reduce quantum field theory to single-particle quantum mechanics; we simply replace the "classical particle distributions" on the hyperplanes with "field distributions". The paper where this work is done is [1].

*References*

[1] arXiv:1309.3287

I give a general overview of the developments in Loop Quantum Gravity and I describe a recent idea for a possible novel window of observation of quantum gravitational phenomena: Planck stars.

The measurement of a small deviation of the primordial spectrum of scalar (density) perturbations in the Universe from the exactly flat (Harrison-Zeldovich) one in the WMAP and Planck CMB experiments confirms the general prediction of the early-Universe scenario with a de Sitter (inflationary) stage preceding the radiation-dominated stage (the hot Big Bang) and strongly restricts the class of viable inflationary models [1]. Thus, the status of the inflationary paradigm is changing from "proving" it in general and testing some of its simplest models to applying it to investigate the actual history of the Universe in the remote past and particle physics at super-high energies, using actual observational data. The announced discovery of a primordial gravitational wave background through the measurement of the B-mode of the CMB linear polarization in the BICEP2 experiment [2] confirms another general prediction [3] of this scenario, as well as providing direct evidence for the existence of a very strongly curved space-time in the past of our Universe and for the necessity of quantizing gravitational waves. Still, the BICEP2 result is partially contaminated by foregrounds (mainly by polarized galactic dust emission) and requires confirmation of its blackbody character. Moreover, comparison of BICEP2 data with the temperature and E-mode polarization data earlier obtained in the WMAP and Planck experiments shows that the inflationary stage is not so simple and may not be described by a one-parameter model. In particular, the primordial spectrum of scalar perturbations generated during inflation is not of a power-law form [4], mainly due to the depression of the angular anisotropy power spectrum at low multipoles. A class of models describing this feature, which implies the existence of some scale (i.e. new physics) during inflation, is proposed [5].
Furthermore, accounting for additional wiggles in the spectrum requires further complication of the inflaton potential [6] by introducing sharp features of the type suggested by previous studies [7]. While viable inflationary models with a smooth potential require a definite inflaton mass scale, this mass has to increase, and may be larger still, near the feature. Thus, the combination of CMB temperature anisotropy and polarization data helps to make a "tomographic" study of inflation and of particle physics in this range of energies.
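For orientation, the standard parametrization being tested here (a textbook form, not specific to the works cited): the primordial scalar spectrum is usually written as a power law,

```latex
\mathcal{P}_\zeta(k) \;=\; A_s \left(\frac{k}{k_*}\right)^{n_s - 1},
```

where $n_s = 1$ is the exactly flat Harrison-Zeldovich case and the measured small deviation ($n_s \approx 0.96$) is the departure referred to at the start of this abstract; the low-multipole depression and wiggles discussed above are departures from this simple power-law form.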

*References*

[1] P. A. R. Ade et al. [Planck Collaboration], arXiv:1303.5082.

[2] P. A. R. Ade et al. [BICEP2 Collaboration], arXiv:1403.3985.

[3] A. A. Starobinsky, JETP Lett. 30, 682 (1979).

[4] D. K. Hazra, A. Shafieloo, G. F. Smoot and A. A. Starobinsky, JCAP 1406, 061 (2014), arXiv:1403.7786.

[5] D. K. Hazra, A. Shafieloo, G. F. Smoot and A. A. Starobinsky, arXiv:1404.0360.

[6] D. K. Hazra, A. Shafieloo, G. F. Smoot and A. A. Starobinsky, arXiv:1405.2012.

[7] A. A. Starobinsky, JETP Lett. 55, 489 (1992).

A brief overview of the current status and prospects of cosmic-ray studies is presented. Our Galaxy and extragalactic space are filled with cosmic rays, a relativistic gas of high-energy protons, electrons, and heavy nuclei. The directly measured cosmic-ray energy spectrum extends from MeV energies up to the highest observed energies. Radio-astronomical, X-ray, gamma-ray and the first very-high-energy neutrino observations shed light on the origin of cosmic rays. The model of cosmic-ray origin in supernova remnants (including pulsars), the interpretation of Voyager data on low-energy particles, the structure of the knee in the cosmic-ray spectrum, and the energy limit of Galactic sources are discussed. The origin of cosmic rays at the highest energies may be associated with Active Galactic Nuclei, the progenitors of Gamma-Ray Bursts, fast-spinning newborn pulsars, large-scale structure formation shocks, and some other objects.

Cosmic inflation in the very early universe provides a framework in which to understand the seeds of large-scale structure in our Universe. A rapid, accelerated expansion at ultra-high energies can stretch quantum vacuum fluctuations up to extra-galactic scales. I will discuss the impact of recent observations of the cosmic microwave background sky which provide evidence of primordial density perturbations and now, for the first time, possible evidence for primordial gravitational waves as predicted by inflation. I will discuss how inflation compares with alternative models for the origin of structure and how we might further test the physics of inflation through cosmological observations.


I will review the general problem of the search for non-Gaussian signatures in cosmological perturbations originated from inflation in the early Universe. I will discuss this issue both from the theoretical point of view and in connection with constraints coming from recent observations, as well as future prospects for observing or constraining such signatures.

The inflationary scenario is currently considered the most promising paradigm to describe the origin of the perturbations in the early universe. It corresponds to a period of accelerated expansion before the hot Big Bang phase. Inflation is typically achieved using scalar fields, and it is the quantum fluctuations associated with these scalar fields that are responsible for the creation of the primordial perturbations. The perturbations generated during inflation leave their signatures as anisotropies in the Cosmic Microwave Background (CMB). With the CMB anisotropies being measured to greater and greater precision, we are in an unprecedented position to arrive at strong constraints on the physics of the early universe. In this talk, after a brief introduction to inflation, I describe the implications of the recently released Planck & BICEP2 data for inflation and discuss which are the "best" inflationary scenarios.

*References*

[1] J. Martin, C. Ringeval and V. Vennin, *Encyclopædia Inflationaris*, to appear in Physics of the Dark Universe [arXiv:1303.3787].

[2] J. Martin, C. Ringeval, R. Trotta and V. Vennin, *The Best Inflationary Models after Planck*, JCAP1403, 039, 2014, [arXiv:1312.3529].

[3] J. Martin, C. Ringeval and V. Vennin, *K-inflationary Power Spectra at Second Order*, JCAP1306, 021, 2013, [arXiv:1303.2120].

Lattice QCD is a method for solving the nonperturbative dynamics of low-energy QCD from first principles. Over the last few years the field has matured considerably, and reliable predictions have been obtained in many areas. I will summarise the most important developments, give an overview of the currently attainable precision on key observables such as the hadron spectrum and the light quark masses, and discuss some open challenges and future perspectives of the field.

Astrophysical sources are extremely efficient accelerators. Some sources emit photons up to multi-TeV energies, a signature of the presence, within them, of particles with energies much higher than those achievable with the largest accelerators on Earth. Even more compelling evidence comes from the study of Cosmic Rays, charged relativistic particles that reach the Earth with incredibly high energies: at the highest-energy end of their spectrum, these subatomic particles carry a macroscopic energy of up to a few joules.

Here I will address the best candidate sources and mechanisms for cosmic particle acceleration. I will mainly focus on Galactic sources such as Supernova Remnants and Pulsar Wind Nebulae, which, being close and bright, are the best studied and understood among astrophysical accelerators. These sources are probably responsible only for particle acceleration up to PeV energies, and hence for most of the energy that is put into relativistic particles in the Universe, but not for the highest individual particle energies. However, they allow us to study in great detail acceleration mechanisms such as shock acceleration (in both the Newtonian and relativistic regimes) and magnetic reconnection, the same processes that are likely operating in more powerful sources as well.

I will present a summary of what we have learned so far from low-energy flavour observables concerning physics beyond the Standard Model (SM). In the past few years there has been great experimental progress in quark and lepton flavour physics. In the quark sector, the validity of the SM has been strongly reinforced by a series of challenging tests. As I will try to show, looking for physics beyond the SM via the flavour window is still a powerful tool, thanks also to forthcoming results from the LHC and future B factories.

The physical meaning of diffeomorphisms in general relativistic theories will be discussed, and the issues of time evolution in terms of Dirac observables and a physical Hamiltonian will be addressed. New proposals for geometric deparametrization will be presented. The quantum part of the lecture will concern canonical LQG. New, improved formulations of the quantum Hamiltonian will be proposed, and new applications for LQG will be offered. The original results presented in this lecture were obtained in collaboration with Dapor, Duch, Kaminski, Swiezewski, Alesci, Assanioussi, Dziendzikowski and Sahlmann.

This talk is based on [1, 2, 3]. Until less than ten years ago, post-Newtonian (pN) analysis was the only systematic method for obtaining gravitational waveforms corresponding to binary inspiral. However, these waveforms were cut off before the merger, until the recent availability of direct results from numerical relativity computations, which include the complete merger and ring-down phases of the orbital evolution. Unfortunately, these calculations are not yet of sufficient precision to stringently test pN methods intrinsically. By contrast, the gravitational self-force approach has become capable of advancing to extremely high precision, and of thereby testing most of the various techniques used in pN calculations. Although restricted to the extreme-mass-ratio limit, self-force calculations are now able to verify both the methods and the results of pN work, and even to extend it. In fact, as will be demonstrated, they now have high enough precision to determine new coefficients analytically.

*References*

[1] A. G. Shah, J. L. Friedman and B. F. Whiting, *Finding high-order analytic post-Newtonian parameters from a high-precision numerical self-force calculation*, Phys. Rev. D **89**, 064042 (2014), arXiv:1312.1952 [gr-qc].

[2] L. Blanchet, G. Faye and B. F. Whiting, *Half-integral conservative post-Newtonian approximations in the redshift factor of black hole binaries*, Phys. Rev. D **89**, 064026 (2014), arXiv:1312.2975 [gr-qc].

[3] L. Blanchet, G. Faye and B. F. Whiting, *High-order half-integral conservative post-Newtonian coefficients in the redshift factor of black hole binaries*, submitted to Phys. Rev. D (2014), arXiv:1405.5151 [gr-qc].

We review recent observational results on the gamma-ray emission from galactic sources in which efficient particle acceleration can be seen. In particular, we highlight recent progress on our understanding of particle acceleration in supernova remnants, pulsar wind nebulae, and gamma-ray binaries, based on the results from the Fermi Gamma-ray Space Telescope and ground-based Cherenkov telescopes such as H.E.S.S.

This review talk addresses the question of cosmic-ray (CR) propagation in the interstellar medium (ISM). We will discuss some recent progress made in the microphysical description of the CR interaction with electromagnetic fluctuations in the ISM, using both analytical and numerical approaches, and will place these models in the context of CR phenomenological models.

I will review recent results on the high-energy phenomena occurring in the region of the Galactic Center by first describing the structure and the components of this fascinating area of the sky, and then reporting the main discoveries obtained in X-rays and gamma-rays. Several of these results indicate that the Galactic Center was the site of powerful activity in the past, possibly driven by the central supermassive black hole, Sgr A*, which today appears very dim and quiet in contrast with the exuberant activity displayed by other galactic nuclei. I will try to put these results in context with the general picture we have from observations of the nuclei of external galaxies, to see whether this can lead to new insights into the impact of the galactic supermassive black hole on its close environment and on the whole Galaxy.

Pulsar Wind Nebulae are the astrophysical sources that host the most relativistic shocks in Nature and the only Galactic sources in which we have direct evidence of PeV particles. These facts make them an ideal laboratory for High Energy Astrophysics and the ideal place where fundamental processes common to different classes of relativistic sources can be understood.

I will review the current status of our theoretical understanding of Pulsar Wind Nebulae. In recent years, axisymmetric models of pulsar winds with a latitude dependent energy flux have proved very successful at explaining the high energy morphology of PWNe. This success has prompted developments aimed at using multi-wavelength observations of these nebulae as a diagnostics of the hidden physics of the pulsar wind and of the mechanism(s) through which particles are accelerated at the highly relativistic shock that terminates the wind.

I will discuss these most recent developments in terms of the information that we infer from detailed comparison of simulated synchrotron and inverse Compton emission with current observations.

Inflation is the leading candidate for the earliest epoch of our universe. It consists of an early phase of de Sitter-like expansion that flattened the universe and produced the primordial fluctuations. How, in detail, did inflation happen? What is the physics that governed that epoch? And, on a different note, can we learn something about how inflation started? I will review our current knowledge of inflation and how we try to attack these questions through several observables in the CMB and large-scale structure.

This talk is based on our works [2, 3] and our review article [1] about loop corrections to the primordial perturbations generated during inflation. A naive perturbation theory predicts that loop corrections generated during inflation suffer from various infrared (IR) divergences. We discuss the origin of the IR divergences and explore the regularity conditions, which restrict the possible initial states of the inflationary universe.

*References*

[1] T. Tanaka and Y. Urakawa, Class. Quant. Grav. **30**, 233001 (2013) [arXiv:1306.4461 [hep-th]].

[2] T. Tanaka and Y. Urakawa, PTEP **2013**, no. 6, 063E02 (2013) [arXiv:1301.3088 [hep-th]].

[3] T. Tanaka and Y. Urakawa, arXiv:1402.2076 [hep-th], accepted in PTEP.

We briefly introduce the inflation model driven by a massive scalar field in Eddington-inspired Born-Infeld gravity [1], investigated in [2], and present recent results on the tensor perturbation in this model obtained in [3]. For short-wavelength modes, the perturbation behaves much as in the usual chaotic inflation. For long-wavelength modes, the perturbation exhibits a peculiar rise in the power spectrum, which may leave a signature in the cosmic microwave background radiation. We also briefly discuss the scalar perturbation in this model.

*References*

[1] M. Banados and P. G. Ferreira, *Eddington's theory of gravity and its progeny*, Phys. Rev. Lett. **105**, 011101 (2010) [arXiv:1006.1769 [astro-ph.CO]].

[2] I. Cho, H.-C. Kim and T. Moon, *Precursor of Inflation*, Phys. Rev. Lett. **111**, 071301 (2013) [arXiv:1305.2020 [gr-qc]].

[3] I. Cho and H.-C. Kim, *Inflationary Tensor Perturbation in Eddington-inspired Born-Infeld gravity*, arXiv:1404.6081 [gr-qc].
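For orientation, the short-wavelength behaviour quoted above matches that of ordinary slow-roll chaotic inflation, whose tensor power spectrum is (standard result, not specific to the EiBI model):

```latex
\mathcal{P}_T(k) \;\simeq\; \frac{2}{\pi^2}\,\frac{H^2}{M_{\mathrm{Pl}}^2}\,\bigg|_{k = aH}
```

evaluated at horizon crossing; the EiBI modification reported in [3] departs from this only at long wavelengths.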

Large-scale magnetic fields are ubiquitous in the universe: they are observed in galaxies and clusters, even at high redshift, and recently a lower bound on the magnetic field amplitude in the intergalactic medium has been established using gamma-ray telescopes. The origin of cosmological magnetic fields is still unknown, but their presence in the voids among structures points towards generation in the very early universe. However, there is currently consensus that no satisfactory model, based on a gauge-invariant Lagrangian, is able to produce during inflation the magnetic fields present in voids. After reviewing the considerations that lead to this conclusion, I will discuss how it is possible to evade them.

A nonsingular bouncing cosmology can be achieved by introducing a fermion field with a condensate occurring at high energy scales. The gap energy density stored in the regular state of a cosmic fermion can yield a short period of ekpyrotic phase in a contracting universe, during which the unwanted primordial anisotropies are diluted. To derive a nearly scale-invariant CMB spectrum, another matter field is required. We develop one possible curvaton mechanism by introducing a second fermion field, without condensation and with a mass lighter than that of the background field. By virtue of this fermion curvaton mechanism, the model is consistent with the latest cosmological data.

We will discuss the implications of the measured mass and production/decay rates of the observed Higgs boson for several well-motivated extensions of the Standard Model.

Firstly, we will show that in the Minimal Supersymmetric Standard Model, to a good approximation, the phenomenology of the lighter Higgs state can be described by its mass and three couplings: those to massive gauge bosons and to top and bottom quarks. We will then demonstrate that the value ~GeV, together with the non-observation of superparticles at the LHC, indicates that the SUSY-breaking scale is rather high, ~TeV, leading to a Higgs sector that can be described, to a good approximation, by only two free parameters.

Secondly, we will discuss how the LHC limits on the invisible Higgs branching fraction impose strong constraints on Higgs portal models of dark matter.

With the discovery in 2012 of a Higgs boson at the LHC, the focus of the Higgs physics campaigns at the ATLAS and CMS experiments has shifted from the search for the Higgs to the complete characterization of this new particle. This is an essential step in determining whether this is the Higgs boson of the Standard Model or a component of some other, more exotic explanation of the fundamental world. Principal among these characterizations is the measurement of the interaction strength, or coupling, between this Higgs boson and the other known fundamental particles; among these couplings, the largest and arguably the most important is that between the Higgs and the top quark, the most massive of the known fundamental particles. The only direct probe of this coupling is the observation of Higgs boson production in association with top quarks at the LHC. In this talk, the search strategy and current status will be described, focusing on the results from the 2010-2012 Run 1 of the LHC, with an eye toward future studies in Run 2. The observation of this process is one of the highest-priority goals of the current LHC physics program.

Since the discovery of the Higgs-like boson at the LHC by the ATLAS and CMS experiments, the properties of the new boson have been measured with increasing precision, restricting the phase space for physics beyond the Standard Model (BSM). In this talk, the re-interpretation of these measurements by ATLAS and CMS in BSM scenarios, such as the Two-Higgs-Doublet Model, is reviewed.

A substantial far-off-shell contribution to the Higgs boson signal in the decay mode [1] facilitates a novel, complementary approach to constraining the Higgs width at the Large Hadron Collider (LHC) [2]. A first experimental analysis [3] obtained MeV (95% CL), which is times the expected value in the Standard Model at the measured Higgs boson mass. I review the theoretical basis of the new approach and discuss its significance and potential in comparison to other methods to bound and measure the Higgs width at the LHC and future colliders.

*References*

[1] N. Kauer and G. Passarino, *Inadequacy of zero-width approximation for a light Higgs boson signal*, JHEP **1208** (2012) 116 [arXiv:1206.4803 [hep-ph]].

[2] F. Caola and K. Melnikov, *Constraining the Higgs boson width with ZZ production at the LHC*, Phys. Rev. D **88** (2013) 054024 [arXiv:1307.4935 [hep-ph]];
J. M. Campbell, R. K. Ellis and C. Williams, *Bounding the Higgs width at the LHC using full analytic results for *, JHEP **1404** (2014) 060
[arXiv:1311.3589 [hep-ph]].

[3] V. Khachatryan *et al.* [CMS Collaboration], *Constraints on the Higgs boson width from off-shell production and decay to -boson pairs*, CERN-PH-EP-2014-078, arXiv:1405.3455 [hep-ex].
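Schematically, the method exploits the different dependence of the on-peak and far-off-shell rates on the total width (simplified scaling, with couplings denoted generically; this is the standard argument of [2], written here for illustration):

```latex
\sigma_{\text{on-peak}} \;\propto\; \frac{g_{ggH}^{2}\, g_{HZZ}^{2}}{\Gamma_H},
\qquad
\sigma_{\text{off-shell}} \;\propto\; g_{ggH}^{2}\, g_{HZZ}^{2},
\qquad\Longrightarrow\qquad
\frac{\sigma_{\text{off-shell}}}{\sigma_{\text{on-peak}}} \;\propto\; \Gamma_H .
```

The ratio of the off-shell to the on-peak yield is thus directly sensitive to the total width, independently of the common coupling factors.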

We give a brief introduction to the group field theory (GFT) formalism for quantum gravity, a second-quantised reformulation of loop quantum gravity and spin foam models, and a group-theoretic enrichment of the purely combinatorial tensor models. We then review some recent key developments concerning the definition and analysis of 4d gravity models, progress in GFT renormalisation, and the extraction of effective cosmological dynamics from GFT condensates.

A criticism of group field theories (GFT) as candidate definitions of the dynamics of loop quantum gravity (LQG) has been the restriction of their boundary states, defined on graphs, to fixed valency. In this contribution we will review the classes of spacetime complexes common in LQG and give definitions in terms of combinatorial complexes. We will then present a class of GFTs generating all of these, including the so-called KKL spin-foam models.

In recent years, spin foam models have been proposed to define a path integral for quantum gravity in a non-perturbative and background-independent way.

In this talk I will address the questions of the continuum limit and diffeomorphism symmetry in these models, and relate them to the notion of a Wilsonian renormalization group flow. A proposal for a background-independent formulation of renormalization for spin foam models will be presented, which does not require a notion of length scale. The concept of cylindrical consistency plays a crucial role in this context, and I will show how it naturally defines the RG equations, providing a continuum limit for the theory.

We develop a technique for describing quantum states of the gravitational field in terms of coarse-grained spin networks, following earlier insights by Livine and Terno [2], [3]. This technique shows that the number of nodes and links in a quantum state of gravity, and the spins associated to the links, depend on the observables chosen for the description of the state; the question of the title is therefore ill posed, unless further information about what is being precisely measured is given.

*References*

[1] S. Ariwahjoedi, J. S. Kosasih, C. Rovelli, F. P. Zen, *How many quanta are there in a quantum spacetime?*, arXiv:1404.1750 [gr-qc].

[2] E. R. Livine and D. R. Terno, *Bulk entropy in loop quantum gravity*, Nucl. Phys. B **794** (2008) 138-153, arXiv:0706.0985 [gr-qc].

[3] E. R. Livine and D. R. Terno, *Reconstructing Quantum Geometry from Quantum Information: Area Renormalisation, Coarse-Graining and Entanglement on Spin Networks*, arXiv:gr-qc/0603008.

[4] D. Colosi and C. Rovelli, *What is a particle?*, Class. Quant. Grav. **26** (2009) 025002, arXiv:gr-qc/0409054.

I will present some recent developments in the theory of non-formal Drinfel'd twists. A Drinfel'd twist is a tool that allows one to deform, in an associative way, any associative algebra that possesses a given symmetry; for instance, the Moyal twist is a Drinfel'd twist for an abelian symmetry. I will report a general method for constructing Drinfel'd twists based on generally non-abelian symmetries, and conclude by mentioning applications in various domains.

In this talk, I will survey some of the results obtained in [3, 4, 5]. On a flat (pseudo-)Riemannian manifold, the higher symmetries of the Laplacian form an associative algebra of differential operators, determined by Eastwood in [1]. It plays a central role in higher-spin field theory and gives a geometric realization of a highly non-trivial object from Lie theory: the Joseph ideal [2]. Using quantization methods, I propose a simple proof of Eastwood's result and extend it to the system of Laplace and Dirac operators. In dimension , its higher symmetries are generated by the conformal supersymmetries discovered by Wess and Zumino [6].

*References*

[1] M. G. Eastwood. *Higher symmetries of the Laplacian.* Ann. of Math. (2), 161(3):1645–1665, 2005.

[2] A. Joseph. *The minimal orbit in a simple Lie algebra and its associated maximal ideal.* Ann. Sci. École Norm. Sup. (4), 9(1):1–29, 1976.

[3] J.-Ph. Michel. *Higher symmetries of Laplacian via quantization.* Ann. Inst. Fourier (to appear).

[4] J.-Ph. Michel, F. Radoux, and J. Šilhan. *Second order symmetries of the conformal Laplacian.* SIGMA **10**, Paper 016, 2014.

[5] J.-Ph. Michel and J. Šilhan. *Higher symmetries of Laplace and Dirac operators.* In preparation.

[6] J. Wess and B. Zumino. *Supergauge transformations in four dimensions.* Nucl. Phys. B, 70(1):39 – 50, 1974.

I shall discuss the general framework of symplectic diffeology through examples, in infinite dimensions and in singular contexts: the infinite projective space, orbifolds, irrational symplectic tori, etc. I will discuss how the condition of being symplectic in diffeology involves at the same time the universal moment map, relative to the automorphisms of the structure, and the decomposition of the space into orbits under the automorphisms. I will also discuss a family of examples that mixes infinite dimension and singular quotients: the infinite quasi-projective spaces.

This will show how diffeology is a workable theory that handles the objects of mathematical physics in a simple but rigorous way.

There is general agreement that undergraduate physics education presents a global challenge, which has led to many pedagogical innovations designed to enhance students' understanding of physical concepts. In France, current orientations focus on the development of inquiry-based approaches, with a model inherited from reforms in primary and secondary science education. Such an approach is promoted for its interactive nature, placing students in a research-based environment. It is based on the view that the design of the learning process should take into account the knowledge previously acquired by students, ensuring that they fully understand the concepts involved and can apply their understanding to new situations. The purpose of this communication is to present long-established results in physics education research and to stress their impact on teaching practices. Using examples of scientific concepts known to present obstacles to understanding, the communication aims to demonstrate that recurring ideas can be found among undergraduate students. These ideas may have limited validity with respect to the scientific concepts; they are persistent and can be found among younger pupils, graduates, and teachers, and they usually follow a logic rooted in a historical and cultural context. Many research studies indicate that the information presented through traditional science instruction is often incompatible with students' knowledge and beliefs, and consequently students persist in holding their alternative ideas about physics concepts. This leads us to advocate a deeper understanding of the process of learning and teaching physics, and of the conditions for its efficiency, through close collaborations between physics academics and physics education researchers.

Notwithstanding the proliferation of virtual tools of scientific communication, the use of real experiments presented live by science explainers remains extremely valuable. Beyond obvious pedagogical and epistemological benefits, a demonstration is also a source of ethical and aesthetic reflection [1]. We will discuss some aspects and practical modes of the implementation of experimental shows popularizing fundamental physics (as performed in science museums in particular), according to the nature of the targeted public and the phenomena popularized.

*References*

[1] Trap, G. (2011). Le spectacle de la nature. In A. Giordan, J.-L. Martinand & R.-E. Eastes (Eds.), *L'idée de nature dans la médiation et l'éducation scientifiques, Actes des 31es Journées internationales de l'éducation scientifique*, (12 pages).

There has been much research on students’ misunderstandings of force. Some physicists have pointed out, however, that we do not know what force is. The most common definition of force in textbooks - force is the cause of acceleration - has been criticized for more than two centuries (d’Alembert, Carnot, Kirchhoff, Mach, Hertz, Poincaré, among others). By means of the simple Atwood machine problem it will be shown that the concept of force as cause of acceleration can be misleading [1,2,3]. It can be shown, however, that this concept of force is a logical consequence of the law of inertia. This law, in turn, cannot be tested experimentally (Planck, Schaefer, French, Nolting, among others). To avoid a statement of which we cannot be sure, the present paper proposes to understand the law through its function in the theory. In this case, we do not have to say how a free body moves, but rather that the rectilinear and uniform motion is the motion of reference in Newtonian mechanics. This enables us to overcome that difficulty with the concept of force.

*References*

[1] Poggendorff, J. C. (1853). Abänderung der Fallmaschine. Monatsberichte der Königlichen Akademie der Wissenschaften zu Berlin (pp. 627–629).

[2] Graneau, P., Graneau, N. (2006). In the grip of the distant universe: The science of inertia. New Jersey: World Scientific.

[3] Coelho, R.L. (2013) Could HPS Improve Problem-Solving?. Science & Education 22, 1043-1068. https://www.springerlink.com/content/l246453353g12818/fulltext.pdf
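For concreteness, the textbook treatment of the simple Atwood machine discussed above applies Newton's second law to each hanging mass (standard result, assuming $m_1 > m_2$ and a massless, frictionless pulley):

```latex
\begin{aligned}
m_1 g - T &= m_1 a,\\
T - m_2 g &= m_2 a,\\[4pt]
\Rightarrow\quad a &= \frac{m_1 - m_2}{m_1 + m_2}\,g,
\qquad
T = \frac{2\,m_1 m_2}{m_1 + m_2}\,g .
\end{aligned}
```

The question raised in the talk is what, if anything, licenses reading the force terms on the left as the *cause* of the acceleration on the right.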

What is Logic? My answer consists of three steps: Logic is about Language, Language is about the World. Panlogism is the attempt to “short-circuit” this process by identifying the real object with the “concrete-in-thought.” It leads to the assertion: Logic is about the World. If we abandon panlogism, we see that quantum logics (note the plural!) are just different ways of reformulating the same content. The danger is that, if we accept the idea of one unique quantum logic that provides all the answers, this prevents us from confronting the real questions about quantum mechanics.

I propose to begin an assessment of the debate on the intervention of category theory in physics. (1) I begin by considering the classical relationship between geometry and physics, using the examples of Newton, of a formalism of the theory of GR, and of quantum mechanics. (2) I will show that the categorical formulation provides numerous epistemological and philosophical benefits, and will make a distinction between a local and a global reformulation. (3) I show why one must choose, and often develop, a specific understanding of the theory being categorized; the risk that the theory of categories develops on its own, regardless of the initial physical target, is always incurred, and at this level the danger of leaving the physics out is always present. (4) I want to show how this risk is itself relevant, for reasons both physical and mathematical. (5) I will discuss some relationships between geometrization and categorification. My analysis will focus on twistor theory.

We provide an algebraic formulation of C. Rovelli's relational quantum theory [1], based on suitable notions of “non-commutative” higher operator categories originally developed in the study of categorical non-commutative geometry [2,3,4].

As a way to implement C. Rovelli's original intuition on the relational origin of space-time [5], in the context of our proposed algebraic approach to quantum gravity via Tomita-Takesaki modular theory [6], we tentatively suggest using this categorical formalism to spectrally reconstruct non-commutative relational space-time geometries from categories of correlation bimodules between operator algebras of observables.

*References*

[1] C. Rovelli, *Relational Quantum Mechanics*, Int. J. Theor. Phys. 35 (1996) 1637 [arXiv:quant-ph/9609002]

[2] P. Bertozzini, R. Conti, W. Lewkeeratiyutkul, *Non-commutative Geometry, Categories and Quantum Physics*, [arXiv:0801.2826v2].

[3] P. Bertozzini, R. Conti, W. Lewkeeratiyutkul, *Categorical Non-commutative Geometry*, J. Phys.: Conf. Ser. 346 (2012) 012003.

[4] P. Bertozzini, R. Conti, W. Lewkeeratiyutkul, N. Suthichitranont, *Strict Higher C\*-categories*, preprint(s) to appear.

[5] C. Rovelli, *Half Way Through the Woods*, The Cosmos of Science, J. Earman, J. Norton (eds.), University of Pittsburgh Press (1997) 180-223.

[6] P. Bertozzini, R. Conti, W. Lewkeeratiyutkul, *Modular Theory, Non-commutative Geometry and Quantum Gravity*, SIGMA **6** (2010) 067 [arXiv:1007.4094v2].

We propose a unified framework, based on the notion of histories, which applies both to time dynamics and to field theories. It admits a Lagrangian and a Hamiltonian formulation, both entirely covariant in the case of field theories. Dynamical equations and conservation laws take a very simple, universal expression. Developing differential calculus on the (infinite-dimensional) space of histories, we define a generalized symplectic form. It leads very naturally to the multisymplectic formalism for field theories (the usual symplectic form of time dynamics appearing as a particular case), and it provides the usual on-shell symplectic form on the space of solutions. We apply the framework to first-order canonical general relativity.

We describe the application of the continuous wavelet transform to the calculation of Green functions in quantum field theory: scalar theory, quantum electrodynamics, quantum chromodynamics. The method of continuous wavelet transform in quantum field theory presented in [1,2] consists in substituting for the local fields those that depend on both position and resolution. This substitution turns the local theory into a nonlocal one and implies causality conditions related to the scale, the *region causality* of [2]. These conditions make the Green functions finite for any given set of regions by means of an effective cutoff scale.

*References*

[1] M. V. Altaisky, *Quantum field theory without divergences*, Phys. Rev. D **81** (2010) 125003.

[2] M. V. Altaisky and N. E. Kaputkina, *Continuous wavelet transform in quantum field theory*, Phys. Rev. D **88** (2013) 025015.

[3] J. D. Christensen and L. Crane, *Causal sites as quantum geometry*, J. Math. Phys. **46** (2005) 122502.
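The passage from point-dependent fields to scale-dependent wavelet coefficients can be illustrated numerically. Below is a minimal NumPy sketch of a continuous wavelet transform with a Morlet-like mother wavelet; the function names, normalisation, and parameters are illustrative and are not taken from [1,2]:

```python
import numpy as np

def morlet(t, omega0=5.0):
    """Morlet-like mother wavelet (real part), admissibility correction omitted."""
    return np.cos(omega0 * t) * np.exp(-t**2 / 2.0)

def cwt(signal, scales, dt=1.0):
    """Continuous wavelet transform W(a, b) = (1/a) * integral f(t) psi((t - b)/a) dt."""
    n = len(signal)
    t = (np.arange(n) - n // 2) * dt
    coeffs = np.empty((len(scales), n))
    for i, a in enumerate(scales):
        # sampled, rescaled wavelet at scale a (L1 normalisation)
        psi = morlet(t / a) / a
        coeffs[i] = np.convolve(signal, psi, mode="same") * dt
    return coeffs

# usage: a pure tone responds most strongly near the scale matched to its frequency
dt = 0.01
t = np.arange(0, 10, dt)
f = np.sin(2 * np.pi * 2.0 * t)            # 2 Hz tone
scales = np.linspace(0.05, 1.0, 40)
W = cwt(f, scales, dt)
best = scales[np.argmax(np.abs(W).max(axis=1))]
```

The coefficient array `W[a, b]` resolves the signal jointly in position `b` and resolution `a`, which is the sense in which the wavelet-transformed fields of the abstract carry a resolution argument.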

This talk is based on Refs. [1,2,3]. We explore the idea that quantum field theory could be grounded on a quantum cellular automaton (QCA) model, i.e. a discrete unitary evolution of a lattice of quantum systems. In the one-dimensional case, we show that the translation, parity and time-reversal symmetries lead to a QCA model which recovers the 1D Dirac field equation in the large-scale limit. The discreteness of this model implies a breakdown of Lorentz covariance, which is replaced by a doubly special relativity model with an invariant length. Finally, we will discuss the phenomenology of the QCA model and extensions to the three-dimensional and interacting cases.

*References*

[1] A. Bisio, G. M. D'Ariano, A. Tosini, *Quantum Field as a Quantum Cellular Automaton: the Dirac free evolution in one dimension*, arXiv:1212.2839.

[2] A. Bisio, G. M. D'Ariano, A. Tosini, *Dirac quantum cellular automaton in one dimension: Zitterbewegung and scattering from potential*, Phys. Rev. A **88**, 032301 (2013).

[3] A. Bibeau-Delisle, A. Bisio, G. M. D'Ariano, P. Perinotti, A. Tosini, *Doubly-Special Relativity from Quantum Cellular Automata*, arXiv:1310.6760.
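The discrete unitary evolution underlying such models can be sketched in a few lines of NumPy. The following toy one-dimensional Dirac quantum walk (a mass coin mixing two components, followed by opposite unit shifts) is a hedged illustration; the coin and shift conventions are chosen for simplicity and are not necessarily those of [1]:

```python
import numpy as np

def dirac_qca_step(psi, m):
    """One step of a 1D Dirac quantum cellular automaton (quantum walk):
    a unitary mass coin exp(-i*m*sigma_x) on the two components,
    then opposite unit shifts. Periodic boundaries; unitary by construction."""
    c, s = np.cos(m), np.sin(m)
    # coin: mix left- and right-moving components with mass parameter m
    left = c * psi[0] - 1j * s * psi[1]
    right = -1j * s * psi[0] + c * psi[1]
    # shift: left-mover moves one site left, right-mover one site right
    return np.array([np.roll(left, -1), np.roll(right, +1)])

# usage: evolve a Gaussian wave packet and check norm conservation
N = 256
x = np.arange(N)
packet = np.exp(-((x - N / 2) ** 2) / 50.0) * np.exp(1j * 0.5 * x)
psi = np.array([packet, np.zeros(N, dtype=complex)])
psi /= np.linalg.norm(psi)
for _ in range(100):
    psi = dirac_qca_step(psi, m=0.1)
norm = np.linalg.norm(psi)
```

Because the coin is unitary and the shifts are permutations, the norm of the state is conserved exactly (up to floating-point error); in the smooth-field, small-mass limit this update rule approximates the 1D Dirac evolution.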

What did Einstein mean by stating [1]: “Could we not reject the concept of matter and build a pure field physics? What impresses our senses as matter is really a great concentration of energy into a comparatively small space. We could regard matter as the regions in space where the field is extremely strong. ... A thrown stone is, from this point of view, a changing field, where the states of greatest field intensity travel through space with the velocity of the stone. There would be no place, in our new physics, for both field and matter, field being the only reality.”

One can indeed solve the classical field equations in terms of infinite radial distributions for elementary masses and electric charges [2]. Continuous field-type particles are counted by the left-hand side of the Einstein equation, where the Ricci scalar turns out to be the geometrized mass density of such a non-empty, material space or inhomogeneous relativistic fluid [3]. Laboratory tests with precise clocks may distinguish in practice between the non-empty and empty space paradigms.

*References*

[1] A. Einstein and L. Infeld, *The Evolution of Physics*, Cambridge University Press, Cambridge U.K. (1954), pp. 257-258.

[2] I. Bulyzhenkov, *Einstein's gravitation for Machian relativism of nonlocal energy-charges*, Int. J. Theor. Phys. **47** (2008) 1261-1269.

[3] I. Bulyzhenkov, *Densities of electron's continuum in gravitational and electromagnetic fields*, Bull. Lebedev Phys. Inst. **41**, N1 (2014) 1-5.

The Cherenkov Telescope Array (CTA) will provide a new ground-based gamma-ray observatory with sensitivity ten times better than existing gamma-ray experiments. CTA will also have a much wider energy range and improved angular resolution.

The presentation will describe the telescope array, which is reaching the final stages of design and prototyping. A vital first decision for CTA will be the choice of the sites; the status of this choice will be given, together with the expected general schedule.

The science program will be presented with some focus on the plans for indirect searches for dark matter.

In the last few years, the rapid development of gamma-ray observations has opened a new window on X-ray binary (XRB) sources, with at least one solid detection so far. Associated with this gamma-ray emission is expected neutrino emission. In this talk we introduce a new lepto-hadronic model that builds on successful work fitting the lower-energy, broadband spectra of XRBs in the compact-jet-dominated state. Protons (and electrons) are accelerated throughout the jet and cool via radiation and inelastic collisions, and we calculate spectral energy distributions (SEDs) including both hadronic and leptonic induced processes. We present new predictions for the gamma-ray fluxes in the GeV-TeV range. This model can also be scaled to describe the SEDs of low-luminosity active galactic nuclei, such as our own Galactic Center black hole Sgr A*.

With more than 140 gamma-ray emitting pulsars detected, the Large Area Telescope (LAT) aboard the Fermi satellite has revolutionized the field of high-energy pulsar astronomy.

While the processes (e.g. curvature radiation, synchrotron, Inverse-Compton emission) thought to be responsible for the pulsed emission at high energies are well known, the details of the acceleration and radiation mechanisms at play are still widely debated through several models.

The analysis of the large data set of events available from the brightest gamma-ray pulsars, six years after the launch of the Fermi satellite, reveals for these individual objects a harder spectrum than previously measured (and expected), and hence increases the chance of detecting them with ground-based Cherenkov telescopes. Detection from the ground at the highest energies brings in turn precious information allowing one to severely constrain existing models. The latest results will be presented, and prospects in the field will be discussed.

The Cygnus region contains a number of powerful sources of radio and X-ray emission which are regarded as potential TeV-emitting objects. The results of 17 years of observations of Cyg X-3 at energies 0.8 - 85 TeV with the SHALON telescope, starting in 1995, are presented [1,2,3]. A number of high-activity periods of Cyg X-3 were detected with SHALON over the whole period of observation. The significant increases of flux are correlated with flaring activity in the lower-energy X-ray range and/or in Fermi LAT observations. We also present the results of long-term observations of the Cygnus region, which revealed gamma-ray emission from one of the nearby objects, the Cygni SNR [3], placed at from Cyg X-3. The results of the Cygni SNR observations by SHALON [3] are presented with the spectral energy distribution, images and integral spectra at energies 0.8 - 50 TeV.

*References*

[1] V. G. Sinitsyna, in Proc. of Towards a Major Cherenkov Detector-V, ed. O. C. de Jager (Wesprint, Potchefstroom, South Africa, 1997), v. 3, p. 273.

[2] V. G. Sinitsyna, J. Phys. Soc. Japan, Suppl. A **78** (2009) p. 92.

[3] V. G. Sinitsyna and V. Yu. Sinitsyna, Bull. Lebedev Phys. Inst. **40**(5) (2013) p. 2.

In this talk we will explore the possibility of testing dark energy and dark matter in the laboratory and the sky. As an example we will discuss chameleon cosmology models of dark energy and consider their challenges and limitations.

The propagation of gravitational waves on the background of a non-perturbative vacuum filled with a spinor field is considered. It is shown that there are several distinctions from the propagation of gravitational waves in flat space: (a) there exists a phase shift between the two wave polarizations; (b) there exists an attenuation of the waves; (c) for a given frequency there exist two waves with different wave vectors. The experimental verification of the obtained results is discussed. It is shown that experimental measurements of such waves give us tools for the investigation of non-perturbative quantum field theories.

This talk is based on [1]. A plethora of inflationary models based on modified gravity have been built in recent decades, and the first experimental probes seem to favor the inflationary paradigm.

In this talk, I will focus on one of them, Higgs inflation [2], and show the combined constraints required for such a model at cosmological and gravitational scales, i.e. for compact objects. In particular, I will show that the Higgs inflation model gives rise to particle-like solutions around compact objects, dubbed Higgs monopoles, characterized by the nonminimal coupling parameter and by the mass and compactness of the object. For large values of the nonminimal coupling, forbidden compactnesses appear.

*References*

[1] A. Füzfa, M. Rinaldi, S. Schlögel, *Particlelike distributions of the Higgs field nonminimally coupled to gravity*, Phys. Rev. Lett. **111** (2013) 121103 [arXiv:1305.2640 [gr-qc]]; S. Schlögel, M. Rinaldi, F. Staelens and A. Füzfa, *Particlelike solutions in modified gravity: the Higgs monopole*, submitted to PRD [arXiv:1405.5476 [gr-qc]].

[2] F. L. Bezrukov and M. E. Shaposhnikov, *The Standard Model Higgs boson as the inflaton*, Phys. Lett. B **659** (2008) 703 [arXiv:0710.3755 [hep-th]].

The successful data taking of the LHC experiments during Run 1 has already allowed for a detailed investigation of possible models with underlying supersymmetry, as well as for the discovery of a Higgs boson. In this talk, the status of the experimental results will be briefly summarized, and the implications of the findings of LHC Run 1 for viable models based on supersymmetry will be discussed. Constraints from direct searches as well as from indirect measurements, in particular the Higgs boson mass, will be taken into account.

Resonances decaying into a pair of particles are an obvious place to look for phenomena beyond the Standard Model. This talk summarizes recent results from the ATLAS and CMS experiments on searches for resonances in pairs of jets, leptons, bosons, or a mix of those objects. Various models are considered, from Z' and W' to the ADD large-extra-dimension scenario, from excited quarks and technicolor to quantum black holes, as well as contact interactions. Results from the TeV data are presented.

It is widely acknowledged that the high-energy behaviour of quantum field theories should be governed by an ultraviolet fixed point. The fascinating idea that such a fixed point could be interacting, known as asymptotic safety, suggests that metric quantum gravity may well be fundamental in its own right, and has triggered much of the recent interest in a quantum theory of gravity [1]. In this talk, I discuss the origin of asymptotic safety in four-dimensional models of particle physics [2] and in quantum gravity [1,3,4]. These insights are then exploited to understand quantum gravity effects for black hole space-times and modifications of the classical laws of black hole thermodynamics and entropy [5,6,7].

*References*

[1] D. F. Litim, *Renormalisation group and the Planck scale*, Phil. Trans. Roy. Soc. Lond. A **369** (2011) 2759 [arXiv:1102.4624 [hep-th]].

[2] D. F. Litim and F. Sannino, *Asymptotic safety guaranteed*, arXiv:1406.2337 [hep-th].

[3] D. F. Litim, *Fixed points of quantum gravity*, Phys. Rev. Lett. **92** (2004) 201301 [hep-th/0312114].

[4] K. Falls, D. F. Litim, K. Nikolakopoulos and C. Rahmede, *A bootstrap towards asymptotic safety*, arXiv:1301.4191 [hep-th].

[5] K. Falls, D. F. Litim and A. Raghuraman, *Black Holes and Asymptotically Safe Gravity*, Int. J. Mod. Phys. A **27** (2012) 1250019 [arXiv:1002.0260 [hep-th]].

[6] K. Falls and D. F. Litim, *Black hole thermodynamics under the microscope*, Phys. Rev. D **89** (2014) 084002 [arXiv:1212.1821 [gr-qc]].

[7] D. F. Litim and K. Nikolakopoulos, *Quantum gravity effects in Myers-Perry space-times*, JHEP **1404** (2014) 021 [arXiv:1308.5630 [hep-th]].

We examine the general issue of whether a scale dependent cosmological constant can be consistent with general covariance, a problem that arises naturally in the treatment of quantum gravitation where coupling constants generally run as a consequence of renormalization group effects. The issue is approached from several points of view, which include the manifestly covariant functional integral formulation, covariant continuum perturbation theory about two dimensions, the lattice formulation of gravity, and the nonlocal effective action and effective field equation methods. In all cases we find that the cosmological constant cannot run with scale, unless general covariance is explicitly broken by the regularization procedure.

*References*

[1] H. W. Hamber and R. Toriumi, *Inconsistencies from a Running Cosmological Constant*, Int. J. Mod. Phys. D **22**, no. 13, 1330023 (2013) [arXiv:1301.6259 [hep-th]].

In the Batalin-Fradkin-Vilkovisky (BFV) approach to the quantization of gauge theories, a principal role is given to the BRST charge, which can be constructed as a series in Grassmannian (ghost) variables with coefficients given by generalized structure functions of the constraint algebra. Alternatively, the BRST charge can be derived by making use of the Noether theorem and the global BRST invariance of the effective action. In the case of Yang-Mills fields both methods lead to the same expression for the BRST charge, but this is no longer true in the case of General Relativity. This is illustrated by the example of an isotropic cosmological model, as well as by a spherically symmetric gravitational model, which imitates the full theory of gravity much better. The consideration is based on the Hamiltonian formulation of General Relativity in extended phase space. At the quantum level the structure of the BRST charge is of great importance, since BRST-invariant quantum states are believed to be physical states. Thus, the definition of the BRST charge at the classical level is inseparably related to our attempts to find a true way to quantize gravity.

*References*

[1] T. P. Shestakova, *Class. Quantum Grav.* **28** (2011) 055009.

[2] T. P. Shestakova, to be published in *Grav. Cosmol.* **20** (2014), pp. 67-79.

Drinfeld twist deformation theory of modules and algebras that carry a representation of a Hopf algebra H can be extended to also deform morphisms and connections that are not H-equivariant. In this talk I present how similar techniques allow one to canonically deform principal H-bundles, and, more generally, how Hopf-Galois extensions are canonically deformed to new Hopf-Galois extensions.

Twisting the structure group, we obtain principal bundles with a noncommutative fiber, where the structure group is a quantum group. Twisting the automorphism group of the principal bundle, we further obtain a noncommutative base space.

In recent years, there has been a lot of interest in studying the topological phases of quantum matter. A K-theoretic approach was suggested by Kitaev, who produced a Periodic Table of topological insulators and superconductors [2]. We take the algebraic viewpoint, and study the gapped topological phases of free fermions through a twisted crossed product C*-algebra associated to the symmetry data of the system. Allowing for projective unitary-antiunitary representations, in the sense of Wigner, as well as charge-conjugation symmetries, leads to a ℤ₂-graded real twisted group C*-algebra, which completely encodes all the symmetry data of a quantum system. We define two K-theory-type invariants of this algebra: the super-representation group classifies symmetry-compatible gapped phases, while the K-theoretic difference-group classifies differences between stable homotopy classes of such phases. We also provide a consistent physical interpretation of these classification groups, which appears to vary between existing treatments in the literature. Our approach generalises, to the non-commutative setting, the twisted K-theory approach of Freed and Moore [1]. It has the advantage of treating all symmetries on an equal footing, and powerful results from the K-theory of crossed products are available. We recover Kitaev's Periodic Table as a special case, and clarify the origin of the periodicities and “dimension-shifts” in his table.

*References*

[1] Freed, D.S. and Moore, G.W., *Twisted Equivariant Matter*, Ann. Henri Poincaré, 14(8), pp. 1927–2023 (2013)

[2] Kitaev, A., *Periodic table for topological insulators and superconductors*, arXiv:0901.2686 (2009)

The phenomenon of optical diffraction is crucial for recognizing the wave behavior of light [1]. It limits the resolving power of optical instruments, including the human eye. It is therefore of fundamental importance not only for practical applications, as in microscopy, but also for the ability of our eye to distinguish two objects, and for our perception of one colour next to another, an aspect exploited for example by Pointillist painters [2].

The exploration of optical diffraction in the didactic laboratory with on-line sensors offers high-school students a unique opportunity to experience this important phenomenological context. Computer-modeling activities allow them to pass from the phenomenology to its interpretation [3-5].

A research-based path was developed for upper secondary school, approaching optical diffraction through the experimental exploration of the diffraction pattern produced by a laser beam incident on a single slit. Students first analyze the diffraction pattern qualitatively, recognizing its global properties, then measure the light intensity versus position with on-line sensors, constructing empirical relations between the order and position of minima, the order and position of maxima, and the position and intensity of maxima. A computer model based on Huygens' principle is used to fit the experimental data, reproducing the empirical relations and characterizing both the experimental distribution and the one obtained with the model [3, 6].
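As an illustration of the kind of model students compare with sensor data, here is a minimal sketch based on the standard single-slit Fraunhofer formula (this is not the actual software used in the study, and the laser and slit parameters are assumed values):

```python
import numpy as np

def single_slit_intensity(x, wavelength, slit_width, distance, i0=1.0):
    """Fraunhofer intensity at screen position x (metres) for a single slit.

    I(x) = I0 * sinc^2(a*x / (lambda*L)), using numpy's normalized sinc
    (sinc(u) = sin(pi*u)/(pi*u)), so minima fall at x = m * lambda * L / a.
    """
    u = slit_width * x / (wavelength * distance)
    return i0 * np.sinc(u) ** 2

# Assumed setup: 650 nm laser, 0.1 mm slit, screen 1 m away.
x = np.linspace(-0.05, 0.05, 2001)          # screen positions (m)
intensity = single_slit_intensity(x, 650e-9, 0.1e-3, 1.0)

# Predicted position of the first minimum: x = lambda * L / a
first_min = 650e-9 * 1.0 / 0.1e-3           # = 6.5 mm
```

With this normalized sinc, the minima land at integer values of u, i.e. at screen positions x = m λL/a, which is precisely the order-versus-position relation the students construct empirically.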

Experimentation in schools was carried out with 85 students, using IBL tutorial worksheets and pre- and post-tests. Positive learning paths emerged concerning the role of diffraction in everyday situations, activated by the qualitative analysis of the global properties of the diffraction pattern. The characteristic properties of the diffraction pattern, explored with on-line sensors and combined with the modeling activities, helped students move from a geometric point of view, based on rectilinear rays, to a physical one, based on an interference/intensity analysis.

*References*

[1] L. C. Pereira, J. A. Ferreira, H. A. Lopes eds., Light and Information, Girep book, Univ. do Minho, Braga. (1993).

[2] F. Corni, M. Michelini, G. Ottaviani, Material Science and optics in the Arts: Case studies to improve Physics Education, in Teaching and Learning Physics in New Contexts, E. Mechlova, L. Konicek eds., selected papers in Girep book, Ostrava, Czech Republic [ISBN 80-7042-378-1], (2004), pp. 97-99.

[3] K. Wosilait, P. R. L. Heron, P. S. Shaffer, L. C. McDermott, Addressing student difficulties in applying a wave model to the interference and diffraction of light, Phys. Educ. Res., Am. J. Phys. Suppl. 67 (7) (1999) pp. S5-S15.

[4] F Corni, V Mascellani, E Mazzega, M Michelini, G Ottaviani, A simple on-line system employed in diffraction experiments, in Light and Information, Girep book, L C Pereira, J A Ferreira, H A Lopes Editors, Univ. do Minho, Braga (1993), pg.381-388.

[5] K. Hirata, How can we use microcomputers effectively in teaching and learning physics?, Communicating Physics, ICPE (IUPAP), (1998) p. 132.

[6] M. Michelini, A. Stefanel, L. Santi, Teacher training strategies on physical optics: experimenting the proposal on diffraction, in Quality Development in the Teacher Education and Training, M. Michelini ed., selected papers in Girep book, Forum, Udine [ISBN: 88-8420-225-6], (2004), pp. 568-576.

The teaching of physics to first-year post-baccalaureate students has been confronted with increasing difficulties for several years. This problem has been reinforced this year in France by a new reform of the secondary educational system. We have experimented with a new pedagogical method in the cursus “préparation aux écoles d'ingénieurs polytech”, the first year of the Polytech engineering school of Aix-Marseille University.

The pedagogical method is a mix of “problem based learning” and “peer instruction”. The results of the method are very encouraging.

The outline of this talk is the following:

– listing of problems encountered by students and teachers

– presentation of the new pedagogical method

– results of the first year's experimentation

In his classical paper “On Sense and Reference” [1] Frege asks: in what sense are the Morning Star (MS) and the Evening Star (ES) the same planet, Venus? As Frege observes, the assertion of identity MS=ES, unlike the assertion of identity MS=MS (or MS=MorningStar), has empirical content and in this respect is not trivial. So it is unclear how the same notion of identity may apply in these two very different cases. Frege solves the problem by distinguishing between the sense and the reference of a given linguistic expression: although the sentences MS=MS and MS=ES have the same reference (the True), they still have different senses (i.e., different meanings).

Notice that Frege's example is not purely linguistic: in his time similar questions concerning the identities of some comets and asteroids remained open; in today's astronomy the identification of new astronomical sources is never given for free but always has a complicated procedural character. Although Frege managed to explain how the identity MS=ES can hold in principle, his theory of identity provides no clue as to how this or any other identity occurring in the empirical sciences can possibly be grounded and justified. Since Frege's theory does not take the issue of empirical justification into account, it remains largely irrelevant to the practice of the empirical sciences. Given that the naive pretheoretical understanding of identity proves insufficient at least in some areas of modern science (think of particle physics), the need for a new formal approach to identity in physics and other empirical sciences seems to me obvious.

In this paper I develop a constructive approach to identity in physics based on Martin-Löf's Constructive Type Theory [2] and Voevodsky's Homotopy Type Theory [3]. While earlier attempts to modify the classical identity were mostly motivated by the idea of its 'weakening' (as in the case of Krause's theory of quasi-sets, for example), Homotopy Type Theory allows for a view of identity as a construction from available empirical data and theoretical predictions/retrodictions. Suppose, after Frege, that the identities of MS and ES are somehow fixed beforehand. This means that one is in a position to identify two independent observations of MS as observations of one and the same object; similarly for ES. In Martin-Löf's theory such presupposed identities are called definitional, while the non-trivial identity MS=ES counts as propositional. In order to establish the proposition MS=ES one uses the available observational data (along with a theory allowing for predictions and retrodictions of future and past positions of celestial bodies) and reconstructs a continuous path (trajectory) from MS to ES. In classical celestial mechanics such a continuous trajectory indeed qualifies as the wanted identity proof, namely, as the evidence that MS and ES are in fact one and the same planet continuously moving from its morning position to its evening position.
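The definitional/propositional distinction is directly visible in modern proof assistants built on Martin-Löf-style type theory. As a hedged illustration (Lean 4 syntax; this example is mine, not part of the paper):

```lean
-- Definitional identity: holds by mere computation, so the
-- reflexivity proof `rfl` is accepted without further argument.
example : 2 + 2 = 4 := rfl

-- Propositional identity: `m + n = n + m` does not hold by
-- computation alone; a genuine proof object must be constructed.
example (m n : Nat) : m + n = n + m := Nat.add_comm m n
```

Here `rfl` plays the role of a presupposed identity like MS=MorningStar, while the appeal to `Nat.add_comm` mirrors the non-trivial construction needed to establish an identity like MS=ES.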

Modern physics provides contexts where such 'identity paths' are multiple and support non-trivial homotopic structures. Two obvious examples are gravitational lensing and Feynman path integrals (interpreted in terms of multiple paths of the same particle). In both these cases the identification (of sources and particles, correspondingly) involves not only paths but also their homotopies (i.e. 'paths between paths' or '2-paths'). Thus in these cases the second-order identities (in the sense of Martin-Löf's theory) also acquire a physical meaning. Whether still higher-order identities may equally have some physical meaning remains a research question. A recent work by Schreiber [4], where higher-order identities are understood as gauge transformations, suggests a positive answer. Pushing this line of thought, one may tentatively consider Homotopy Type Theory as a general logico-mathematical framework for representing physical objects: in this framework objects are represented by higher-order homotopy groupoids, which determine the objects' identity types and also their topological properties.

Tentative applications of higher identity types in empirical contexts help one to clarify the distinction between definitional and propositional identity in Martin-Löf's theory from an epistemological standpoint. Admittedly, one cannot carry out scientific reasoning without taking some notion of identity for granted. In Martin-Löf's theory this role is played by the definitional identity. However, there is no reason to consider the definitional identity as fundamental. We treat the identity MS=MorningStar as definitional simply because we have decided (after Frege) not to analyze the way in which different phenomena observed at different times by different people are identified as observations of the same Morning Star. In a different context the non-trivial empirical character of this identity can similarly be taken into account. Thus a definitional identity has the character of an explicit assumption (that can be questioned and analyzed if needed) rather than that of an ultimate foundation.

*References*

[1] G. Frege, *On Sense and Reference*, in: Translations from the Philosophical Writings of Gottlob Frege, ed. by P. Geach and M. Black, Oxford: Basil Blackwell, 1952, pp. 56-78.

[2] P. Martin-Löf, *Intuitionistic Type Theory (Notes by Giovanni Sambin of a series of lectures given in Padua, June 1980)*, Napoli: BIBLIOPOLIS, 1984

[3] V. Voevodsky, *Homotopy Type Theory: Univalent Foundations of Mathematics*, Institute for Advanced Study (Princeton) 2013; available at http://homotopytypetheory.org/book/

[4] U. Schreiber, *Quantization via Linear homotopy types*, arXiv:1402.7041

For nearly two decades, a decline of interest in scientific studies has prompted the choice of new objectives for science teaching in many countries. To put it briefly, affective factors like motivation and the development of competencies, for instance critical analysis, have received much attention, as have new approaches to teaching, like Inquiry Based Science Education. Although multiple learning benefits are invoked in each case, also for the future citizen, there is often, *de facto*, a trend toward less conceptual development and structuring, be it in teaching objectives or in students' achievements. I will briefly discuss the risks of oversimplification and teaching rituals in physics, and the need for developing a critical stance in students. I will then discuss, based on two investigations at upper secondary or university level (hot air balloon, radiocarbon dating), the idea that a competence like critical analysis should not be envisaged separately from a minimum of conceptual development. The final discussion will bear on implications for teaching.

A second part of this talk will be given in a session about physics education: From a subtractive to multiplicative approach, two concept-driven interactive pathways on the selective absorption of light.

The *traditional view* is that a theory is a conceptual framework providing predictions, and the results of experiments or observations decide whether the theory is right or wrong. It will be contrasted with the *modern view* that one must incorporate the conditions of applicability of a concept into the very meaning of the concept (*measurability analysis*), and that only a series of theories (*scientific research program*) can be said to be scientific or unscientific. This modern view will be applied to a number of questions in quantum mechanics (*what is quantization?, states vs processes, open vs closed systems*) and quantum field theory (*particles and field quanta, bosons vs fermions*), and to the search for a theory of quantum gravity (*background independent vs fixed background theories*).

The origin of the galaxies represents an important focus of current cosmological research, both observational and theoretical. Its resolution involves a comprehensive understanding of star formation and evolution, galaxy dynamics, supermassive black holes, and the cosmology of the very early universe. I will review our current understanding of galaxy formation and describe some of the challenges that lie ahead. Specific issues that I will address include the star formation rate in galaxies and the galaxy luminosity function, including the role of feedback.

I will review some of the recently uncovered connections between dark energy and modified gravity. Dark energy involves light scalar fields which would naturally lead to deviations from Newton's law in the solar system. Those are extremely constrained by gravity tests. I will present how one can reconcile dark energy on large cosmological scales with gravity as tested in the solar systems. For that, I will introduce screened modified gravity models and discuss their properties.

Motivated in part by the wish to “replace” dark energy by a large-distance modification of gravity, a large body of work has led to a better understanding of the properties and pathologies of theories of “massive gravity”, and of closely related models such as “Galileons”. This body of work mainly developed from the Dvali-Gabadadze-Porrati (DGP) model, which was proposed almost 15 years ago and was the first framework to link explicitly the cosmic acceleration with a large-distance modification of gravity, and culminated with the more recent de Rham-Gabadadze-Tolley (dRGT) theory, which is now believed to avoid certain pathologies present in previous constructions. In between, these works also led to several other proposals, many of which use the “Vainshtein mechanism” to hide, at intermediate distances, effects which only show up at cosmological scales. I will review these works, stressing in particular the remaining open questions.

The discovery of the accelerated expansion of the Universe has come relatively late in our study of the cosmos, but in showing that gravity can act repulsively, it has opened up many new questions about the nature of gravity and what the Universe might contain. Is the acceleration being driven by dark energy? Or is general relativity (GR) itself in error, requiring a modification at large scales to account for the late acceleration? Structure formation in our Universe can be different even if the geometry of the homogeneous and isotropic universe is the same in these two classes of models, offering a possibility to distinguish between them observationally. Non-linear structure formation is complicated by the fifth force that commonly appears in modified gravity models and new techniques are required to analyse it. We will discuss novel methods to test GR on cosmological scales, building on the recent developments of N-body simulations for modified gravity.

The relativistic ejection of plasmas from black hole environments and pulsars leads to the production of high-energy radiation and cosmic rays, possibly ultra-high-energy cosmic rays up to a few 10²⁰ eV, through a special kind of shock. This self-sustaining, nonlinear structure, called a collisionless relativistic shock, will be presented; it is considered to explain the high-energy phenomena through the interplay of a front made of an electromagnetic barrier, the generation of very intense magnetic turbulence, and the generation of a population of high-energy particles. Numerical simulations, theoretical developments, and possible experiments at powerful laser facilities on these relativistic collisionless shocks have stimulated significant progress in high-energy astrophysics.

*References*

[1] A. Spitkovsky, 2008, ApJ, 673, L39

[2] I. Plotnikov, G. Pelletier, M. Lemoine, 2013, MNRAS, 430, 1280

[3] M. Lemoine, G. Pelletier, L. Gremillet, I. Plotnikov, 2014, MNRAS, 440, 1365

Ultra-relativistic heavy ion collisions allow us to study the densest and hottest forms of matter that can be created in the laboratory, states of matter that have existed in the early universe for only a brief instant, a few microseconds after the big bang.

In this talk, I shall present a short overview of the latest developments in the field, by choosing a few highlights from the results obtained at the LHC. I shall also discuss the evolution of ideas and concepts that have been triggered by these experiments.

Three fundamental problems in the field of UHE astrophysical particles are reviewed.

UHE particles are observed at energies higher than 10¹⁸ eV, with ~3×10²⁰ eV as the highest energy. In principle, in cosmology there are reliable mechanisms of particle production with energies much higher than these (e.g. Topological Defects or Super Heavy Dark Matter), but this production most probably cannot explain the observational data. It is widely argued nowadays that traditional acceleration, e.g. acceleration by relativistic shocks, cannot provide the observed highest energies.

The second fundamental problem is the propagation of protons and nuclei in extragalactic space. This problem has been thoroughly studied theoretically, with the prediction of spectral features for protons, the dip and the GZK cutoff, which are observed in the data of HiRes and Telescope Array but contradict the mass composition measured by Auger.

The third fundamental problem is cosmogenic neutrinos, produced by the interaction of UHE protons and nuclei with the CMB and EBL background radiation. The neutrinos detected by IceCube in 2010-2012 do not correspond to the standard predictions, and the detection of cosmogenic neutrinos probably awaits the future space detector JEM-EUSO.

Gamma-ray bursts are the most powerful explosions in the Universe. They appear as brief bursts of gamma-rays followed by an afterglow at longer wavelengths. They involve complex physics, with a relativistic ejection by a new-born compact stellar-mass source, and several phases of emission associated with internal dissipation in the ejecta and with the interaction of the ejecta with its environment. I will review the observations of these phenomena and discuss our current understanding of their physical origin.

After a decade of operation of third-generation Imaging Atmospheric Cherenkov telescopes, not only have the number of known very-high-energy γ-ray (E > 100 GeV) extragalactic sources, and our knowledge of them, increased significantly, but the use of their emission, either as a probe of the intergalactic medium for cosmological applications or as a tool for exploring the frontiers of fundamental physics, has proven essential. I will review the field and discuss the latest results from the major observatories.

Since its launch by NASA in June 2008, the Fermi satellite has opened a new window on the gamma-ray sky. On board Fermi, the Large Area Telescope (LAT) detects photons with energies from 20 MeV to more than 300 GeV. After almost 6 years of monitoring the sky, the LAT has increased our knowledge of the extragalactic sky with unexpected and spectacular discoveries, like the Cen A lobes, a detailed measurement of the EBL, and more. In this talk, I will review the most important results on the extragalactic sky from the last 6 years, as well as what can be expected in the future.

At the Pierre Auger Observatory, several searches for anisotropies in the distribution of arrival directions of cosmic rays detected above ~100 PeV are undertaken. Although no significant deviation from isotropy is revealed at present, some measurements related to the angular distributions at large scales are suggestive of dipole patterns of small amplitudes over a wide energy range. Upper limits on the dipole and quadrupole moments derived from these analyses constrain scenarios in which cosmic rays could originate from stationary Galactic sources emitting in all directions up to the ankle energy. This is supported by the absence of any detectable point-like sources above 1 EeV that would be indicative of a neutron flux. At ultra-high energies, searches for correlations of cosmic rays with populations of nearby extragalactic objects are also presented.

We explore the possibility of generalized gauge-invariant p-form theories on flat space-time. 'Galileons' are a well-known example of scalar fields (0-forms) that admit a non-linear extension of the second-derivative terms in the field equations. We prove that there is an obstruction to extending to non-linear order in the second derivatives for gauge-invariant p-forms when p is odd. That is, the equations of motion for the p-form field are at most linear in the second derivatives of the field.

This talk will be mostly based on [2]. I will discuss the cosmology of the recently introduced “mimetic” modification of general relativity [1]. This modification is manifestly Weyl-invariant and can mimic practically any cosmological epoch [2, 3]. In particular, this “mimetic” modification can naturally account for a part of the dark matter [2, 3]. It is also possible to unify such dark matter with dark energy, as was shown in [3]. Finally, one can construct an interesting single-field model of early-universe inflation. This model has parametrically suppressed gravity waves but does not seem to produce any non-Gaussianity; moreover, the usual consistency relation for single-field inflationary models gets modified.

*References*

[1] A. H. Chamseddine, V. Mukhanov, *Mimetic Dark Matter*, JHEP **1311** (2013) 135, arXiv:1308.5410

[2] A. H. Chamseddine, V. Mukhanov, and A. Vikman, *Cosmology with Mimetic Matter*, JCAP **1406** (2014) 017, arXiv:1403.3961

[3] E. A. Lim, I. Sawicki, and A. Vikman, *Dust of Dark Energy*, JCAP **1005** (2010) 012, arXiv:1003.5751

The discovery of the accelerating expansion of the Universe is motivating an impressive amount of theoretical and observational activity. I will focus on recent and ongoing works that aim at a unifying description of dark energy and modified gravity models containing a scalar degree of freedom in addition to general relativity. Such an effective field theory approach allows, on the one hand, a transparent analysis of the possible theoretical mechanisms at the basis of dark energy; moreover, it provides a useful set of parameters that can be efficiently constrained with observations. I will show the present observational constraints based on the growth rate of cosmic structures and the forecasts for future surveys such as EUCLID.

This talk presents an interpretation of the cosmological constant and of the dark energy, in terms of deformation energy density of space-time. The latter is treated as an elastic medium, whose Lagrangian density is built of terms proportional to the lowest (in practice the second) order scalars associated to the strain tensor, i.e. the difference between the actual and the Euclidean metric tensors, and its first and second order derivatives [1]. The new Einstein equations contain an additional 'source' corresponding to the deformation energy density of space-time. Global symmetries may be induced by texture defects in the manifold [2]. Solutions endowed with Robertson-Walker [2], Schwarzschild and Kerr symmetries are discussed.
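Schematically, the construction can be written as follows (a sketch patterned on classical elasticity with two Lamé-like coefficients, not the exact expression of [1]):

```latex
% Strain of space-time: deviation of the actual metric g
% from the Euclidean (reference) metric E
\varepsilon_{\mu\nu} = \tfrac{1}{2}\left(g_{\mu\nu} - E_{\mu\nu}\right)

% Lowest (second) order elastic Lagrangian density, schematic,
% with two Lame-like coefficients \lambda and \mu
\mathcal{L}_{\text{strain}} \propto
  \tfrac{\lambda}{2}\,\bigl(\varepsilon^{\mu}{}_{\mu}\bigr)^{2}
  + \mu\,\varepsilon_{\mu\nu}\,\varepsilon^{\mu\nu}
```

In the new Einstein equations this term contributes the additional 'source' interpreted as the deformation energy density of space-time.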

*References*

[1] A. Tartaglia and N. Radicella, *A tensor theory of spacetime as a strained material continuum*, Class. Quantum Grav., Vol. 27 (2010) 035001.

[2] N. Radicella, M. Sereno and A. Tartaglia, *Cosmological constraints for the cosmic defect theory*, Int. J. Mod. Phys. D, Vol. 20, No. 6 (2011) 1039-1051.

[3] N. Radicella, M. Sereno and A. Tartaglia, *Dark energy as an elastic strain fluid*, MNRAS Vol. 429 (2013), 1149-1155.

CP violation is one of the necessary ingredients to produce the matter-antimatter asymmetry we observe in the Universe today. The LHCb experiment is a general-purpose forward spectrometer at the LHC proton-proton collider at CERN and is ideally suited for the investigation of such phenomena. We present the latest results on 3-body charmless decays, where large local CP-violating effects have been observed across various regions of the phase space [1,2]. These results are discussed in the light of recent theoretical developments that attempt to understand the origin of the large asymmetries [3], and their impact on future amplitude analyses.

*References*

[1] R. Aaij *et al.* (LHCb Collaboration), *Measurement of CP Violation in the Phase Space of B± → K±π+π− and B± → K±K+K− Decays*, Phys. Rev. Lett. **111** (2013) 101801.

[2] R. Aaij *et al.* (LHCb Collaboration), *Measurement of CP Violation in the Phase Space of B± → K+K−π± and B± → π+π−π± Decays*, Phys. Rev. Lett. **112** (2014) 011801.

[3] Z.-H. Zhang, X.-H. Guo, and Y.-D. Yang, Phys. Rev. D **87** (2013) 076007; B. Bhattacharya, M. Gronau, and J. L. Rosner, Phys. Lett. B **726** (2013) 337; I. Bediaga, O. Lourenço, and T. Frederico, Phys. Rev. D **89** (2014) 094013; D. Xu, G.-N. Li, and X.-G. He, arXiv:1307.7186 (2013).

Recent angular analyses of B⁰ → K*⁰μ+μ− decays at LHCb are in tension with the Standard Model predictions. Phenomenological interpretations of these results hint at a contribution from a new vector particle. These results are difficult to account for within supersymmetric models. This talk will cover the experimental aspects of performing such an angular analysis with the LHCb detector, the theoretical motivation for making such a measurement, and the rationale behind the choice of observables. I will devote some time to reviewing the phenomenological interpretations.

As the number of flavours in QCD is increased beyond some threshold, chiral symmetry is restored and long-distance physics becomes scale-invariant. The region of parameter space in which this happens is called the “conformal window”. The phenomenology of theories in or close to the conformal window is very different from that of real-world QCD, and clearly shows how rich gauge theories can be. Near-conformal gauge theories have been conjectured to be viable models for physics beyond the Standard Model, as they might naturally incorporate a Higgs-like scalar; in this sense they are very different from traditional technicolor theories. As these theories are inherently non-perturbative, lattice simulations provide a unique setup to investigate them from first principles. In this talk I will review the main results and open questions in this field, the challenges ahead, and the techniques used.

Although the constituent quark model correctly predicts the quantum numbers of nearly all known hadrons, it fails to account for so-called 'exotic' states. After briefly reviewing the current experimental status of some candidate exotic hadrons, I will discuss prospects for elucidation of their properties using numerical Lattice QCD simulations. Recent theoretical advances have greatly improved lattice calculations of hadronic resonances, but some of the most interesting systems (such as exotic charmonium states) remain a challenge.

We describe how to reconstruct a constant curvature tetrahedron given four holonomies whose product is the identity. We construct a phase space describing the shapes of curved tetrahedra. We give a conjecture for the generalization of Minkowski's theorem to spaces of constant curvature. This establishes a new route to the description of four dimensional quantum gravity with a cosmological constant.

Several new applications of LQG will be presented. The first is a new quantum representation of the gravitational scalar constraint in which, for the first time in the literature, the quantum constraint C(N) itself preserves the Hilbert space for every lapse function N. Owing to that property, solutions to the quantum constraints of vacuum GR can be well defined via spectral decomposition and span a physical Hilbert space. The second is a new proposal for the quantum physical Hamiltonian of the Rovelli-Smolin model of gravity coupled to a massless Klein-Gordon time field. The third application we propose is a new operator for the quantum Hamiltonian of the Klein-Gordon scalar field coupled to LQG. Our new framework captures the degrees of freedom of the scalar field that are lost in the framework in which time is deparametrized by the scalar field.

We will review the current developments of Quantum Reduced Loop Gravity (QRLG) [1,2], a recently proposed model to address the quantum dynamics of the early Universe. In particular, we will discuss its semiclassical limit and its link with LQC [3], and review how the QRLG technique naturally selects states based on coherent intertwiners, which could simplify the analysis of the dynamics in the full theory [4].

*References*

[1] E. Alesci and F. Cianfrani, *A new perspective on cosmology in Loop Quantum Gravity*, Europhys. Lett. **104**, 10001 (2013); arXiv:1210.4504 [gr-qc].

[2] E. Alesci and F. Cianfrani, *Quantum-Reduced Loop Gravity: Cosmology*, Phys. Rev. D **87**, no. 8, 083521 (2013); arXiv:1301.2245 [gr-qc].

[3] E. Alesci and F. Cianfrani, *Quantum Reduced Loop Gravity: Semiclassical limit*, arXiv:1402.3155 [gr-qc].

[4] E. Alesci, F. Cianfrani and C. Rovelli, *Quantum-Reduced Loop-Gravity: Relation with the Full Theory*, Phys. Rev. D **88**, 104001 (2013); arXiv:1309.6304 [gr-qc].

Coherent States (CS) are widely used in physics, and quantum gravity is no exception. However, the application of CS in the construction of operators is rather unexplored in the areas of quantum gravity and quantum cosmology. In my talk, I will present how CS can be used to define “coherent state operators” via a procedure known as “coherent state quantization” [1]. This procedure produces operators with built-in good semiclassical properties, while at the same time preserving typical quantum attributes (e.g., discreteness of spectra). Examples of such coherent state quantization will cover: (1) operators in homogeneous isotropic quantum cosmology [2] (based on the affine group); (2) possible extensions to the Bianchi I case; (3) other simple operators.

*References*

[1] S. Twareque Ali, J.-P. Antoine and J.-P. Gazeau, *Coherent States, Wavelets, and Their Generalizations*, Springer-Verlag (2013)

[2] H. Bergeron, A. Dapor, J.-P. Gazeau and P. Malkiewicz, *Smooth big bounce from affine quantization*, Phys. Rev. D **89**, 083522 (2014)

We prove that quantum field theory on noncommutative Moyal space is, in the limit of infinite noncommutativity, exactly solvable in terms of the solution of a non-linear integral equation. Surprisingly, this limit describes Schwinger functions of a Euclidean quantum field theory on standard (commutative) space which satisfy the easy Osterwalder-Schrader axioms of boundedness, covariance and symmetry. We prove that the decisive reflection positivity axiom is, for the 2-point function, equivalent to the question of whether or not the solution of the integral equation is a Stieltjes function. The numerical solution of the integral equation leaves no doubt that this is true for the coupling constants investigated.

Perturbative algebraic quantum field theory is a formalism which puts perturbative QFT on a solid mathematical basis and solves many conceptual problems. It has proven to be a very successful framework for QFT on curved spacetimes, since it allows one to separate the algebraic structure of the theory from the construction of a state. The main idea, inspired by the Haag-Kastler axiomatic framework, is to define a model of a QFT by giving a net of unital ∗-algebras assigned to regions of spacetime. To construct such a model, one starts with a free classical theory, then obtains the free quantum theory via deformation quantization, and finally introduces the interaction by means of Epstein-Glaser renormalization. In this overview talk I will show how this method works in particular examples and report on recent results.

In this talk, I will review some recent results on the renormalization of Tensorial Group Field Theories. These theories are motivated by an approach to quantum gravity which lies at the crossroads of tensor models and loop quantum gravity. From the mathematical point of view, they are quantum field theories defined on compact Lie groups, with specific non-local interactions. Interestingly, these non-localities can be controlled, and several models have now been proven perturbatively well-defined [1, 2, 3, 4, 5, 6]. I will focus on a model inspired by Euclidean quantum gravity, which has been proven renormalizable at all orders with interactions up to sixth order [6]. Time allowing, I will also present new results on the renormalization group flow of this model.

*References*

[1] J. Ben Geloun and V. Rivasseau, “A Renormalizable 4-Dimensional Tensor Field Theory,” Commun. Math. Phys. **318**, 69 (2013) [arXiv:1111.4997 [hep-th]].

[2] J. Ben Geloun and D. O. Samary, “3D Tensor Field Theory: Renormalization and One-loop β-functions,” Annales Henri Poincare **14**, 1599 (2013) [arXiv:1201.0176 [hep-th]].

[3] J. Ben Geloun and E. R. Livine, “Some classes of renormalizable tensor models,” J. Math. Phys. **54**, 082303 (2013) [arXiv:1207.0416 [hep-th]].

[4] S. Carrozza, D. Oriti and V. Rivasseau, “Renormalization of Tensorial Group Field Theories: Abelian Models in Four Dimensions,” Commun. Math. Phys. **327**, 603 (2014) [arXiv:1207.6734 [hep-th]].

[5] D. O. Samary and F. Vignes-Tourneret, “Just Renormalizable TGFT's on U(1)^d with Gauge Invariance,” Commun. Math. Phys. (2014) [arXiv:1211.2618 [hep-th]].

[6] S. Carrozza, D. Oriti and V. Rivasseau, “Renormalization of a SU(2) Tensorial Group Field Theory in Three Dimensions,” Commun. Math. Phys. (2014) [arXiv:1303.6772 [hep-th]].

This talk will be devoted to a type of teaching strategy (concept-driven intellectual pathways) aimed at developing conceptual understanding and critical faculty in students. The examples will bear on the interaction between light and matter at university level (absorption of light by pigments, liquids, the atmosphere) and will illustrate how formal simplicity may be compatible with a demanding approach linking a search for conceptual coherence with critical analysis.

The LHCb experiment is a general-purpose spectrometer in the forward region, optimized for precision studies of beauty and charm hadron properties. The volumes of data produced by the LHC make it possible to perform such precision measurements with only a fraction of the total LHCb dataset, making it an ideal playground for developing new types of masterclass exercises.

We present here LHCb's first foray into the masterclass programme, in which students are taught how to make a 1% precision measurement of the D0 meson lifetime. The students learn to separate D0 mesons from the background in LHCb's vertex detector and build up a signal mass peak, before learning how to use background sidebands to extract the signal properties in other variables of interest. They then perform a fit to measure the D0 lifetime, and discover that the result is biased by charm produced in the decays of B hadrons, which has an artificially long lifetime compared to charm produced directly in the proton-proton collision. Finally, the students learn how to separate these two kinds of charm based on the D0 meson's distance of closest approach to the primary interaction, and obtain a precise measurement in agreement with the world average. We present the software framework developed for this exercise, give a demonstration of the exercise, and discuss plans for the future.
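The statistical heart of the exercise can be illustrated with a hypothetical toy sketch (illustrative lifetime value and sample size, not the actual masterclass software or data): for a background-subtracted exponential decay-time distribution, the maximum-likelihood lifetime estimate is simply the sample mean.

```python
import numpy as np

# Hypothetical toy version of the lifetime fit (illustrative values only;
# not the actual masterclass framework or LHCb data).
rng = np.random.default_rng(seed=1)

TRUE_LIFETIME_PS = 0.410   # assumed D0 lifetime in picoseconds
N_EVENTS = 100_000         # assumed sample size

# Simulate background-subtracted decay times: an exponential distribution.
decay_times = rng.exponential(scale=TRUE_LIFETIME_PS, size=N_EVENTS)

# For an exponential, the maximum-likelihood estimate of the lifetime is
# the sample mean, with statistical uncertainty tau / sqrt(N).
tau_hat = decay_times.mean()
tau_err = tau_hat / np.sqrt(N_EVENTS)

print(f"tau = {tau_hat:.4f} +/- {tau_err:.4f} ps")
```

With 10^5 events the statistical uncertainty is at the sub-percent level, consistent with the precision quoted above; the real exercise additionally handles the mass-sideband subtraction and the secondary-charm bias described in the abstract.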

“Balzac used to say that bachelors replace feelings with habits. Likewise, professors replace discoveries with lessons. Against this intellectual indolence that progressively deprives us of our sense of spiritual novelties, the teaching of discoveries along the history of sciences happens to be a great help. In order to teach students how to invent, it is good to give them the feeling that they could have discovered it themselves.” [1]

University courses in physics, as we have taught them over the years, no longer work. Starting from this observation, we propose a response consisting in rewriting and enriching their content [2,3]. As an example, we will present the structure of a course in basic geometrical optics. This course, focused on the essential part of the usual formal content, gives priority to discovery, personal investment and reflection. This is made possible by a structural interaction between the classic formalism of physics and fundamental questions arising from the history of science.

This work is based on the collaboration between physicist O. Morizot (PIIM, AMU) and historian of mathematics Ph. Abgrall (CEPERC, CNRS/AMU).

*References*

[1] G. Bachelard, *The Formation of the Scientific Mind* (1938).

[2] P. Mahaffy, *Moving Chemistry Education into 3D: A Tetrahedral Metaphor for Understanding Chemistry*, Union Carbide Award for Chemical Education, J. Chem. Educ. **83**, 49 (2006).

[3] O. Morizot, E. Audureau, J.-Y. Briend, G. Hagel and F. Boulc'h, *Two applications of the concept of human element in chemistry teaching; for a thorough understanding of chemical concepts*, submitted to J. Chem. Educ. (2014).

This research focuses on written language and systems of signs to study the learning of scientific concepts [1]. It sets out to show that Grade 10 students (aged 15) mobilize ideas about gases in different ways depending on the semiotic register (text or diagram) used in an assessment. Our study adopts a socio-constructivist approach to learning and develops a theoretical framework articulating elements from the didactics of physics with Duval's semiotic concepts [2]. We gave a test with questions (using text and diagrams simultaneously) to approximately 90 students just after a teaching sequence on gases, and categorized the students' answers and diagrams with the Sphinx software. This analysis has excellent intra-analyst reliability of our coding, with the best level of reproducibility (Cohen's kappa), and all our results are statistically significant (chi-squared test). Our results show that: (I) situations affect the mobilization of students' ideas depending on the semiotic registers involved in the assessment; (II) the semiotic registers have an effect on the mobilization of students' ideas according to the facets of knowledge: students are much more efficient in the register of (a) diagrams when using facets about *particles contained in a gas* and their homogeneous distribution, and (b) text when mobilizing their ideas about the *action of a gas*; (III) the semiotic registers used in the tasks of the teaching sequence on gases could be related to the mobilization of students' ideas in the registers involved during the assessment. The implications of this work for teaching and research in the didactics of physics are important: it allows better account to be taken of semiotic registers when assessing students' knowledge.

*References*

[1] J. L. Lemke, *Multiplying Meaning: Visual and Verbal Semiotics in Scientific Text*, in J. R. Martin & R. Veel (Eds.), *Reading Science*, Routledge: London (1998)

[2] R. Duval, *Sémiosis et pensée humaine: registres sémiotiques et apprentissages intellectuels*, Peter Lang: Neuchâtel (Suisse) (1995)

We discuss the problem of theory change in physics. We propose a characterization of the concept of a physical theory based on symmetries. The proposed characterization is compatible with modern ideas in the philosophy of science, e.g. the semantic approach to scientific theories. Our approach is conceptually simple; it also allows an analysis of the problem of mathematical structure and hints at a logic of discovery. As a case study we consider the kinematics of particles in distinct space-time backgrounds. The problem of theory change can then be framed in terms of Inönü-Wigner contractions/extensions of symmetry groups. The notions presented here draw strongly on the well-known Bargmann-Wigner program.

The talk is based on joint work with Marc Lachieze-Rey and Samuel Simon [1].

*References*

[1] M. Lachieze-Rey, A. de Queiroz, S. Simon, *Symmetry, Physical Theories and Theory Changes*, *to be published*.

Kurt Gödel left Philosophical Remarks in his Nachlass that he himself entitled Max Phil. The opus originally comprised 16 notebooks, but one has been lost. The content is, on the whole, the outline of a rational metaphysics that relates the different academic disciplines to each other, in particular logic, mathematics, physics, biology and the theory of knowledge.

The notion of time is of particular interest to Gödel because it is a subject that connects the humanities with the sciences; he therefore engages with it often and intensely. The same is true of the concept of force. His transdisciplinary approach is to contemplate ‘force’ as a notion in physics (gravitation and quantum mechanics), in psychology (affects and emotions) and in biology, although his specific interest lies in the physical concept of force. He is looking for an interpretation of the concept that would allow him to align the theories of relativity and quantum mechanics. Besides that, the preoccupation with light quanta gives Gödel an opportunity to reflect on the differences between light and matter. We will present the main structure of Max Phil, which has been partially transcribed and translated by an international team funded by the ANR project BLA -09-13, and focus on some remarks concerning the relationship between mathematics and physics.

Gödel's contributions to GR [1] have been criticized by several first-rank physicists such as Chandrasekhar, R. Ellis, S. Hawking, R. Penrose and J. Wheeler [2]. These criticisms (lack of physical meaning, violation of causality, ...) contrast with Gödel's own description of his work: “[It] relates to the pure theory of gravitation published in 1916 which, I believe, was left, not only by Einstein himself but also by the whole generation of contemporary physicists, in its state of a torso, physically, mathematically, and with respect to its application in cosmology.” I will first explain why the recurrent criticisms addressed to Gödel's exact solutions of Einstein's field equations are unjustified. I will then illustrate in what sense Gödel's contributions to GR are what Einstein tried to do in 1917, namely a completion of the theory of gravitation.

*References*

[1] Kurt Gödel, Collected Works, Volumes I-V, ed. S. Feferman et al., Oxford University Press, 1996-2004.

[2] C.W. Misner, K.S.Thorne, J.A.Wheeler, Gravitation, W.H.Freeman & Co, 1973.

The acceleration of the cosmic expansion can now be regarded as established beyond reasonable doubt. However, the physical origin of this acceleration, the so-called dark energy, remains a puzzling problem in fundamental physics. The current explanations, quintessence and modified gravity, are based on a fundamental revision of known physics.

Here I will present a revival of an old proposal: that the origin of cosmic acceleration is a gravitationally active vacuum.

The talk is based on the results of Ref. [1], where we compute the Zero Point Energy in a spherically symmetric background, combining the high-energy distortion of Gravity's Rainbow [2] with the modification induced by an f(R) theory, where f(R) is a generic analytic function of the Ricci curvature scalar R, in 4D and in 3D. The explicit calculation is performed for a Schwarzschild metric [3]. Due to the spherical symmetry of the Schwarzschild metric, we can compare the effects of the f(R) modification in 4D and in 3D. We find that the final effect of the combined theory is to yield finite quantities that shift the Zero Point Energy. In this context we set up a Sturm-Liouville problem with the cosmological constant as the associated eigenvalue. The eigenvalue equation is a reformulation of the Wheeler-DeWitt equation, which is analyzed by means of a variational approach based on Gaussian trial functionals. With the help of a canonical decomposition, we find that the relevant one-loop contribution is given by the graviton quantum fluctuations around the given background. A final discussion of the connection of our result with the observed cosmological constant and the inflation problem is also given.

*References*

[1] R. Garattini, *Distorting General Relativity: Gravity's Rainbow and f(R) theories at work*, JCAP **1306** (2013) 017; arXiv:1210.7760 [gr-qc].

[2] J. Magueijo and L. Smolin, *Gravity's Rainbow*, Class. Quant. Grav. **21**, 1725 (2004) [arXiv:gr-qc/0305055].

[3] R. Garattini and G. Mandanici, *Modified Dispersion Relations lead to a finite Zero Point Gravitational Energy*, Phys. Rev. D **83**, 084021 (2011); arXiv:1102.3803 [gr-qc].

I. We discuss Einstein gravity for a fluid consisting of particles interacting with an environment of some other particles. The environment is described by a time-dependent cosmological term which compensates for the lack of a conservation law for the energy-momentum of the dissipative fluid. The dissipation is approximated by a relativistic diffusion in phase space. We are interested in a homogeneous, isotropic, flat expanding Universe described by a scale factor a(t). At an early stage the particles are massless. We obtain explicit solutions of the diffusion equation for a fluid of massless particles at finite temperature; the solution takes the form of a modified Jüttner distribution with a time-dependent temperature. At later times the evolution of the Universe is described by a diffusion at zero temperature with no equilibration. We find solutions of the diffusion equation at zero temperature which can be treated as continuations to later times of the finite-temperature solutions describing the early stage of the Universe. The energy-momentum of the diffusing particles is defined by their phase-space distribution. Conservation of the total energy-momentum determines the cosmological term up to a constant. The resulting energy-momentum, inserted into the Einstein equations, gives a modified Friedmann equation whose solutions depend on the initial value of the cosmological term. A large value of the cosmological constant implies an exponential expansion. If the initial conditions allow a power-like solution at large time, then it must be of the form a(t) ∝ t (no deceleration; t is the cosmic time). The final stage of the evolution of the Universe is described by a non-relativistic diffusion of a cold dust. The model is studied in [1]; the relativistic diffusion in [2-6].

II. As a second step we discuss extensions of model I. We discuss interactions which lead to relativistic diffusion. In particular, we show that quantized electromagnetic fields and gravitons can lead to diffusion. Such a diffusion must be included on the right-hand side of the Einstein equations if the quantized degrees of freedom are not treated explicitly as dynamical variables (some earlier work on the effect of gravitons is in [8,9]). Next, we consider perturbations of the homogeneous metric of model I in order to study the role of diffusion in the spreading of inhomogeneities at an early stage of the hot Universe.

*References*

[1] Z.Haba, Class.Quant.Grav. **31**, 075011(2014)

[2] Z. Haba, Journ.Phys. **A46**, 155001(2013)

[3] Z.Haba, Mod.Phys.Lett. **A28**, 1350091(2013)

[4] Z. Haba, Phys.Rev. **E79**, 021128(2009)

[5] Z. Haba, Physica, **A390**, 2776(2011)

[6] Z.Haba, Mod.Phys.Lett. **A25**, 2681(2010)

[7] Z. Haba, Class.Quant.Grav. **27**, 095021 (2010)

[8] Z. Haba, Mod.Phys.Lett. **24**, 1519 (2000)

[9] Z. Haba and H. Kleinert, Int.Journ.Mod.Phys. **A17**, 3729 (2002)

ANTARES is currently the largest operating neutrino telescope in the Northern Hemisphere, mainly sensitive to TeV neutrinos. Its main goal is the detection of high energy neutrinos from astrophysical sources, which would provide important insights about the processes powering their engines and would help understand the origin of high energy cosmic rays.

To identify such sources unambiguously, ANTARES has developed several online and offline programs to reveal possible time and/or space correlations of neutrino candidates with other cosmic messengers: photons (mainly X-rays and GeV/TeV gamma rays, through searches for gamma-ray bursts and GeV-flaring blazars, but also in the optical domain through alert and follow-up programs), cosmic rays, and gravitational-wave bursts detected by the Virgo/LIGO interferometers. Some of the most relevant results of these multi-messenger analyses will be presented in this contribution.

The origin of the magnetic field in the universe is of great interest. Biermann's battery and the Weibel instability have been considered as candidate mechanisms [1]. The Weibel instability is believed to play an important role in supernova remnants (SNRs) and in inertial-fusion plasmas. In this talk, we discuss the evolution of the current structures and magnetic fields of the nonlinear Weibel instability up to the saturation regime, based on PIC simulation results.

*References*

[1] D. Ryu, D. Schleicher, R. Treumann, C. Tsagas, and L. Widrow, *Magnetic fields in the large-scale structure of the universe*, Space Science Reviews, 158, 35 (2011).

The Perseus cluster of galaxies has long been considered a possible source of high and very high energy γ-ray emission generated by various mechanisms. We present the results of fifteen-year-long observations of NGC 1275, the central galaxy of the cluster, at energies of 800 GeV to 40 TeV; the source was discovered by the SHALON telescope in 1996 [1,2,3]. The data obtained at very high energies, namely the images of the galaxy and its surroundings and the flux variability, indicate that the TeV γ-ray emission is produced by a number of processes: part of this emission is generated by relativistic jets in the nucleus of NGC 1275 itself, whereas the presence of an extended structure around NGC 1275 [3] is evidence of the interaction of the cosmic rays and magnetic fields generated in the jets at the galactic center with the gas of the Perseus cluster.

*References*

[1] V.G. Sinitsyna, in Proc. of the Workshop, Towards a Major Atmospheric Cherenkov Detector V, Kruger Park, South Africa, Ed. by O. De Jager (Westprint, Potchefstroom, 1997), p. 136.

[2] V.G. Sinitsyna et al., Nucl. Phys. B Proc. Suppl. 196, 442 (2009).

[3] V.G. Sinitsyna and V. Yu. Sinitsyna, Astron. Lett. 40(2-3), 75 (2014); [ArXiv: astro-ph.HE1403.1740].

I will present a new class of scalar-tensor theories that extend Horndeski theories. Despite possessing equations of motion of higher order in derivatives, their true propagating degrees of freedom obey well-behaved second-order equations and are thus free from Ostrogradski instabilities, in contrast to the standard lore. Remarkably, the covariant versions of the original galileon Lagrangians obtained by direct replacement of derivatives with covariant derivatives belong to this class of theories. These extensions of Horndeski theories exhibit an uncommon, interesting phenomenology: the scalar degree of freedom affects the speed of sound of matter, even when the latter is minimally coupled to gravity.

We consider non-linear massive gravity with two Stückelberg fields [1]. In this case we find an analytic expression for the determinant of the kinetic matrix of the scalar field Lagrangian and perform the full constraint analysis. The theory admits a charged anti-de Sitter black brane solution and has recently been used in a holographic context as a bulk description of a boundary field theory with momentum dissipation [2,3]. We re-derive previous results for the holographic conductivity in a coordinate-independent way.

*References*

[1] L. Alberte and A. Khmelnitsky, *Reduced Massive Gravity with Two Stückelberg Fields*, Phys. Rev. D **88** (2013) 064053 [arXiv:1303.4958 [hep-th]].

[2] D. Vegh, *Holography without translational symmetry*, arXiv:1301.0537 [hep-th]

[3] M. Blake and D. Tong, *Universal Resistivity from Holographic Massive Gravity*, Phys. Rev. D **88** (2013) 106004 [arXiv:1308.4970 [hep-th]].

This talk is based on [1]. We show that Padmanabhan's conjecture for the emergence of cosmic space [2] holds for the flat Friedmann-Robertson-Walker universe in Einstein gravity but does not hold for the non-flat case unless one uses the areal volume instead of the proper volume. We also show that various works extending Padmanabhan's conjecture to non-Einstein and non-flat cases have serious shortcomings. This analysis is done using the Friedmann equation with the further assumptions of the holographic principle and the equipartition rule of energy.
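Schematically, for the flat case and in units with $c=\hbar=k_B=1$ (with $L_P$ the Planck length), the conjecture of [2] equates the growth of the Hubble volume to the difference between surface and bulk degrees of freedom:

```latex
\frac{dV}{dt} = L_P^{2}\,\bigl(N_{\mathrm{sur}} - N_{\mathrm{bulk}}\bigr),
\qquad
V = \frac{4\pi}{3H^{3}},
\qquad
N_{\mathrm{sur}} = \frac{4\pi}{L_P^{2}H^{2}},
\qquad
N_{\mathrm{bulk}} = -\frac{2(\rho+3p)V}{T},
\qquad
T = \frac{H}{2\pi},
```

which yields $\dot H + H^{2} = \ddot a/a = -\tfrac{4\pi L_P^{2}}{3}(\rho+3p)$, the standard acceleration equation for the flat Friedmann-Robertson-Walker universe.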

*References*

[1] Ee Chang-Young and Daeho Lee, *Friedmann equation and the emergence of cosmic space*, JHEP **04** (2014) 125 [arXiv:1309.3084].

[2] T. Padmanabhan, *Emergence and expansion of cosmic space as due to the quest for holographic equipartition,* [arXiv:1206.4916].

This talk is based on papers [1,2,3]. We present the basic equations and relations for relativistic static spherically symmetric stars (SSSS) in the model of minimal dilatonic gravity (MDG), which is locally equivalent to f(R) theories of gravity and gives an alternative description of the effects of dark matter and dark energy. Results for the simplest form of the relativistic equation of state (EOS) of neutron matter are presented. Our approach overcomes the well-known difficulties of the physics of SSSS in f(R) theories of gravity by introducing two novel EOS, for the cosmological energy-pressure densities and the dilaton energy-pressure densities, together with proper boundary conditions, and by justifying the domain of SSSS in the phase space of the generalized Tolman-Oppenheimer-Volkoff equations. It is shown that dark energy and dark matter play an essential role in the structure of SSSS due to a natural chameleon-like effect of MDG.

*References*

[1] P. Fiziev and D. Georgieva, *Inflation and oscillations of the Universe in 4D dilatonic gravity*, Phys. Rev. D **67**, 064016 (2003).

[2] P. P. Fiziev , *Withholding potentials, absence of ghosts, and relationship between minimal dilatonic gravity and f(R) theories*, Phys. Rev. D **87**, 044053 (2013).

[3] P. P. Fiziev, *Compact static stars in minimal dilatonic gravity*, arXiv:1402.2813.

I review some of the most important recent results from finite-temperature lattice QCD, based on the contributions of the Budapest-Wuppertal collaboration. In particular, I will cover the equation of state [1], the role of the charm quark, and the fluctuation observables also measured in heavy-ion physics [2]. I will comment on the reliability of the hadron resonance gas model at low temperatures and of the hard thermal loop approximation at high temperatures [3].

*References*

[1] S. Borsanyi, Z. Fodor, C. Hoelbling, S. D. Katz, S. Krieg and K. K. Szabo, “Full result for the QCD equation of state with 2+1 flavors,” Phys. Lett. B **730** (2014) 99 [arXiv:1309.5258 [hep-lat]].

[2] S. Borsanyi, Z. Fodor, S. D. Katz, S. Krieg, C. Ratti and K. K. Szabo, “Freeze-out parameters from electric charge and baryon number fluctuations: is there consistency?,” arXiv:1403.4576 [hep-lat].

[3] N. Haque, A. Bandyopadhyay, J. O. Andersen, M. G. Mustafa, M. Strickland and N. Su, “Three-loop HTLpt thermodynamics at finite temperature and chemical potential,” JHEP **1405** (2014) 027 [arXiv:1402.6907 [hep-ph]].

ALICE studies the properties of strongly interacting matter at the extreme energy densities reached in heavy-ion collisions at the LHC. In this context, measurements in proton-proton and proton-nucleus collisions are mandatory: the former set the reference, while the latter provide further insight into the effects of cold nuclear matter, which is crucial for the understanding of heavy-ion collisions. In this talk, a selection of the recent results in p–Pb collisions at the LHC will be presented.

A canonical quantum formalism for discrete systems subject to a discretization changing dynamics is outlined. This framework enables one to systematically study (non-)unitarity of such dynamics, the role of canonical constraints and the fate of Dirac observables on temporally varying discretizations. It will be illustrated how the formalism can also be employed to generate a vacuum for a scalar field on an evolving lattice. Implications for the dynamics in simplicial quantum gravity models are commented on.

*References*

[1] P. A. Höhn, “Quantization of systems with temporally varying discretization I: Evolving Hilbert spaces” arXiv:1401.6062 [gr-qc].

[2] P. A. Höhn, “Quantization of systems with temporally varying discretization II: Local evolution moves” arXiv:1401.7731 [gr-qc].

[3] B. Dittrich and P. A. Höhn, “Constraint analysis for variational discrete systems,” J. Math. Phys. **54**, 093505 (2013) [arXiv:1303.4294 [math-ph]].

[4] B. Dittrich, P. A. Höhn and T. Jacobson, *to appear*

I will present a construction of observables that follows naturally from the introduction of an observer into a relativistic theory. The observables are invariant under a large class of spatial diffeomorphisms. I will discuss the description of the theory in terms of spatial-diffeomorphism-invariant degrees of freedom. The talk is based on [1].

*References*

[1] P. Duch, W. Kaminski, J. Lewandowski and J. Swieżewski, *Observables for General Relativity related to geometry*, arXiv:1403.8062 [gr-qc].

This talk presents ongoing work by the same authors, based on [1]. We study properties of the Lorentzian Hamiltonian constraint operator expressed using the curvature operator introduced in [1], and we evaluate its action.

*References*

[1] E. Alesci, M. Assanioussi and J. Lewandowski, *A curvature operator for LQG*, submitted to Phys. Rev. D [arXiv:1403.3190].

In this talk, I will report on joint work with Rainer Verch [1] on hyperbolic PDEs with non-commutative time, i.e. linear integro-differential equations of the form (D + λW)f = 0, where D is a (pre-)normal hyperbolic differential operator, λ is a coupling constant, and W is a regular integral operator which is non-local in time, so that a Hamiltonian formulation is not possible. Such equations appear in the context of wave or Dirac equations on non-commutative deformations of Minkowski space. It will be discussed that at small coupling the hyperbolic character of D is essentially preserved, unique advanced/retarded fundamental solutions can be constructed, and the acausal behavior of the solutions is well controlled. Although the Cauchy problem is ill-posed in general, a scattering operator can be calculated which describes the effect of W on the space of solutions of D.

It is also described how these results can be used for the analysis of classical and quantum field theories on non-commutative spaces.

*References*

[1] G. Lechner and R. Verch, *Linear hyperbolic PDEs with non-commutative time*, Preprint, arXiv:1307.1780