The Szekeres exact solution to the field equations of General Relativity is known as a promising tool for representing the late universe. Besides being devoid of any symmetry, it can represent the matter (cosmological constant) dominated region of the universe, be matched at some redshift to a homogeneous (early-time) FLRW space-time, and exhibit a matter dipole; it can also be used to reproduce the expansion multipoles that have recently been measured in supernova, quasar and radio-galaxy surveys. After a short reminder of the main properties of this cosmological model, the latter feature will be described and methods for its implementation will be proposed.
Primordial black holes could constitute part or all of dark matter, but they require large inhomogeneities to form in the early universe. These inhomogeneities can strongly backreact on the large-scale dynamics of the universe. Stochastic inflation provides a way of studying this backreaction and estimating the abundance of primordial black holes. Because stochastic inflation focuses on large-scale dynamics, it rests on the separate universe approach. However, the validity of this approach has so far only been checked in single-field models, not in the multifield models in which we expect the strong boosts in the power spectrum that lead to the formation of primordial black holes. We will check the validity of the separate universe approach in multifield models by matching it to a complete cosmological perturbation theory approach at large scales. In particular, we will compare these two paradigms and their differences along the adiabatic and entropic directions of phase space. This will give us a range of validity and the conditions one needs to verify in order to apply the separate universe approach and stochastic inflation in multifield models. We will then focus on gauge fixing in these two paradigms and check when the matching still holds.
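For context, the backreaction in stochastic inflation is usually encoded in a Langevin equation for the coarse-grained field. A minimal single-field, slow-roll sketch (standard in the literature, not specific to the multifield analysis described above) reads
\[ \frac{\mathrm{d}\phi}{\mathrm{d}N} = -\frac{V'(\phi)}{3H^2} + \frac{H}{2\pi}\,\xi(N), \qquad \langle \xi(N)\,\xi(N')\rangle = \delta(N-N'), \]
where N is the number of e-folds and the white noise ξ models short-wavelength modes continuously joining the long-wavelength (separate-universe) sector; the multifield case promotes φ and the noise to vectors in field space.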
The axion is a hypothetical particle that was first proposed to solve an open problem in QCD known as the "strong CP problem". It was later realized that the axion also has implications in cosmology as a dark matter candidate. In this talk, however, we concentrate on the ultra-relativistic component of the axion population, produced in the early Universe through interactions with the Standard Model (SM) plasma. This population can contribute measurably to the energy density of the Universe as dark radiation; the quantity of phenomenological interest here is the so-called "effective number of neutrinos" Neff, which can be determined experimentally with CMB telescopes like Planck or the recently launched Simons Observatory. The main motivation for our research is to evaluate Neff as a function of the SM-axion couplings as precisely as possible, so that experimental results may be used to place constraints on the values of those parameters. Specifically, this talk focuses on new results obtained in the Kim-Shifman-Vainshtein-Zakharov (KSVZ) model, where the axion couples only to gluons and the only parameter is the axion scale. The most important part of any computation of thermal production is the implementation of the collective effects of the medium, which cure would-be divergences caused by soft particle exchange. Here, I present two new schemes for accounting for such thermal effects and compare them to ones already present in the literature. These new schemes solve issues of production-rate negativity and gauge dependence that appeared in previous computations. Finally, I show how the various computation schemes lead to different behaviors of the production rate at soft axion momenta. Once the production rate has been obtained, the axion contribution to Neff can be computed by solving the Boltzmann equation. The differences between the values of Neff obtained in different schemes allow us to gain an understanding of the theory uncertainty of the computation. As an outlook, I will also touch on the automated techniques that were used in this research, and how they could be extended to automate the entire production-rate computation. The results presented in this talk are published in arXiv:2404.06113, written by myself and Jacopo Ghiglieri.
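For orientation, the thermal axion abundance follows from a momentum-integrated Boltzmann equation of the schematic form (a standard sketch; Γ is the production rate discussed above)
\[ \frac{\mathrm{d}n_a}{\mathrm{d}t} + 3 H n_a = \Gamma(T)\left[n_a^{\rm eq} - n_a\right], \]
and the resulting axion energy density ρ_a is conventionally quoted through
\[ \Delta N_{\rm eff} = \frac{8}{7}\left(\frac{11}{4}\right)^{4/3} \frac{\rho_a}{\rho_\gamma}. \]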
The standard Lambda Cold Dark Matter (ΛCDM) cosmological model has proven remarkably successful in describing observational data ranging from the cosmic microwave background (CMB) radiation to the large-scale structure of the Universe. However, recent advances in precision cosmology have revealed persistent statistical discrepancies between independent data sets and observational methods. One prominent example is the "Hubble tension," which refers to the irreconcilable predictions of the present expansion rate of the Universe when inferred from early-Universe measurements (such as the CMB) compared to local observations. Low-redshift observables like Baryon Acoustic Oscillations (BAO) and Type Ia Supernovae (SN1a) are used to build the cosmological distance ladder, which relies on calibrations using either early- or late-Universe data. Therefore, the Hubble tension is also reflected in the incompatibility between these distances and how they are calibrated. However, this comparison assumes that the distance-duality relation (DDR) holds and can be used to compare measurements of the luminosity and angular diameter distances. In this talk, we will examine the implications of relaxing this assumption to more general relations, its consequences for the current cosmic tensions, and how it could potentially explain the apparent need to introduce new physics to address them.
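For reference, the distance-duality relation invoked here is Etherington's reciprocity theorem,
\[ d_L(z) = (1+z)^2\, d_A(z), \]
which holds in any metric theory with photon-number conservation; "relaxing this assumption" means allowing the ratio d_L/[(1+z)^2 d_A] to deviate from unity.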
Apart from its manifest interest for understanding the first moments of the universe, the framework of cosmic inflation is also the best way we know to probe fundamental physics at very high energies. In particular, the spontaneous production of massive particles due to the expanding background can leave potentially visible imprints in cosmological correlation functions, known as the cosmological collider signal. Within the effective field theory of inflation (EFTI), it is possible to treat these exchange processes in a model-independent way, and explicit computations taking advantage of the conformal invariance of late-time observables have been carried out using various techniques such as the cosmological bootstrap. More recently, the full parameter space allowed by the EFTI has been explored, allowing for boost-breaking setups that lead to more striking phenomenological signatures, and the recently developed cosmological flow approach numerically gives us access to any correlation function. In this talk, I will present a treatment of a parameter space region that remains analytically unknown: the strong mixing regime, where the inflaton field and the massive particle can experience an infinite number of flavor transformations during the process. I will describe ongoing efforts to characterize this regime based on extensions of standard single-field effective field theory techniques.
Gravitational waves enable independent measurements of the Hubble constant through luminosity distance estimates combined with external redshift data. Current measurements are dominated by the single “bright siren” GW170817. As soon as new data points become available (which may already be the case during the ongoing observing run of the LIGO-Virgo-KAGRA collaboration), systematic biases—particularly electromagnetic (EM) selection effects—could significantly impact these measurements, especially when relying on short gamma-ray bursts (GRBs) as counterparts. Since GRBs are only observed for binaries whose orbital angular momentum is aligned with our line of sight, this introduces a bias in the distance estimate due to the correlation between distance and inclination.
I will discuss a novel approach to removing this bias by determining the electromagnetic detection probability directly from the observed sample of GW-GRB events, without requiring additional external information. I will argue that ignoring this bias could shift the inferred value of the Hubble constant by about 10% with just two events, a shift that becomes statistically significant with more detections, highlighting the importance of incorporating this correction in ongoing and future gravitational-wave observations.
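For context, correcting for selection effects amounts to normalizing the single-event likelihood by the probability of detection. Schematically (a standard sketch, with θ the source parameters, including distance and inclination),
\[ p(\mathrm{data} \mid H_0) \;\propto\; \frac{\int \mathrm{d}\theta\; p(\mathrm{data}\mid\theta)\, p(\theta \mid H_0)}{\int \mathrm{d}\theta\; p_{\rm det}(\theta)\, p(\theta\mid H_0)}, \]
the novelty here being that the EM part of p_det is inferred from the GW-GRB sample itself rather than assumed from external information.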
Understanding dark matter and dark energy, which make up 95% of the Universe, requires cosmological surveys to achieve percent-level precision. Yet this precision reveals tensions between observations and the standard cosmological model, potentially stemming from systematic biases. CLONES (Constrained LOcal & Nesting Environment Simulations) are digital twins of the local Universe designed to replicate our cosmic environment and tackle these challenges. I will highlight key cosmological tensions and showcase an example study based on these CLONES, demonstrating how they offer a powerful framework for bias-free analyses, advancing our understanding of galaxy evolution and large-scale structure formation.
The dynamics of ultralight dark matter with non-negligible self-interactions are determined by a Gross-Pitaevskii equation rather than by the Vlasov equation of collisionless particles. This leads to wave-like effects, such as interference, the formation of solitons, and a velocity field that is locally curl-free, implying that vorticity is carried by singularities associated with vortices. Using analytical derivations and numerical simulations, we study the evolution of such a system from stochastic initial conditions with nonzero angular momentum. Focusing on the Thomas-Fermi regime, where the de Broglie wavelength of the system is smaller than its size, we show that a rotating soliton forms in a few dynamical times. The rotation is associated with a regular lattice of vortices that gives rise to a solid-body rotation in the continuum limit. We show that this configuration is a stable minimum of the energy at fixed angular momentum, and we check that the numerical results agree with the analytical derivations.
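For reference, the system sketched here obeys a Gross-Pitaevskii-Poisson system of the standard form (g parametrizes the self-interaction),
\[ i\hbar\,\partial_t \psi = -\frac{\hbar^2}{2m}\nabla^2\psi + m\Phi\,\psi + g\,|\psi|^2\psi, \qquad \nabla^2\Phi = 4\pi G\, m\,|\psi|^2, \]
and writing ψ = √(ρ/m) e^{iθ} gives a velocity field v = (ħ/m)∇θ that is curl-free wherever ψ ≠ 0, which is why the vorticity must be carried by vortex singularities.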
The next generation of CMB experiments, such as CMB-S4, will achieve unmatched precision in polarization. At this level of accuracy, systematic errors from polarization-angle miscalibration could become the dominant source of uncertainty in the EB power spectrum, making it more difficult to constrain parity-violating physics such as cosmic birefringence. A method has been proposed by Y. Minami & E. Komatsu to measure both the miscalibration angles and the birefringence angle using Galactic foreground emission. We aim to identify the factors that matter most for achieving the best possible calibration of polarization angles with this method, particularly in the context of future CMB experiments.
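For context, a uniform rotation of the polarization plane by an angle θ mixes the spectra as (standard result)
\[ C_\ell^{EB,\rm obs} = \tfrac{1}{2}\sin(4\theta)\left(C_\ell^{EE} - C_\ell^{BB}\right) + \cos(4\theta)\, C_\ell^{EB} . \]
The CMB is rotated by α + β (miscalibration plus birefringence) while the Galactic foreground is rotated by α alone, and this difference is what allows the Minami-Komatsu method to disentangle the two angles.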
Generative AI is about to lead to a paradigm shift in the way we conduct research, with tremendous potential gains in productivity, robustness and discovery. We are developing multi-agent systems to automate and optimize scientific workflows in data analysis, with a focus on Cosmology. In this talk, we will present our prototype cmbagent, a pre-trained-LLM-based system capable of reproducing complex and state-of-the-art cosmological data analyses with minimal human input, built with ag2. Such systems offer a glimpse into the future of research, where the main task for human scientists will be the orchestration of self-driving laboratories.
The distance duality relation (DDR) relates two important cosmological distances, namely the angular diameter distance and the luminosity distance. These can be measured with baryon acoustic oscillations (BAO) and Type Ia supernovae, respectively. Here, we use recent DESI 2024 and Pantheon+SH0ES data to test this fundamental relation. We employ a parametrised approach and also use model-independent Genetic Algorithms (GA), a machine learning method in which functions evolve, loosely based on biological evolution. The data are used in two different ways. In the first case, we use the Pantheon+ data without Cepheid calibration; our result is 2σ away from the DDR in the parametrised approach and shows no deviation in the GA approach. In a second step, we add the big bang nucleosynthesis (BBN) value for the baryon density ωb and calibrate the Pantheon+ data with Cepheids from the SH0ES survey. This case reflects the Hubble tension, since the two data sets are in tension within the standard cosmological model ΛCDM. Here, we find a significant violation of the DDR in the parametrised case, at 6σ. For the model-independent approach, we test this tension by adding Planck CMB data to calculate the sound horizon instead of using only the BBN value. We find a much larger deviation than in the uncalibrated GA case, although the violation remains at 1σ.
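For reference, such tests are usually phrased through the ratio
\[ \eta(z) \equiv \frac{d_L(z)}{(1+z)^2\, d_A(z)}, \]
with η(z) = 1 if the DDR holds. An illustrative one-parameter choice (hypothetical here, since the abstract does not specify the form used) is η(z) = 1 + ε₀ z, with ε₀ = 0 recovering the standard relation.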
QUBIC (QU Bolometric Interferometer for Cosmology) is an instrument dedicated to the search for B-modes by measuring the Q and U polarization modes. It brings together the advantages of bolometers, with their high sensitivity, and interferometers, with their exquisite control of instrumental systematic effects. The interferometric nature of QUBIC also allows spectral imaging with high spectral resolution compared to direct imagers, which is a significant advantage for foreground removal. After discussing the instrumental details of QUBIC, focusing on its ability to mix spatial and spectral information, I will present two methods that use this unique feature to perform component separation. The first, called Frequency Map-Making (FMM, https://arxiv.org/abs/2409.18698), is a standard implementation of spectral imaging with QUBIC in which the large-bandwidth data are projected onto frequency sub-bands within the physical bandwidth of the instrument. The higher spectral resolution enhances foreground mitigation in the case of complex foregrounds (frequency decorrelation). The second, called Component Map-Making (CMM, https://arxiv.org/abs/2409.18714), is a more advanced method in which foreground mitigation is performed along with the map-making, making extensive use of the spectral imaging capabilities of QUBIC and leading to further improvements with respect to FMM.
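For orientation, both methods build on the generalized least-squares map-making estimator (a standard sketch, not the full QUBIC pipeline): for data d = H s + n, with H the pointing/bandpass operator and N the noise covariance,
\[ \hat{s} = \left(H^{\mathsf T} N^{-1} H\right)^{-1} H^{\mathsf T} N^{-1} d, \]
where s stacks maps in frequency sub-bands for FMM, while CMM instead folds a parametric mixing matrix into H so that s contains the component maps directly.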
Galaxy clusters are a powerful cosmological probe: they track the most recent evolution of large-scale structure and are therefore fundamental for testing the cosmological model in the recent Universe. To compare the observations of galaxy clusters with theoretical predictions and thus constrain the cosmological parameters of the underlying model, precise knowledge of cluster masses and redshifts is required. Assuming hydrostatic equilibrium, cluster masses can be inferred from X-ray observations for a subset of the Planck cosmological cluster sample. Using scaling relations, hydrostatic masses can then be computed for the full sample from the measured Sunyaev-Zel'dovich (SZ) signal.
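For reference, the hydrostatic mass estimate referred to here follows from assuming that gas pressure balances gravity (standard result, with μ m_p the mean particle mass and the electron density n_e and temperature T taken from X-ray profiles):
\[ M(<r) = -\frac{k_{\rm B}\, T(r)\, r}{G\, \mu m_p}\left(\frac{\mathrm{d}\ln n_e}{\mathrm{d}\ln r} + \frac{\mathrm{d}\ln T}{\mathrm{d}\ln r}\right). \]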
Type Ia supernovae (SNe Ia) are well-known distance indicators. Through the distances measured from SNe Ia, it is possible to recover their host galaxies' peculiar velocities (PVs). The PV field measured with SNe Ia enables us to constrain the growth rate of cosmic structure and, in turn, test General Relativity and different dark energy models. Using a realistic simulation of SN light curves, as expected from the LSST survey, we have analyzed the bias due to selection effects and contamination from core-collapse SNe. Using the maximum likelihood method, we recovered growth rate constraints from LSST SN Ia PVs and produced forecasts as a function of survey observing time. We find that LSST can constrain the growth rate with 10% precision in the redshift range 0.02 < z < 0.14.
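For context, a maximum-likelihood growth-rate analysis of peculiar velocities typically maximizes a Gaussian likelihood of the schematic form (a sketch; the LSST analysis above additionally models selection effects and contamination):
\[ \mathcal{L}(f\sigma_8) \propto \frac{1}{\sqrt{\det C}} \exp\!\left(-\tfrac{1}{2}\, v^{\mathsf T} C^{-1} v\right), \qquad C_{ij} = (f\sigma_8)^2\, \tilde{C}_{ij} + \sigma_{\rm obs}^2\,\delta_{ij}, \]
where v are the measured radial PVs and C̃ is the normalized linear-theory velocity covariance.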
I will present my work on the new non-Gaussianity constraints obtained in the first bispectrum analysis of the Planck Release 4 data, which improve on those from the previous release. I will then move on to the prospects for non-Gaussianity with the future LiteBIRD mission. Finally, I will discuss the use of non-Gaussian statistics in component separation methods that work in harmonic space, such as SMICA.
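For reference, the most commonly quoted bispectrum amplitude is the local one, defined by the standard convention
\[ \Phi(\mathbf{x}) = \phi_G(\mathbf{x}) + f_{\rm NL}^{\rm local}\left[\phi_G^2(\mathbf{x}) - \langle \phi_G^2\rangle\right], \]
with φ_G a Gaussian potential; bispectrum analyses such as the one mentioned above constrain f_NL for this and other standard shapes.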
We examine whether the MOND theory can reproduce the decline observed in the Milky Way's rotation curve using Gaia data. MOND, which modifies Newtonian dynamics at very low accelerations (a₀ ≈ 1.2 × 10⁻¹⁰ m/s²), successfully explains the flat rotation curves of galaxies but struggles to account for this decline. A model based on a Navarro-Frenk-White (NFW) dark matter halo with a scale radius of 4 kpc fits the decline well, whereas MOND fails with a standard baryonic model. By adjusting the parameters of the stellar and HI disks with an MCMC analysis, a good fit can be achieved, but it requires a very massive stellar disk (~10¹¹ M⊙) and a much lower value of a₀ than typically assumed, calling into question the effectiveness of MOND in this specific case.
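For reference, MOND replaces the Newtonian acceleration a_N by the solution of (standard formulation, with μ an interpolating function such as the "simple" choice μ(x) = x/(1+x))
\[ \mu\!\left(\frac{a}{a_0}\right) a = a_N, \qquad a \to \sqrt{a_N\, a_0} \;\;\text{for}\;\; a \ll a_0, \]
so lowering a₀, as the fit above requires, weakens the deep-MOND boost relative to Newtonian dynamics.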