## Archive for the ‘High Energy Physics’ Category

In High Energy Physics on December 10, 2012 by physthjc

**1) The Cosmological Constant Problem**

**Scrutinizing the Cosmological Constant Problem and a possible resolution**

*abstract:* We suggest a new perspective on the Cosmological Constant Problem by scrutinizing its standard formulation. In classical and quantum mechanics without gravity, there is no definition of the zero point of energy. Furthermore, the Casimir effect only measures how the vacuum energy changes as one varies a geometric modulus. This leads us to propose that the physical vacuum energy in a Friedmann-Lemaître-Robertson-Walker expanding universe only depends on the time variation of the scale factor a(t). Equivalently, requiring that empty Minkowski space is stable is a principle that fixes the ambiguity in the zero point energy. We describe two different choices of vacuum, one of which is consistent with the current universe consisting only of matter and vacuum energy. The resulting vacuum energy density is proportional to (k_c H_0)^2, where k_c is a momentum cut-off and H_0 is the Hubble constant; for a cut-off close to the Planck scale, values of the vacuum energy density in agreement with astrophysical measurements are obtained. Another choice of vacuum is more relevant to the early universe consisting of only radiation and vacuum energy, and we suggest it as a possible model of inflation.
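The quoted scaling is easy to sanity-check numerically. A minimal sketch in Python, assuming a cut-off k_c at the Planck mass and a standard value for H_0 (both are my assumed inputs; the abstract only says "close to the Planck scale"):

```python
# Order-of-magnitude check of rho_vac ~ (k_c * H_0)^2 in natural units
# (hbar = c = 1). Constant values below are assumptions, not from the paper.

M_PLANCK_EV = 1.22e28   # assumed cut-off k_c: Planck mass, in eV
H0_EV = 1.5e-33         # Hubble constant (~70 km/s/Mpc), in eV

rho_vac = (M_PLANCK_EV * H0_EV) ** 2   # predicted scale, eV^4
rho_obs = (2.3e-3) ** 4                # observed dark energy ~ (2.3 meV)^4, eV^4

print(f"rho_vac ~ {rho_vac:.1e} eV^4")
print(f"rho_obs ~ {rho_obs:.1e} eV^4")
print(f"ratio   ~ {rho_vac / rho_obs:.0f}")
```

The ratio comes out of order ten, i.e. the scaling lands within roughly an order of magnitude of the observed value for a cut-off at (or slightly below) the Planck scale, consistent with the abstract's claim.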

**2) Hidden-photon signal in the radio regime**

**Astrophysical searches for a hidden-photon signal in the radio regime**

*abstract:* Common extensions of the Standard Model of particle physics predict the existence of a “hidden” sector that comprises particles with a vanishing or very weak coupling to particles of the Standard Model (visible sector). For very light (m < 10^-14 eV) hidden U(1) gauge bosons (hidden photons), broad-band radio spectra of compact radio sources could be modified due to weak kinetic mixing with radio photons. Here, search methods are developed and their sensitivity discussed, with specific emphasis on the effect of the coherence length of the signal, instrumental bandwidth, and spectral resolution. We conclude that radio observations in the frequency range of 0.03–1400 GHz probe kinetic mixing of ~10^-3 of hidden photons with masses down to ~10^-17 eV. Prospects for improving the sensitivity with future radio astronomical facilities as well as by stacking data from multiple objects are discussed.

In High Energy Physics on November 19, 2012 by physthjc

**1) The Universe as a simulation**

**Constraints on the Universe as a Numerical Simulation**

*abstract:* Observable consequences of the hypothesis that the observed universe is a numerical simulation performed on a cubic space-time lattice or grid are explored. The simulation scenario is first motivated by extrapolating current trends in computational resource requirements for lattice QCD into the future. Using the historical development of lattice gauge theory technology as a guide, we assume that our universe is an early numerical simulation with unimproved Wilson fermion discretization and investigate potentially-observable consequences. Among the observables that are considered are the muon g-2 and the current differences between determinations of alpha, but the most stringent bound on the inverse lattice spacing of the universe, b^(-1) >~ 10^(11) GeV, is derived from the high-energy cut-off of the cosmic ray spectrum. The numerical simulation scenario could reveal itself in the distributions of the highest energy cosmic rays exhibiting a degree of rotational symmetry breaking that reflects the structure of the underlying lattice.
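The quoted bound can be roughly reproduced with a back-of-the-envelope estimate. A sketch under my own assumptions (not the paper's full analysis): the lattice supports momenta only up to p_max ~ π/b, so observing cosmic rays up to some energy E_max requires b^(-1) ≳ E_max/π; the highest observed cosmic-ray energy of ~3×10^20 eV is an assumed input here.

```python
# Back-of-the-envelope check of the inverse lattice spacing bound.
# Assumption: the lattice momentum edge is p_max ~ pi / b, so the highest
# observed cosmic-ray energy E_max implies b^{-1} >~ E_max / pi.

import math

E_MAX_GEV = 3.2e11   # assumed: highest observed cosmic-ray energy (~3.2e20 eV)

b_inv_gev = E_MAX_GEV / math.pi   # lower bound on inverse lattice spacing
print(f"b^-1 >~ {b_inv_gev:.1e} GeV")
```

This lands at ~10^11 GeV, matching the order of magnitude quoted in the abstract.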

**2) Gamma rays**

**Constraining Extended Gamma-ray Emission from Galaxy Clusters**

*abstract:* Cold dark matter models predict the existence of a large number of substructures within dark matter halos. If the cold dark matter consists of weakly interacting massive particles, their annihilation within these substructures could lead to diffuse GeV emission that would dominate over the annihilation signal of the host halo. In this work we search for GeV emission from three nearby galaxy clusters: Coma, Virgo and Fornax. We first remove known extragalactic and galactic diffuse gamma-ray backgrounds and point sources from the Fermi 2-year catalog and find a significant residual diffuse emission in all three clusters. We then investigate whether this emission is due to (i) unresolved point sources; (ii) dark matter annihilation; or (iii) cosmic rays (CR). Using 45 months of Fermi-LAT data we detect several new point sources (not present in the Fermi 2-year point source catalog) which contaminate the signal previously analyzed by Han et al. (arXiv:1201.1003). Including these and accounting for the effects of undetected point sources, we find no significant detection of extended emission from the three clusters studied. Instead, we determine upper limits on emission due to dark matter annihilation and cosmic rays. For Fornax and Virgo the limits on CR emission are consistent with theoretical models, but for Coma the upper limit is a factor of 2 below the theoretical expectation. Allowing for systematic uncertainties associated with the treatment of CR, the upper limits on the cross section for dark matter annihilation from our clusters are more stringent than those from analyses of dwarf galaxies in the Milky Way. We rule out the thermal cross section for supersymmetric dark matter particles for masses as large as 100 GeV (depending on the annihilation channel).


In High Energy Physics on October 16, 2012 by physthjc

**1) Precision Electroweak Data**

**Higgs Couplings and Precision Electroweak Data**

*abstract:* In light of the discovery of a Higgs-like particle at the LHC, we revisit the status of the precision electroweak data, focusing on two discrepant observables: 1) the long-standing 2.4 sigma deviation in the forward-backward asymmetry of the bottom quark A_{FB}^b, and 2) the 2.3 sigma deviation in R_b, the ratio of the Z \rightarrow b \bar b partial width to the inclusive hadronic width, which is now in tension following a recent calculation that includes new two-loop electroweak corrections. We consider possible resolutions of these discrepancies. Taking the data at face value, the most compelling scenario is that new physics directly affects A_{FB}^b and R_b, bringing the prediction into accord with the measured values. We propose a modified 'Beautiful Mirrors' scenario which contains new vector-like quarks that mix with the b quark, modifying the Z b\bar b vertex and thus correcting A_{FB}^b and R_b. We show that this scenario can lead to modifications to the production rates of the Higgs boson in certain channels, and in particular a sizable enhancement in the diphoton channel. We also describe additional collider tests of this scenario.

In High Energy Physics on October 16, 2012 by physthjc

**1) Dark Matter**

**Constraints on Primordial Black Holes as Dark Matter Candidates from Star Formation**

*abstract:* By considering adiabatic contraction of the dark matter (DM) during star formation, we estimate the amount of DM trapped in stars at their birth in different astrophysical environments. If the DM consists partly of primordial black holes (PBHs), they will be trapped together with the rest of the DM and will finally be inherited by the star's compact remnant — a white dwarf (WD) or a neutron star (NS) — which they will destroy in a short time. Observations of WDs and NSs thus impose constraints on the abundance of PBHs. We show that the best constraints come from WDs and NSs in globular clusters, which exclude the DM consisting entirely of PBHs in the mass range $10^{16}\,{\rm g}$ – $10^{26}\,{\rm g}$, the strongest constraint on the fraction $\Omega_{\rm PBH}/\Omega_{DM}\lesssim 10^{-5}$ being in the range of PBH masses $10^{17}\,{\rm g}$ – $10^{18}\,{\rm g}$.

In High Energy Physics on October 16, 2012 by physthjc

**1) Neutrino Physics**

**Muon conversion to electron in nuclei in type-I seesaw models**

*abstract:* We compute the muon to electron conversion in the type-I seesaw model, as a function of the right-handed neutrino mixings and masses. The results are compared with previous computations in the literature. We determine the definite predictions resulting for the ratios between the muon to electron conversion rate for a given nucleus and the rate of two other processes which also involve a mu-e flavour transition: mu -> e gamma and mu -> eee. For a quasi-degenerate mass spectrum of right-handed neutrino masses (which is the most natural scenario leading to observable rates), those ratios depend only on the seesaw mass scale, offering a quite interesting testing ground. In the case of sterile neutrinos heavier than the electroweak scale, these ratios typically vanish for a mass scale of order a few TeV. Furthermore, the analysis performed here is also valid down to very light masses. It turns out that planned mu -> e conversion experiments would be sensitive to masses as low as 2 GeV. Taking into account other experimental constraints, we show that future mu -> e conversion experiments will be fully relevant to detect or constrain sterile neutrino scenarios in the 2 GeV–1000 TeV mass range.

In High Energy Physics on June 4, 2012 by physthjc

**1) Local Dark Matter**

**On the local dark matter density**

*abstract:* An analysis of the kinematics of 412 stars at 1-4 kpc from the Galactic mid-plane by Moni Bidin et al. (2012) has claimed to derive a local density of dark matter that is an order of magnitude below standard expectations. We show that this result is incorrect and that it arises from the invalid assumption that the mean azimuthal velocity of the stellar tracers is independent of Galactocentric radius at all heights; the correct assumption, supported by the data, is that the circular speed is independent of radius in the mid-plane. We demonstrate that the assumption of constant mean azimuthal velocity is physically implausible by showing that it requires the circular velocity to drop more steeply than allowed by any plausible mass model, with or without dark matter, at large heights above the mid-plane. Using the correct approximation that the circular velocity curve is flat in the mid-plane, we find that the data imply a local dark-matter density of 0.008 +/- 0.002 Msun/pc^3 = 0.3 +/- 0.1 GeV/cm^3, fully consistent with standard estimates of this quantity. This is the most robust direct measurement of the local dark-matter density to date.
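The unit conversion quoted in the abstract is easy to verify. A short Python check using standard physical constants (the constant values below are my inputs, not the paper's code):

```python
# Unit-conversion check: 0.008 Msun/pc^3 should be roughly 0.3 GeV/cm^3.

M_SUN_KG = 1.989e30    # solar mass, kg
PC_CM = 3.086e18       # parsec, cm
KG_TO_GEV = 5.61e26    # 1 kg expressed in GeV/c^2

rho_msun_pc3 = 0.008                                    # quoted measurement
gev_cm3_per_msun_pc3 = M_SUN_KG * KG_TO_GEV / PC_CM**3  # ~38 GeV/cm^3 per Msun/pc^3
rho_gev_cm3 = rho_msun_pc3 * gev_cm3_per_msun_pc3

print(f"{rho_gev_cm3:.2f} GeV/cm^3")  # -> 0.30 GeV/cm^3, matching the abstract
```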

**2) Pseudo-scalar state of a two-Higgs doublet**

**Is the LHC observing the Pseudo-Scalar state of a two-Higgs doublet model?**

*abstract:* The ATLAS and CMS collaborations have recently shown data suggesting the presence of a Higgs boson in the vicinity of 125 GeV. We show that a two-Higgs doublet model spectrum, with the pseudo-scalar state being the lightest, could be responsible for the diphoton signal events. In this model, the other scalars are considerably heavier and are not excluded by the current LHC data. If this assumption is correct, future LHC data should show a strengthening of the $\gamma\gamma$ signal, while the signals in the $ZZ^{(*)}\to 4\ell$ and $WW^{(*)}\to 2\ell 2\nu$ channels should diminish and eventually disappear, due to the absence of diboson tree-level couplings of the CP-odd state. The heavier CP-even neutral scalars can now decay into channels involving the CP-odd light scalar which, together with their larger masses, allow them to avoid the existing bounds on Higgs searches. We suggest additional signals to confirm this scenario at the LHC, in the decay channels of the heavier scalars into $AA$ and $AZ$. Finally, this inverted two-Higgs doublet spectrum is characteristic of models where fermion condensation leads to electroweak symmetry breaking. We show that in these theories it is possible to obtain the observed diphoton signal at or somewhat above the prediction for the Standard Model Higgs for the typical values of the parameters predicted.

In High Energy Physics on April 5, 2012 by physthjc

**1) Asymptotic Darkness**

**On Asymptotic Darkness in Hořava-Lifshitz Gravity**

*abstract:* Hořava-Lifshitz gravity is shown to exhibit a peculiar behavior when scattering amplitudes are considered. At low energies it seems to classicalize, i.e., the effective size of the interaction grows as a function of the $s$-parameter, with black holes forming part of the spectrum; but when the probing energy is increased such that higher order operators become important, this behavior changes and the classicalon recedes to a new configuration where ordinary quantum regimes take over. Eventually, the theory behaves as a usual field theory that allows probing arbitrarily small distances. In particular, the classical potential created by a point-like source is finite everywhere, exhibiting a Vainshtein-like screening behavior. The transition from one behavior to the other is carefully described in a particular case.

**2) Asymmetric Dark Matter**

**Closing in on asymmetric dark matter I: Model independent limits for interaction with quarks**

*abstract:* It is argued that experimental constraints on theories of asymmetric dark matter (ADM) almost certainly require that the DM be part of a richer hidden sector of interacting states of comparable mass or lighter. A general requisite of models of ADM is that the vast majority of the symmetric component of the DM number density must be removed in order to explain the observed relationship $\Omega_B\sim\Omega_{DM}$ via the DM asymmetry. Demanding the efficient annihilation of the symmetric component leads to a tension with experimental limits if the annihilation is directly to Standard Model (SM) degrees of freedom. A comprehensive effective operator analysis of the model independent constraints on ADM from direct detection experiments and LHC monojet searches is presented. Notably, the limits obtained essentially exclude models of ADM with mass 1 GeV $\lesssim m_{DM} \lesssim$ 100 GeV annihilating to SM quarks via heavy mediator states. This motivates the study of portal interactions between the dark and SM sectors mediated by light states. Resonances and threshold effects involving the new light states are shown to be important for determining the exclusion limits.