CosmoSim Blog

We have had some active developments on our UWS client for scripted database access.
One of the changes to make it more standards-compliant was to append ‘/phase’ to the URI when starting a job with ‘phase=run’.
We adjusted our Daiquiri instance to answer these requests correctly; requests sent to the URI without ‘/phase’ will still work as well, to ensure backwards compatibility.
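For illustration, here is a minimal sketch of starting a job this way from a script, using Python and the requests library. The endpoint URL, job id and credentials below are placeholders, not actual values from our service; the parameter spelling PHASE=RUN follows the UWS standard.

# Minimal sketch: start a previously created (pending) UWS job by posting the
# phase to its /phase sub-resource. URL, job id and credentials are placeholders.
import requests

BASE_URL = "https://www.cosmosim.org/uws/query"   # assumed UWS job list URL
AUTH = ("your_username", "your_password")         # your CosmoSim credentials
job_id = "123456789"                              # id of the job to start

response = requests.post(f"{BASE_URL}/{job_id}/phase",
                         data={"PHASE": "RUN"},
                         auth=AUTH)
response.raise_for_status()
print("job started, HTTP status:", response.status_code)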


The Rockstar catalogue for SMDPL is now public.
We have also added an index on upId, the link to the top-most host halo, which speeds up subhalo queries considerably. For example, it is now faster to retrieve all subhalos for a given host halo with a query like this:

SELECT rockstarId, x, y, z, Rvir, Mvir FROM SMDPL.Rockstar 
WHERE snapnum=116 AND upId=12067965493

This query retrieves all halos with the same upId and thus all halos belonging to the same host halo. The chosen host halo in this case is the most massive halo at redshift 0 (snapnum=116 for this simulation); a sketch of how such a halo can be looked up is given below.
The upId index was also added to MDPL2.Rockstar. For MDR1 this was not necessary, since that dataset is much smaller and queries are generally faster.
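In case you do not yet know the rockstarId of a suitable host halo, here is a minimal sketch of how one might look it up from a script; the SQL below simply picks the most massive halo at snapnum=116 and is only an illustration, not part of the catalogue documentation.

# Minimal sketch: a query string for finding the most massive halo at snapnum=116,
# e.g. for scripted submission; illustrates how such a host halo id can be found.
host_query = """
SELECT rockstarId, Mvir
FROM SMDPL.Rockstar
WHERE snapnum=116
ORDER BY Mvir DESC
LIMIT 1
"""
print(host_query)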


We now have Rockstar catalogues available for two simulations: MDR1 and MDPL2. These catalogues contain dark matter halos along with consistent merger-tree information.
This can be used to track e.g. the mass accretion history of dark matter halos, the number of mergers in each time step and much more. The catalogues can also be used to add galaxies via semi-analytic models later on. We may publish one or two of these semi-analytic model catalogues in the future.

For more background information on Rockstar, please visit our Rockstar documentation page and Rockstar table description and look into the references given there.

Here’s an example query to get all progenitors of a selected halo which are more massive than 10^11 Msun/h.

SELECT p.x, p.y, p.z, p.Mvir, p.scale, p.rockstar_snapnum FROM MDPL2.Rockstar AS p, 
(SELECT depthFirstId, lastProg_depthFirstId FROM MDPL2.Rockstar WHERE rockstarId=12657871796) AS r 
WHERE p.depthFirstId BETWEEN r.depthFirstId AND r.lastProg_depthFirstId 
AND p.Mvir > 1.e11
ORDER BY p.snapnum

Plotted with Topcat, the spatial distribution of these progenitors looks like this:

Position of progenitors for a halo in 3D. The colors represent the scale factor, from early times (dark blue) until today (redshift 0, red).
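Once such a job is finished and the result table is downloaded (e.g. as CSV), a few lines of Python are enough to turn the progenitor list into a rough mass accretion history; this is only a sketch, and the file name below is a placeholder.

# Minimal sketch: read the downloaded progenitor table and follow the most
# massive progenitor per output time (a simple main-branch accretion history).
import pandas as pd
import matplotlib.pyplot as plt

progenitors = pd.read_csv("progenitors.csv")       # placeholder file name

# most massive progenitor at each scale factor
history = progenitors.groupby("scale")["Mvir"].max()

plt.semilogy(history.index, history.values)
plt.xlabel("scale factor a")
plt.ylabel("Mvir [Msun/h]")
plt.show()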


Thanks to one of our CosmoSim users we realized that there was a bug in UWS: whenever a query on a user table was sent via UWS (scripted access), the returned result always contained 0 rows.
Luckily, Jochen discovered the problem and fixed this bug! Queries involving your own user tables should now work as expected.

If you notice something strange going on that is not covered in this blog (see Known errors) or in the documentation, please send us a message via the Contact form. We need your feedback to improve this service!


Getting the mass function of dark matter halos (and galaxies) is now easier than ever: we’ve added a “Mass function form” especially made to help you with this. You can find it at the Query interface on the left side, just below “SQL query”:

The new "Mass function query" form for calculating the mass functions for halos from different simulations and catalogues for different timesteps.

The new “Mass function query” form for calculating the mass functions for halos from different simulations and catalogues for different timesteps.

After selecting your desired simulation and (halo) catalogue, submit your query with the Submit button.
If your query times out (very likely for non-standard mass columns), please select the Long queue before submitting.

The form takes care of formulating the correct SQL query based on your selection; a sketch of what such a query can look like is given below. You can review the actually submitted query anytime in your job details.
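To give an idea, here is a minimal sketch of a binned counting query, built as a Python string for scripted submission; the table, mass column, snapshot number and bin width are illustrative assumptions, not necessarily what the form produces.

# Minimal sketch: count halos in logarithmic mass bins. Table name, mass column,
# snapshot number and bin width are assumptions for illustration only.
table = "MDPL2.FOF"     # assumed halo catalogue
mass_col = "mass"       # assumed mass column of that catalogue
snapnum = 125           # assumed snapshot number
dlogm = 0.1             # bin width in dex

query = f"""
SELECT FLOOR(LOG10({mass_col})/{dlogm})*{dlogm} + {dlogm}/2 AS log_mass,
       LOG10(COUNT(*)) AS log_num_halos
FROM {table}
WHERE snapnum={snapnum}
GROUP BY FLOOR(LOG10({mass_col})/{dlogm})
ORDER BY log_mass
"""
print(query)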

You can check the available redshifts and snapshot numbers for each simulation and catalogue by querying the corresponding AvailHalos table. The form already takes this into account by choosing the closest match from all redshifts available for the selected catalogue.

When your job is finished, the Results Table tab shows a table with the logarithmic mass at the center of each mass bin and the (logarithmic) number of halos in each bin. Want a quick plot of your results as well? Just use the Plot tab at the query interface or send your table to Topcat via SAMP. Sending multiple tables to Topcat lets you directly compare mass functions for different simulations and redshifts.

Mass functions for different simulations at redshift z = 0, extracted using the Mass function query form and plotted with Topcat.


Today I renamed the density tables for MDR1 as follows:
  • Dens512 => Dens512_z0
  • Dens1024 => Dens1024_z0
This was done in order to have names matching the density tables for the Bolshoi simulation.
Now all density tables ending with _z0 were produced using the same algorithm, by Jaime Forero-Romero.
The density tables without this suffix contain density information for more than one redshift and were produced by Anatoly Klypin using cloud-in-cell smoothing on a 1024^3 grid, i.e. the smoothing scale is one cell, ~244 kpc/h for Bolshoi. To produce densities on coarser grids (saving disk space and shortening query times), the average density of adjacent grid cells was used; a sketch of this averaging is given below.
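As a minimal sketch (not the code that was actually used), averaging adjacent cells to obtain a coarser grid can look like this:

# Minimal sketch: coarsen a cubic density grid by averaging blocks of adjacent
# cells; a factor of 2 per dimension turns e.g. a 1024^3 grid into a 512^3 grid.
import numpy as np

def coarsen(dens, factor=2):
    n = dens.shape[0]
    assert n % factor == 0, "grid size must be divisible by the factor"
    m = n // factor
    # group the cells into (factor x factor x factor) blocks and average each block
    return dens.reshape(m, factor, m, factor, m, factor).mean(axis=(1, 3, 5))

# small random grid standing in for a real density field
dens = np.random.rand(8, 8, 8)
print(coarsen(dens).shape)   # -> (4, 4, 4)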


We have now updated the CosmoSim web interface to the latest Daiquiri version. There are only small differences, mainly a restructured layout of the query interface and some minor bug fixes. The Query Form documentation has also been updated, so it should give you a good overview of where to find what.

The main changes include:

  • New SQL query: At the Query interface, the ‘SQL query’ tab is now replaced by a link above the job list. If you browse your results and want to go back to enter a new query, click on this link or on the ‘Query’ link at the top menu of the page.
  • Job overview: The parameters given here are more structured now. The main parameters come first, followed by the Remove/Rename job links. The query plan comes last, since usually users won’t look into this anyway.
  • SAMP: The link for the SAMP connection has moved to a new SAMP tab. Since we also made the switch from http to https, browsers may have difficulties when executing the SAMP script. Just follow the instructions given at the SAMP tab of the query interface to confirm the security exception.
  • Examples, Database and Function browser: Links to open the database browser and list of examples are now placed on top, above the query form. We also added a function browser to see the available keywords and functions that you can use in your SQL query.

Query interface with the (9) Job Overview for a finished job. It shows the (10) executed query, (11) job parameters and timing, and (12) buttons for renaming or deleting a job.
At the top one can also switch to the (8) Results Table tab or the (13) SAMP tab for exchanging data, use the simple Plot feature, or Download the result table. Also see the updated Query Form documentation.


Today we release two long-awaited simulations: MultiDark Planck 2 (MDPL2) and the Small MultiDark Planck simulation (SMDPL).

MDPL2 is similar to MDPL in box size and particle resolution, but with a different random seed. We will add more data products for this simulation in the future. There are also more snapshots available, which makes this data set better suited for tracking the merging history of halos.

SMDPL has a smaller box size than MDPL2 (400 Mpc/h side length) but the same number of particles (3840^3), which gives a very good mass resolution. Though its box is larger than that of the Bolshoi simulation, its mass resolution is still better by a factor of 1.4.

Slice through the Small MultiDark Planck simulation (SMDPL) at redshift z = 0.51

Distribution of FOF halos from a slice of the MDPL2 simulation at z=0. The size and color of regions indicate halo mass and projected density.

For both simulations we publish the usual FOF catalogues. We are also working on making Rockstar catalogues available. If you want a sneak preview of the Rockstar tables, please send a mail via the contact form and we will add you to our test users. This gives you access to the Rockstar data that is already in place but not yet published for everyone.


Last week we had our first teacher workshop with CosmoSim!
In the course of a teacher training in Potsdam, Germany, called “100 Jahre Allgemeine Relativitätstheorie – Status und Ausblick” (“100 Years of General Relativity – Status and Outlook”), funded by the Heraeus foundation, we offered a hands-on session with CosmoSim. We guided the participants through querying density fields from the Bolshoi simulation for several time steps to show how structure forms and evolves over time. It is also possible to overplot the positions of dark matter halos or to extract the particle distribution of individual halos.
The tutorial is available here: Tutorial; further material (including all the data files) is available here: Materials.
Since all the data is publicly available, even students and pupils can get their hands on the latest data in cosmology!

For this workshop we created special workshop accounts, so that each participant could work on his/her own data set without having to register beforehand. If you would like a set of > 10 accounts for your own workshop, please contact us and we’ll be happy to help.


From time to time you may encounter some errors or issues with our CosmoSim database. Some of them are expected behaviour, others are server- or software-related issues which won’t be fixed soon.
So here’s a list of things that may go awry and what to do in these cases.

Job stops with error: Table '...' doesn't exist

You tried to query a table that does not exist or to which you don’t have access.
Check for typos in your query and make sure that you are logged in – most tables are only accessible to logged-in users (register here (it’s free!) if you haven’t done so yet).

Job (query plan) cannot be submitted, error: Table '...' already exists

This happens if you entered a table name below the SQL form that already exists.
The result table name must be unique, so either enter a new name or remove the previous job with the same table name (click on job in Job list, go to Job Overview and click “Remove job” at the bottom).

If you submitted two jobs shortly after each other and changed the table name of the second job but still get this error message, then the website has not yet updated itself. Wait a few seconds or refresh the page and try again.

Job stays in running state forever (“zombie” jobs)

There is a timeout for each job; for the long queue this is ~40 minutes. If your job seems to be running longer than this, it is highly likely that something went wrong on the server side, e.g. the database server was restarted but the queue was not cleaned up. If you cannot kill your job yourself (highly likely in this case), please send us a contact message with the job ids of the offending jobs. We will then kill them for you, but leave them in your history in case you still need their metadata.

Job stops with error: Got error 10000 'Error on remote system: 2006: MySQL server has gone away' from FEDERATED

This happens frequently if too many jobs are sent to the server at once. The server can then become unresponsive for a short time while dealing with the other jobs. Just submit your query again; it will most likely work then.
If this happened while sending many jobs via the command line, please add a short delay between the job submissions, e.g. 1 second, as in the sketch below. This already helps a lot to avoid the problem.
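In a submission script, the delay can be as simple as in the following sketch; submit_job here is only a placeholder for however you actually submit your jobs (e.g. via the UWS client), not a real function of our service.

# Minimal sketch: pause for about a second between scripted job submissions.
import time

def submit_job(sql):
    """Placeholder for your actual submission call (e.g. via the UWS client)."""
    print("submitting:", sql)

queries = ["SELECT ...", "SELECT ..."]   # your list of SQL statements

for sql in queries:
    submit_job(sql)
    time.sleep(1)                        # short delay before the next submission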

Job stops with error: Table 'aggregation_tmp_...' already exists

This is an error related to PaQu: when reformulating queries to match the parallel server setup, PaQu creates one or more temporary tables named aggregation_tmp_… along the way. Sometimes the table names get mixed up (we are not yet sure where and why this occurs) and this error is produced. It is more likely to happen when there are already many jobs running on the server.
Just resubmit your job until the error no longer occurs. If the problem persists, please contact us.

Job stops with error: Unknown column '...' in 'field list'

Did you already check for typos in your column names? Does the column really exist in the given table?
If you are sure that there is no problem on your side, then it is a PaQu-related problem: the parallel query reformulator still has some problems with column names when joining tables (especially when joining more than two). Please have a look at the PaQu issues page to find solutions and workarounds. Contact us if you cannot find a query that works and does what you want. We’ll be happy to help.

Query page says: The database is currently not available.

Our database server is down or unreachable. If there is no corresponding status message above your job list indicating an ongoing maintenance or problem, please send us a contact message to make us aware of the issue.

