
It is a pleasure to present our third newsletter. We aim to keep the release interval close to one month, and never more than two, balancing being informative against being too chatty.


Apart from the regular project progress and IT news, there are several sections on policies that will affect how observations are carried out and what will be required to access data in the future. There is also a section on licensing of data whose embargo has expired.


Taras Yakobchuk introduces the new tool he is developing for visualizing and analyzing calibrated GRIS/GREGOR data. The tool is intended not only to help experts analyze data offered by the SDC, but also to make the data accessible to users who are not familiar with this type of data.


We would like to encourage you to comment openly on any part. Feedback is always welcome and helps us deliver a better product.




📰 Editorial


🔒 What goes here?  

Peter Caligari



SDC Project Status 03-2021 (06.07.2021)


Petri Kehusmaa

Solution Development and Integration

The project has now shifted into a phase where we are building the actual SDC platform and creating or acquiring all necessary components: in-house software for instrument pipelines and analysis; compute, network, and storage hardware; middleware (RUCIO, Kubernetes, Docker, etc.); and governance/management/documentation software such as Jira Service Management and Confluence.

There is still some work to be done to find all suitable solution components and thus to shape the final scope of the SDC. We aim to build the SDC as a service platform for the solar community, with a continuous focus on users and on platform development.

📋 Summary

Current project health: YELLOW

Current project status: Finalizing some tasks for solution design and creating solution components.

Project constraints:

  • Governance model not finalized; implementation not yet started

  • Resources and their availability

  • Technology POCs taking more time than predicted

 📊 Project status

Accomplishments

  • High-level solution design

  • Some software components created (GRIS Viewer)

  • Hardware acquisition process started

  • RUCIO test environment established

 

Next steps

  • Continue selecting and creating solution components

 

Risks & project issues

  • Lack of resources

  • Resource availability

  • Multiple process implementations at the same time

  • No agreed governance model

Governance


👩‍⚖️ Policies, Frameworks & Governance


  • The ITIL v4 process model will be partially adopted for service management purposes

  • Data policies definition started

  • SDC governance model and scope to be decided


Products & Tools


🛠 SDC Products & Tools


Standardized GRIS Pipeline

The GRIS reduction pipeline was merged into a common version in collaboration with M. Collados. The versions running at the OT and in Freiburg now both produce data compatible with downstream SDC tools. The latest version of the pipeline can always be found on the KIS GitLab server. The current OT version will be synced to the ulises branch and merged into the main production branch periodically.

SDC data archive

https://sdc.leibniz-kis.de/

Get access to data from GRIS/GREGOR and LARS/VTT instruments and the ChroTel full-disc telescope at OT.

Updates as of July 2021

  • The detail pages for observations have been reworked (see an example here):

    • Added dynamic carousel of preview data products

    • Added flexible selection for downloading associated data

  • VFISV inversion results have been added for most of the GRIS observations. The website now includes information on line-of-sight velocity and magnetic field strength.

  • The development process has been streamlined:

    • Automated test deployments for quicker iterations and fixes

    • Changes to the UI will occur in regular sprints. We're currently collecting ideas here

  • Added historical ChroTel data for 2013; thanks to Andrea Diercke (AIP) for contacting us and providing this supplemental archive.



📊 Conferences & Workshops


Nazaret Bello Gonzalez

Forthcoming Conferences/Workshops of Interest 2021

Every second Thursday, 12:30-13:30 CET

PUNCH Lunch Seminar (see SDC calendar invitation for zoom links)

  • 11 Feb 2021: PUNCH4NFDI and ESCAPE - towards data lakes

  • 25 Feb 2021: PUNCH Curriculum Workshop

Week of April 12-16 (3 days, TBD)

ESCAPE WP4 Technology Forum 

June 01-02 (16:00 - 17:30)

15th International dCache Workshop

June 10-11

13th International Workshop on Science Gateways | IWSG 2021

Topics:

  • Architectures, frameworks and technologies for science gateways

  • Science gateways sustaining productive collaborative communities

  • Support for scalability and data-driven methods in science gateways

  • Improving the reproducibility of science in science gateways

  • Science gateway usability, portals, workflows and tools

  • Software engineering approaches for scientific work

  • Aspects of science gateways, such as security and stability

June 28, 2021:

Data-intensive radio astronomy: bringing astrophysics to the exabyte era

Topics: 

  • Data-intensive radio astronomy, current facilities and challenges

  • Data science and the exascale era: technical solutions within astronomy

  • Data science and the exascale era: applications and challenges outside astronomy

SDC participation in Conferences & Workshops

Nov. 26, 2020:

2nd SOLARNET Forum Meeting for Telescopes and Databases

Talk: Big Data Storage -- The KIS SDC case, 2nd SOLARNET Forum (Nov 26)
Nazaret Bello Gonzalez, Petri Kehusmaa & Peter Caligari



🤲 SDC Collaborations


 Nazaret Bello Gonzalez

SOLARNET https://solarnet-project.eu

KIS coordinates the SOLARNET H2020 Project, which brings together European solar research institutions and companies to provide access to the large European solar observatories, supercomputing power and data. KIS SDC actively participates in WP5 and WP2, coordinating and developing data curation and archiving tools in collaboration with European colleagues.
Contact on KIS SDC activities in SOLARNET: Nazaret Bello Gonzalez nbello@leibniz-kis.de

 ESCAPE https://projectescape.eu/

KIS is a member of the European Science Cluster of Astronomy & Particle Physics ESFRI Research Infrastructures (ESCAPE H2020, 2019-2022) Project, which aims to bring together people and services to build the European Open Science Cloud. KIS SDC participates in WP4 and WP5 to bring ground-based solar data into the broader astronomical VO and to develop tools for handling large solar data sets.

Contact on KIS SDC activities in ESCAPE: Nazaret Bello Gonzalez nbello@leibniz-kis.de

 

EST https://www.est-east.eu/

KIS is one of the European institutes strongly supporting the European Solar Telescope project. KIS SDC represents EST data centre development activities in a number of international projects, such as ESCAPE and the Group of European Data Experts (GEDE-RDA).

Contact on KIS SDC as EST data centre representative: Nazaret Bello Gonzalez nbello@leibniz-kis.de

 

PUNCH4NFDI https://www.punch4nfdi.de

KIS is a participant (not a member) of the PUNCH4NFDI Consortium. PUNCH4NFDI is the NFDI (National Research Data Infrastructure) consortium of particle, astro-, astroparticle, hadron and nuclear physics, representing about 9,000 scientists with a Ph.D. in Germany, from universities, the Max Planck Society, the Leibniz Association, and the Helmholtz Association. PUNCH4NFDI aims to set up a federated, "FAIR" science data platform offering the infrastructure and interfaces necessary for access to and use of the data and computing resources of the involved communities and beyond. PUNCH4NFDI is currently competing with other consortia for DFG funding (final decision expected in spring 2021). KIS SDC aims to become a full member of PUNCH and to federate our efforts on ground-based solar data dissemination to the broader particle and astroparticle communities.

Contact on KIS SDC as PUNCH4NFDI participant: Nazaret Bello Gonzalez nbello@leibniz-kis.de & Peter Caligari cale@leibniz-kis.de



🖥 IT news


Peter Caligari

Ongoing & Future developments

Webpage

KIS The design of the new website is essentially complete. We are currently making some final technical adjustments to the webserver and Typo3. The website is already running on the deployment (VMware) server at KIS and is publicly available at:

https://newwww.leibniz-kis.de

After the content has been moved, the server will be renamed to www.leibniz-kis.de, and the old site will be shut down.

One of the reasons for the relaunch was to improve support for the particular browsers used by people with disabilities. This requires specific fields in the back-end to be filled in so that page content can be appropriately classified. We will hold a training course on handling the Typo3 back-end in general, focusing on the above points, on

July 13 & 14, 2021, 10:00 CEST (Editors' training)

We currently plan to avoid any user login in the frontend. This would allow us not to use cookies at all, rendering the annoying GDPR popups obsolete. However, it also means we might not have any restricted areas on the website at all (including an Intranet)! This is a radical approach, and we may not be able to follow through with it stringently (see below). In that case, the Intranet part of the website will be limited to purely informational pages; any documents currently downloadable on the old website should be migrated to the cloud (wolke7). In any case, Typo3 allows hosting multiple websites under a single installation, sharing the basic design and resources. Therefore, any websites requiring user registration and login (such as the Intranet or a possible OT webpage) could be built as separate sites, keeping the publicly accessible website login-free.

Network

Status of the dedicated 10 Gbit line between KIS & OT

KIS OT The missing network equipment for the KIS end will be installed in the second week of July. We will then try to establish the link remotely from Freiburg with the help of personnel at the telescopes.

Test of (application) firewalls at KIS

KIS OT Firewall testing at KIS (see https://leibniz-kis.atlassian.net/l/c/rF8kmXjv ) has concluded. Two manufacturers are still being considered, and a final choice will be made as soon as possible.

We (IT) still strongly advocate high-availability setups at both KIS and OT: at KIS because it will host a significant part of the SDC, and at OT because there is no trained personnel on-site and shipping replacements to the Canary Islands takes time.

Storage

KIS SDC We are currently setting up a DELL R740XD2 as a (fake) dCache cluster running two (redundant) dCache pools, offering a net capacity of about 100 TB to KIS and alleviating the currently pressing storage shortage. This host serves as a testbed for simulating hardware and network failures in the coming dCache cluster. Starting in July, six more comparable hosts will be procured through a public tender. These will have a similar setup and will form the storage Tier 1 (near-line) of the SDC at KIS. We expect the hosts to arrive in late September.

SDC In parallel, we are looking into outsourcing seldom-accessed files to the public cloud. Within the framework of the SDC, the cloud is planned mainly to flexibly cover short-term peaks in demand.

The cost per TB of cloud storage depends strongly on capacity and, above all, on the access pattern; it varies between approx. 60 and 200 €/TB/a. Access-independent models, which charge only a fixed fee per stored GB with no fees for downloading or uploading, are at the upper end of this scale. At the lower end are public providers such as Amazon, Google and Microsoft, which charge a relatively high fee for each type of data access on top of the (relatively cheap) price of plain storage.

Additionally, licence fees of a similar magnitude are required for the software that moves files between the cloud and the local storage at KIS.

We are currently obtaining concrete offers for outsourcing 100 TB to a public cloud for one year. The pricing models are so complicated that we can determine the resulting costs only through a limited real-world test.
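To illustrate why the pricing models are hard to compare on paper, the following sketch contrasts a flat-fee model with an access-based one for a 100 TB archive. All prices and the egress fee are hypothetical placeholders chosen within the 60-200 €/TB/a range mentioned above, not quotes from any provider:

```python
# Hypothetical comparison of two cloud-storage pricing models.
# All numbers are illustrative placeholders, not real provider quotes.

def flat_fee_cost(stored_tb, price_per_tb_year=180.0):
    """Access-independent model: fixed fee per stored TB, no egress fees."""
    return stored_tb * price_per_tb_year

def access_based_cost(stored_tb, downloaded_tb,
                      price_per_tb_year=60.0, egress_per_tb=50.0):
    """Cheap storage, but every download (egress) is billed separately."""
    return stored_tb * price_per_tb_year + downloaded_tb * egress_per_tb

stored = 100                        # TB kept in the cloud for one year
for downloaded in (10, 100, 300):   # TB read back per year
    flat = flat_fee_cost(stored)
    access = access_based_cost(stored, downloaded)
    cheaper = "flat-fee" if flat < access else "access-based"
    print(f"egress {downloaded:3d} TB/a: flat {flat:.0f} EUR, "
          f"access-based {access:.0f} EUR -> {cheaper} model cheaper")
```

With these placeholder prices, the access-based model wins for light usage but becomes more expensive than the flat fee once enough data is read back per year; which side of the crossover our archive falls on depends entirely on the real access pattern, which is exactly why a limited real-world test is needed.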

We will intentionally design the integration so that it is apparent to all users which files are in the cloud and which are not. Although this is cumbersome (and artificially induced), we deem this awareness essential, at least initially, while we have no experience of the potential costs involved. The exact model is still to be worked out, and we will inform you about it in due course.

OT The two new nodes for jane have arrived at OT. The installation will be done as soon as either Peter Caligari can travel there or we can get a DELL technician up to the telescopes. Due to Covid-19, the time scale for this installation remains unclear. We will keep you informed.

Current Resources

Compute nodes

hostname | # of CPUs & total cores | RAM [GB]

patty KIS; legs & louie KIS (installed but not publicly available yet; nearly there…) | 2 x AMD EPYC 7742, 128 cores | 1024

itchy & selma KIS | 4 x Intel Xeon E5-4657L v2 @ 2.40 GHz, 48 cores | 512

scratchy KIS; quake & halo KIS/SEISMO; hathi OT | 4 x Intel Xeon E5-4650L @ 2.60 GHz, 32 cores | 512

Central storage space

Total available disk space for /home (KIS OT), /dat (KIS OT), /archive (KIS), /instruments (OT)

name | total [TB, gross] | free [TB, gross]

mars KIS | 758 | 39

quake KIS/SEISMO | 61 | 0

halo KIS/SEISMO | 145 | 44.5

jane OT | 130 (-> 198) | 23

