
D64 OpenAIRE Maintenance Report » History » Version 46

Jochen Schirrwagen, 15/12/2016 03:16 PM
table data sources

h1. D6.4 OpenAIRE Maintenance Report (v1, 9th of December 2016)

{{toc}}

h2. Overview

This document describes the deployment and status of the OpenAIRE2020 services and content, and records the history of major modifications of the system hosted at ICM, Poland. The Zenodo repository is hosted at CERN, Switzerland.

The official maintenance of the OpenAIRE2020 services began on January 1st, 2015, when the project started.

*TODO: elaborate the following*

The deliverable will consist of a high-level report on the status of:
* OpenAIRE workflows (CNR),
* services (ICM),
* the Information Space (UNIBI).

h2. Information Space

Brief description of the data model and status of content, i.e. numbers about data providers, their typology, publications, datasets, links, etc.
@UNIBI: your contribution is needed here

The OpenAIRE [[Core Data Model]] comprises the following interlinked entities: **results** (in the form of publications, datasets and patents), **persons**, **organisations**, **funders**, **funding streams**, **projects**, and **data sources** (in the form of institutional, thematic, and data repositories, Current Research Information Systems (CRIS), thematic and national aggregators, publication catalogues, and entity registries).

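The interlinked entities above can be sketched as plain data structures. This is a minimal illustration only: all class and field names here are assumptions for readability, not the actual OpenAIRE schema.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical, simplified rendering of the interlinked entities;
# names and fields are illustrative, not the OpenAIRE data model itself.

@dataclass
class DataSource:
    name: str
    source_type: str  # e.g. "institutional repository", "CRIS", "aggregator"

@dataclass
class Project:
    code: str
    funder: str           # funders and funding streams are separate entities
    funding_stream: str   # in the real model; flattened here for brevity

@dataclass
class Result:
    title: str
    result_type: str  # "publication", "dataset" or "patent"
    collected_from: List[DataSource] = field(default_factory=list)
    projects: List[Project] = field(default_factory=list)

# A publication harvested from a repository and linked to a project:
repo = DataSource("Example Repository", "institutional repository")
proj = Project("123456", "EC", "H2020")
pub = Result("An example article", "publication", [repo], [proj])
print(pub.result_type, pub.projects[0].funder)
```

The point of the sketch is the linking: a result carries references to the data sources it was collected from and the projects that funded it.
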
h3. Data Sources

|*data source type*|*compatibility*|*count*|
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |

h3. Content Status

|*data type*|*count*|
|publication metadata|17460368|
|dataset metadata|3226586|
|projects|653268|
|organizations|64591|
|authors|16188328|
|EuropePMC XML fulltext|1574358|
|PDF fulltext|2227458|

|*inference type*|*count*|
|datasets matched|88610|
|projects matched|351302|
|software URL references|21481|
|protein DB references|196462|
|research initiative references|7294|
|documents classified|2405869|
|similar documents found|164602477|
|citations matched by reference text|11390293|
|citations matched by id|3929053|

Tables are based on the IIS report generated on November 20, 2016 for the OpenAIRE production infrastructure.

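As a rough illustration of what these counts imply, fulltext coverage can be derived directly from the content status table above. This is a back-of-the-envelope upper bound, assuming each fulltext belongs to a distinct publication record:

```python
# Counts taken from the content status table above.
publications = 17_460_368
xml_fulltexts = 1_574_358   # EuropePMC XML fulltext
pdf_fulltexts = 2_227_458   # PDF fulltext

fulltexts = xml_fulltexts + pdf_fulltexts
coverage = fulltexts / publications  # upper bound: assumes no overlap
print(f"{fulltexts} fulltexts for {publications} publication records "
      f"(~{coverage:.1%})")
```

With these numbers, at most roughly one in five publication records has a fulltext available for mining.
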
h3. Zenodo Content Status

|*data type*|*count or size*|
|records|96771|
|managed files|181778|
|files total size|8TB|

h2. [[OpenAIRE workflows]]

The OpenAIRE aggregation system is based on the "D-NET software toolkit":http://www.d-net.research-infrastructures.eu/. D-NET is a service-oriented framework specifically designed to help developers construct custom aggregative infrastructures in a cost-effective way. D-NET offers data management services capable of providing access to different kinds of external data sources, storing and processing information objects of any data model, converting them into common formats, and exposing information objects to third-party applications through a number of standard access APIs. Most importantly, D-NET offers infrastructure enabling services that facilitate the construction of domain-specific aggregative infrastructures: the needed services can be selected, configured, and easily combined to form autonomic data processing workflows.

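The service-composition idea described above — independent data management services chained into an aggregation workflow — can be sketched as follows. This is a schematic stand-in, not D-NET code; every function and field name here is an assumption:

```python
# Schematic aggregation workflow: collect -> transform -> store/index.
# Each function stands in for a D-NET data management service; the real
# system passes records between services via ResultSets.

def collect(source):
    """Stand-in for a harvesting service (e.g. an OAI-PMH collector)."""
    return [{"source": source, "raw": f"record-{i}"} for i in range(3)]

def transform(record):
    """Stand-in for a transformation service mapping records to a common format."""
    return {"id": record["raw"], "provenance": record["source"], "format": "common"}

def store(records, index):
    """Stand-in for a storage/indexing service exposing records to APIs."""
    for r in records:
        index[r["id"]] = r
    return index

index = {}
records = [transform(r) for r in collect("example-repository")]
store(records, index)
print(sorted(index))  # record ids now available in the common format
```

The value of the workflow approach is that each stage is a replaceable service, so the same pipeline shape can serve very different data sources.
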
The Enabling Layer contains the Services supporting the application framework. These provide functionalities such as Service registration, discovery, subscription and notification, as well as data transfer mechanisms through ResultSet Services. Most importantly, these Services are configured to orchestrate Services of the other layers in order to fulfil the OpenAIRE-specific requirements and implement the *[[OpenAIRE workflows]]*.

h2. Services

This section describes how the system is maintained.

h3. Software life-cycle

h4. D-NET services

The D-NET services are shipped as web applications and deployed on the Tomcat application server (v7.0.52) on three distinct systems: dev, beta, and production. To support the deployment process, all software artifacts are automatically built on a continuous integration system ("Jenkins":https://ci.research-infrastructures.eu) and hosted on a dedicated Maven repository ("Nexus":http://maven.research-infrastructures.eu/nexus), while webapp builds are made available via an "HTTP server":http://ppa.research-infrastructures.eu/ci_upload. These tools supporting the software lifecycle are maintained by CNR.

The D-NET services deployment is performed in subsequent stages:
* The development infrastructure plays the role of a test bench for the integration of the software developed by the different institutions. It is maintained by CNR, runs mostly unreleased code, and contains mock data or subsets of the data available on the production system.
* The beta infrastructure runs only released code. It is maintained by ICM and constitutes the final integration stage, where all system workflows are tested on real data (not necessarily the same data as on the production system) before they are made available to the production system. Although the software running on the beta system is not yet production-ready, its portal is publicly accessible in order to showcase new features and data.
* The production infrastructure is maintained by ICM and runs only code that was tested on the beta system.
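The promotion policy implied by these stages can be expressed as a small check. This is illustrative only; the real process is driven by the CI and release tooling, and the function, field, and release names below are assumptions:

```python
# Staged deployment policy: dev -> beta -> production.
# Dev may run unreleased code; beta requires a release; production
# requires the release to have already been tested on beta.

def may_deploy(release, target, tested_on):
    """Return True if `release` may be deployed to stage `target`.

    `tested_on` maps a release name to the set of stages it has passed.
    """
    if target == "dev":
        return True
    if target == "beta":
        return release.get("released", False)
    if target == "production":
        return (release.get("released", False)
                and "beta" in tested_on.get(release["name"], set()))
    raise ValueError(f"unknown stage: {target}")

r = {"name": "dnet-1.2.0", "released": True}  # hypothetical release
print(may_deploy(r, "production", {"dnet-1.2.0": {"dev"}}))          # -> False
print(may_deploy(r, "production", {"dnet-1.2.0": {"dev", "beta"}}))  # -> True
```
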

D-NET backend services are packed into four different web applications, each of them running on a dedicated Tomcat instance.

h4. Information Inference Service

The "Information Inference Service":https://github.com/openaire/iis versioning and deployment process is described on the "IIS versioning and deployment":https://issue.openaire.research-infrastructures.eu/projects/openaire/wiki/IIS_versioning_and_deployment wiki page.

Formerly, IIS was deployed on a CDH4 cluster. On October 1st, 2015, a dedicated "cdh5 branch":https://github.com/openaire/iis/commits/cdh5 was created, in which new Spark modules were introduced and existing modules were optimized. On November 20, 2016, for the first time, all inferences in the production infrastructure were generated by IIS deployed on the new CDH5 OCEAN cluster. Both improved stability and a major performance increase were observed: inference generation time decreased from over 2 days to 12 hours.

h4. Portal

The OpenAIRE portal is hosted at ICM. It uses Joomla! 3.6.2, a free dynamic portal engine and content management system (CMS).

Joomla! depends on the following upstream applications:

* Apache 2.4.7
* PHP 5.5.9
* MySQL 5.5.53
* OpenLDAP 2.4.31

h4. Zenodo

The Zenodo repository runs an instance of the Invenio software, developed by CERN.

The repository is deployed as a production system (https://zenodo.org) and a QA system (https://sandbox.zenodo.org). In total, the two systems run on 30 VMs hosted in CERN's OpenStack infrastructure. All machines are configured using Puppet and run on top of CERN CentOS 7.2.

Zenodo/Invenio depends on the following applications:
* HAProxy for load balancing
* Nginx for serving static content and proxying requests to the application server
* uWSGI as the application server for the Zenodo/Invenio application
* Redis as an in-memory cache
* RabbitMQ as the message broker
* Celery as the distributed task queue
* PostgreSQL as the database
* Elasticsearch as the search engine

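The RabbitMQ/Celery pair in this stack implements a classic producer/worker pattern: the web application enqueues tasks, and background workers execute them asynchronously. A minimal stdlib sketch of that pattern follows — not Zenodo's actual code, just the shape that Celery provides on top of RabbitMQ:

```python
import queue
import threading

# Minimal producer/worker sketch of the broker + task-queue pattern.
# In Zenodo, RabbitMQ plays the broker role and Celery workers run tasks.

jobs = queue.Queue()  # stands in for the message broker
results = []

def worker():
    """Stands in for a Celery worker consuming tasks from the broker."""
    while True:
        task = jobs.get()
        if task is None:      # sentinel: shut the worker down
            break
        name, payload = task
        results.append(f"{name} done for {payload}")
        jobs.task_done()

t = threading.Thread(target=worker)
t.start()

# The web application "dispatches" tasks instead of running them inline:
jobs.put(("index_record", "record-42"))
jobs.put(("send_email", "user@example.org"))
jobs.put(None)
t.join()
print(results)
```

Decoupling slow work (indexing, emails, file processing) from the request cycle is what keeps the web tier responsive under load.
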
The deployment process is described at http://zenodo.readthedocs.io/projectlifecycle.html#release-process

See https://github.com/zenodo/zenodo/commits/production for changes to the Zenodo production system (this does not include changes to Invenio modules).

The Zenodo repository was relaunched on Invenio v3 (alpha) on September 12, 2016.

h3. Infrastructure services

Because the OpenAIRE2020 services are a continuation and incremental extension of the services that resulted from the OpenAIRE+ project, they are still hosted on the same machines. More details are available in the "OpenAIRE+ WP5 Maintenance Report":http://wiki.openaire.eu/xwiki/bin/view/OpenAIREplus%20Specific/WP5%20Maintenance%20Report.

h4. Hadoop clusters

h5. DM hadoop cluster

CDH version: @cdh4.3.1@

h5. IIS hadoop cluster

Two IIS clusters have been deployed:
* the old CDH4 IIS cluster (version @cdh4.3.1@), in operation until December 9, 2016
* the new CDH5 IIS cluster, deployed on March 22, 2016 in the OCEAN infrastructure, supporting MRv2 on YARN and Spark

CDH5 cluster version history:
* @5.5.2@ deployed on March 22, 2016
* @5.5.2 -> 5.7.5@ upgrade on November 30, 2016
* @5.7.5 -> 5.9.0@ upgrade on December 8, 2016

h4. Databases

|_database type_|_usage_|_version_|
| PostgreSQL | statistics | @9.1.23@ |
| PostgreSQL | D-NET services | @9.3.14@ |
| MongoDB | D-NET services | @3.2.6@ |
| Virtuoso | LOD | @7.2@ |

h4. Piwik analytics platform

Currently deployed Piwik version: @2.17.1@, since December 6, 2016.

h4. ownCloud filesync platform

Deployed at https://box.openaire.eu. Current version: @8.2.7@, since October 19, 2016.

h3. Architectural changes

[[D64_Servers_Administration_Operations_Changelog|Change Log for servers administration operations]]

h4. Introducing CDH5 IIS cluster hosted in OCEAN infrastructure

Slave node specification:
* Huawei RH1288 V3
* CPU: 2x Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz (24 cores, 48 threads)
* RAM: 128GB
* HDD: 4x SATA 6TB 7.2K RPM (HDFS)

Cluster summary (16 slaves):
* CPU: 384 cores, 768 threads
* RAM: 2048GB
* HDD: 384TB (HDFS)

YARN available resources:
* vcores: 640
* memory: 1.44TB
* HDFS: 344TB
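The cluster summary follows directly from the per-node specification; a quick arithmetic consistency check using only the numbers stated above:

```python
# Per-node figures from the slave node specification above.
slaves = 16
cores_per_node, threads_per_node = 24, 48
ram_gb_per_node = 128
hdfs_disks_per_node, disk_tb = 4, 6

assert slaves * cores_per_node == 384                 # cluster CPU cores
assert slaves * threads_per_node == 768               # cluster threads
assert slaves * ram_gb_per_node == 2048               # cluster RAM in GB
assert slaves * hdfs_disks_per_node * disk_tb == 384  # raw HDFS capacity in TB
# YARN exposes only a subset of this: 640 vcores, 1.44TB RAM, 344TB usable HDFS.
print("cluster summary is consistent with the node specification")
```
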

h4. Incorporating resources from old CDH4 IIS cluster into existing DM CDH4 cluster

This task became possible after the old IIS CDH4 cluster was shut down on December 9, 2016.

h4. Deploying D-NET PostgreSQL and MongoDB databases on separate machines

The @openaire-services@ database instances were separated into dedicated ones (since June 27, 2016):
* @openaire-services-postgresql@
* @openaire-services-mongodb@

h4. Updating Zenodo repository infrastructure at CERN

Several architectural changes were introduced in CERN's infrastructure:
* changed storage backend from OpenAFS to CERN EOS (18 PB disk cluster) for better scalability
* changed from a self-managed MySQL database to a PostgreSQL database managed by the CERN Database Team
* deployment of Elasticsearch clusters (6 VMs)
* migrated from SLC to CentOS 7 on all 30 VMs

h3. System downtimes

h4. ICM

* [planned] November 14, 2016, 2 hours. #2423 dealing with the Linux Dirty COW vulnerability: kernel upgrade, OpenAIRE services restart.

h4. CERN

* [unplanned] September 16, 2016, 3 hours. Preparation of a new load balancer triggered an automatic update of the CERN outer perimeter firewall, which closed access to the operational load balancers.
* [planned] September 12, 2016, 8 hours. Complete migration from the old infrastructure to the new infrastructure.
* Minor incidents until September 12, 2016 due to overload of the legacy system.