From: Sandra P. <pr...@in...> - 2018-07-24 17:33:11
Dear all,

we are currently putting a lot of effort into launching the DBpedia Databus and its services soon. However, we would still like to incorporate some more contributions from the DBpedia community into the process. Thus, we are looking for beta-testers for our data release tool and volunteers for our mappings extraction.

Part I - Call for beta-testers

We are currently working on a Data Release Tool for publishing datasets on the DBpedia Databus. We hope to start the first beta tests by the end of August. So, if you 1) have a dataset and 2) want to contribute to a rapid launch of the Databus, we invite you to be a beta-tester for our Data Release Tool. Sign up here <https://goo.gl/forms/q4aiq5lfqCOpiklg2> and we will send you all the necessary information as soon as possible. The only requirement: you need to have a WebID. Therefore, we designed a hands-on tutorial on how to set up your own WebID. Give it a try <https://github.com/dbpedia/webid> and be part of the game. We are looking forward to your feedback.

Part II - Call for volunteers for the mappings extraction

Additionally, we are also looking for volunteers for our mappings extraction. Feel free to drop us a line via db...@in... or sign up here <https://goo.gl/forms/36FuQytm53n7YcfE2>.

Thank you; we are really looking forward to your contribution.

Kind regards,
Sandra

Sandra Prätor, Mag. Artium
DBpedia Association
phone: +49 341 2290 3793
e-mail: pr...@in...
Institute for Applied Informatics (InfAI)
adjunct Institute at the University of Leipzig
Goerdelerring 9 | 04109 Leipzig
Registergericht: AG Leipzig | Registernummer: VR 4342
http://www.infai.org
http://wiki.dbpedia.org
From: ngoko <ng...@li...> - 2018-01-06 10:29:12
Please accept our apologies in case of multiple copies of this CFP.

*******************************************************************
The 7th International Workshop on Parallel and Distributed Computing for Large Scale Machine Learning and Big Data Analytics
May 21, 2018, Vancouver, British Columbia, CANADA
http://parlearning.ecs.fullerton.edu
In conjunction with the 32nd IEEE International Parallel & Distributed Processing Symposium.

Scaling up machine-learning (ML), data mining (DM) and reasoning algorithms from Artificial Intelligence (AI) for massive datasets is a major technical challenge in the time of "Big Data". The past ten years have seen the rise of multi-core and GPU based computing. In parallel and distributed computing, several frameworks such as OpenMP, OpenCL, and Spark continue to facilitate scaling up ML/DM/AI algorithms using higher levels of abstraction. We invite novel works that advance the trio-fields of ML/DM/AI through development of scalable algorithms or computing frameworks. Ideal submissions should describe methods for scaling up X using Y on Z, where potential choices for X, Y and Z are provided below.

Scaling up
• Recommender systems
• Optimization algorithms (gradient descent, Newton methods)
• Deep learning
• Sampling/sketching techniques
• Clustering (agglomerative techniques, graph clustering, clustering heterogeneous data)
• Classification (SVM and other classifiers)
• SVD and other matrix computations
• Probabilistic inference (Bayesian networks)
• Logical reasoning
• Graph algorithms/graph mining and knowledge graphs
• Semi-supervised learning
• Online/streaming learning
• Generative adversarial networks

Using
• Parallel architectures/frameworks (OpenMP, OpenCL, OpenACC, Intel TBB)
• Distributed systems/frameworks (GraphLab, Hadoop, MPI, Spark)
• Machine learning frameworks (TensorFlow, PyTorch, Theano, Caffe)

On
• Clusters of conventional CPUs
• Many-core CPU (e.g. Xeon Phi)
• FPGA
• Specialized ML accelerators (e.g. GPU and TPU)

IMPORTANT DATES
• Paper submission: February 16, 2018 AoE
• Notification: March 16, 2018
• Camera Ready: March 30, 2018

PAPER GUIDELINES
Submitted manuscripts should be up to 10 single-spaced double-column pages using 10-point font on 8.5x11-inch pages (IEEE conference style), including figures, tables, and references. Format requirements are posted on the IEEE IPDPS web page. All submissions must be uploaded electronically at TBA.

TRAVEL AWARDS
Students with accepted papers can apply for a travel award. Please find details at www.ipdps.org
From: Sandra P. <pr...@in...> - 2017-12-04 15:36:44
Sorry for cross-posting!

Dear all,

the next edition of the DBpedia Newsletter is in the midst of preparation. This time we would like to feature more news from the DBpedia community - more news from YOU! Do you have any news you would like to see published and shed more light on? Such as:
* technical developments and advancements,
* new tools, applications, demos, show-cases and the like,
* any news in your chapter,
* other DBpedia-related topics?
Yes? Share your ideas here <https://docs.google.com/document/d/1Bi3RYwEUimixoht8ImkFQ_ibF-_HOI50MrE5xEVw4js/edit> or write a short paragraph and forward it to us via db...@in...

All the best,
your DBpedia Association

Sandra Prätor, Mag. Artium
DBpedia Association
phone: +49 341 97 32355
e-mail: pr...@in...
Institute for Applied Informatics (InfAI)
adjunct Institute at the University of Leipzig
Hainstraße 11 | 04109 Leipzig
Registergericht: AG Leipzig | Registernummer: VR 4342
http://www.infai.org
From: Sandra P. <pr...@in...> - 2017-11-06 11:38:20
To all DBpedians out there,

at the moment, DBpedia has around 20 language chapters, which are concerned with improving the extraction of data from language-specific Wikipedia versions. The DBpedia Association wants to give the community the best possible basis for setting up new chapters as well as for developing the existing ones further. For chapter support, the DBpedia Association plans to establish a position to keep things running smoothly. In case you are interested in the position, let us know and apply here <https://goo.gl/forms/MHBX0fmf7QsOkbIB3>, or forward this information to people who might be interested in supporting the DBpedia team.

All the best,
Sandra, on behalf of the DBpedia Association

Institute for Applied Informatics (InfAI)
Hainstraße 11, 04109 Leipzig
phone: +49 341 97 32355
e-mail: pr...@in...
https://www.facebook.com/dbpedia.org/
From: Niketa <ni...@gm...> - 2017-09-30 20:35:26
Final Call for Papers
South Asian University, Delhi, India, December 14-16, 2017

Apologies for cross-posting. Kindly help to distribute this final CFP to your mailing list.

17th International Conference on Intelligent Systems Design and Applications (ISDA 2017): http://www.mirlabs.net/isda17/
17th International Conference on Hybrid Intelligent Systems (HIS 2017): http://www.mirlabs.net/his17/
7th World Congress on Information and Communication Technologies (WICT 2017): http://www.mirlabs.net/wict17/
9th World Congress on Nature and Biologically Inspired Computing (NaBIC 2017): http://www.mirlabs.net/nabic17/

ISDA 2017, HIS 2017, WICT 2017, NaBIC 2017: Scopus & UGC Approved Proceedings. All accepted and registered papers will be published in the AISC Series of Springer, indexed in ISI Proceedings, EI-Compendex, DBLP, SCOPUS, Google Scholar and SpringerLink. (Listed in the UGC approved list; please refer to List 1, Page 32, S.No. 1375: http://www.ugc.ac.in/pdfnews/8919877_Journals-1.pdf)

** Important Dates ** (Extended)
Paper submission due: October 21, 2017
Notification of paper acceptance: November 05, 2017
Registration and final manuscript due: November 15, 2017
Conference: December 14-16, 2017

Submission Guidelines:
Submission of papers should be made through the submission page on the conference web page. Please refer to the conference website for guidelines to prepare your manuscript. Paper format templates: http://www.springer.com/series/11156
ISDA'17 submission link: https://easychair.org/conferences/?conf=isda2017
HIS'17 submission link: https://easychair.org/conferences/?conf=his20171
WICT'17 submission link: https://easychair.org/conferences/?conf=wict2017
NaBIC'17 submission link: https://easychair.org/conferences/?conf=nabic2017

* Organizing Committee *
General Chairs:
Ajith Abraham, Machine Intelligence Research Labs, USA
Pranab Kr. Muhuri, South Asian University, Delhi
Technical Committee (please refer to the websites):
http://www.mirlabs.net/isda17/committees.php
http://www.mirlabs.net/his17/committees.php
http://www.mirlabs.net/wict17/committees.php
http://www.mirlabs.net/nabic17/committees.php

For technical contact: Ajith Abraham, Email: aji...@ie...
From: Valentine C. <val...@eu...> - 2017-09-05 08:55:24
Dear all,

I would like to invite you to join us at the workshop "Linked Data quality assessment and improvement - from academia to industry", a satellite workshop at the SEMANTiCS 2017 conference in Amsterdam.
https://2017.semantics.cc/satellite-events/linked-data-quality-assessment-and-improvement-academia-industry

Time: Thursday, September 14, 2017, 09:00 to 17:30
Place: The Meervaart Theatre in Amsterdam (Room 10)
Organized by: Valentine Charles and Antoine Isaac (Europeana Foundation), Amrapali Zaveri (University of Maastricht), Wouter Beek (VU University Amsterdam), Péter Király (GWDG Göttingen)
Registration (note that workshop-only registration is possible): https://2017.semantics.cc/prices

Description
There is an unprecedented amount of data on the Web today. However, this data is only as useful as its quality allows it to be. Data quality is an important topic for companies and organizations that use and/or disseminate large and heterogeneous Linked Datasets. These issues arise not only in Linked Open Data, but also in Linked Business Data. At the same time, researchers are coping with Linked Data quality issues as well, and have defined, proposed and evaluated approaches and methodologies for assessing and improving Linked Data quality. Currently there is no single fool-proof method for dealing with Linked Data quality issues. This workshop brings together Linked Data experts from different domains, from academia as well as industry, allowing them to share their ideas, approaches, and lessons learnt with one another.

Intended audience
Everybody who has published and/or consumed Linked Data and has run into Linked Data quality issues.

Program
09.00-09.15 Welcome and outline of the day
09.15-11.00 Presentations about how Linked Data quality issues are encountered and addressed in the following domains:
- Life Science & Health Care, Amrapali Zaveri
- FHIR modeling and standardization, Eric Prud'hommeaux
- Academic Publishing, Elsevier, Véronique Malaisé
- Cultural Heritage, Europeana, Valentine Charles and Antoine Isaac
- Wikidata validation, Andra Waagmeester
- Manufacturing, Semaku, John Walker
- Open Government, Kadaster, Rein van de Veer & Erwin Folmer
- Real Estate, Geo Phy, Dimitris Kontokostas
11.30-12.30 Panel with the presenters
13.30-14.00 Lightning talks: the audience will be invited to share their own experiences with Linked Data quality
14.00-15.00 Hands-on activities around a set of existing tools that allow data quality to be analyzed and improved, including LOD Laundromat, Loupe, and Luzzu
15.30-16.30 Joint session with DBpedia
16.40-17.30 Wrap-up and conclusion of the workshop (recap of the needs for future work and collaboration)

Valentine Charles | Data R&D Coordinator
T: +31 (0)70 314 1182
E: val...@eu... | Skype: charles.valentine
Be part of Europe's online cultural movement - join the Europeana Network Association: http://bit.ly/NetworkAssociation | #AllezCulture | @Europeanaeu | Europeana Pro website <http://pro.europeana.eu/>

Disclaimer: This email and any files transmitted with it are confidential and intended solely for the use of the individual or entity to whom they are addressed. If you have received this email in error please notify the system manager. If you are not the named addressee you should not disseminate, distribute or copy this email. Please notify the sender immediately by email if you have received this email by mistake and delete this email from your system.
From: Sandra P. <pr...@in...> - 2017-08-21 09:20:57
Dear DBpedia enthusiasts,

We are happy to announce that, apart from organizing the DBpedia Day, we are also part of the workshop day at SEMANTiCS 2017, September 11th. Our half-day workshop "Two worlds, one goal: A Reliable Linked Data ecosystem for media - Wolters Kluwer & DBpedia" aims at exploring major topics for publishers and libraries from DBpedia's and Wolters Kluwer's perspective. We will dive into core areas like interlinking, metadata and data quality, and address challenges such as fundamental requirements when publishing data on the web. Did we spark your interest? Check our detailed program <http://bit.ly/2iji9ev> <https://2017.semantics.cc/satellite-events/interlinking-id-management-metadata-data-quality> and get your ticket today. We look forward to meeting you.

All the best,
Sandra, on behalf of DBpedia and Wolters Kluwer

DBpedia Association
Institute for Applied Informatics (InfAI)
Hainstraße 11, 04109 Leipzig
phone: +49 341 97 32355
e-mail: pr...@in...
https://www.facebook.com/dbpedia.org/
From: Vincent B. <vin...@fu...> - 2016-05-09 07:44:32
Hello everyone,

my name is Vincent and I will work on the project "A Hybrid Classifier/Rule-based Event Extractor for DBpedia" during this year's Google Summer of Code. I'm happy to work on DBpedia Spotlight and with you during the summer. You can find the abstract of my project below.

Sincerely,
Vincent Bohlen

Abstract: In modern times, the amount of information published on the internet is growing immeasurably. Humans are no longer able to gather all the available information by hand and are increasingly dependent on machines collecting relevant information automatically. This is why automatic information extraction, and especially automatic event extraction, is important. In this project I will implement a system for event extraction using classification and rule-based event extraction. The underlying data for both approaches will be identical. I will gather Wikipedia articles and perform a variety of NLP tasks on the extracted texts. First I will annotate the named entities in the text using named entity recognition performed by DBpedia Spotlight. Additionally, I will annotate the text with frame semantics using FrameNet frames. I will then use the collected information, i.e. frames, entities, and entity types, with the two aforementioned methods to decide whether the collection is an event or not.
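To illustrate the rule-based half of such a pipeline (this is a hedged sketch, not the project's actual implementation), a text annotated with frames and typed entities could be flagged as an event when a trigger frame co-occurs with suitable entity types. The trigger-frame set, the annotation shape, and the rule itself are all illustrative assumptions:

```python
# Hypothetical set of FrameNet-style frames treated as event triggers.
EVENT_FRAMES = {"Attack", "Arriving", "Competition"}

def looks_like_event(annotation):
    """Rule sketch: an event needs a trigger frame plus a Person/Place entity."""
    has_trigger = any(f in EVENT_FRAMES for f in annotation["frames"])
    has_participant = any(e["type"] in {"Person", "Place"}
                          for e in annotation["entities"])
    return has_trigger and has_participant

# Toy annotation, as such a pipeline might produce for one sentence.
sample = {
    "frames": ["Competition"],
    "entities": [{"surface": "Berlin", "type": "Place"}],
}
print(looks_like_event(sample))  # → True
```

A classifier-based counterpart would replace the hand-written rule with a model trained on the same frame/entity features.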
From: Ted T. Jr <tth...@op...> - 2015-08-10 16:17:45
Hello, Everyone --

Expanding our Pay-As-You-Go (PAGO) offerings of Virtuoso on the Amazon EC2 Cloud, the pre-configured and pre-loaded Virtuoso + DBpedia AMI [1][2] is now available, in addition to the Virtuoso Cloud Edition AMI [3][4]. These instances are EBS-backed AMIs, meaning:

* The Virtuoso DBMS server is preinstalled with basic tuning for the host operating system and environment.
* You can stop and restart the AMI without terminating it -- so changes and current runtime state persist.
* With the hourly model, you pay only for the time the AMI is used.
* With the new DBpedia AMI, you immediately have your own copy of DBpedia available, without having to load your own.

We look forward to your reactions and continued feedback! Please enjoy!

Ted

[1] http://virtuoso.openlinksw.com/dataspace/doc/dav/wiki/Main/VirtPayAsYouGoEBSBackedAMIDBpedia2015
[2] http://kidehen.blogspot.com/2015/08/dbpedia-pay-as-you-go-pago-cloud-edition.html
[3] http://virtuoso.openlinksw.com/dataspace/doc/dav/wiki/Main/VirtPayAsYouGoEBSBackedAMI
[4] http://kidehen.blogspot.com/2015/08/virtuoso-pay-as-you-go-cloud-edition.html

--
A: Yes. http://www.idallen.com/topposting.html
| Q: Are you sure?
| | A: Because it reverses the logical flow of conversation.
| | | Q: Why is top posting frowned upon?

Ted Thibodeau, Jr. // voice +1-781-273-0900 x32
Senior Support & Evangelism // mailto:tth...@op... // http://twitter.com/TallTed
OpenLink Software, Inc. // http://www.openlinksw.com/
10 Burlington Mall Road, Suite 265, Burlington MA 01803
Weblog -- http://www.openlinksw.com/blogs/
LinkedIn -- http://www.linkedin.com/company/openlink-software/
Twitter -- http://twitter.com/OpenLink
Google+ -- http://plus.google.com/100570109519069333827/
Facebook -- http://www.facebook.com/OpenLinkSoftware
Universal Data Access, Integration, and Management Technology Providers
From: Ted T. Jr <tth...@op...> - 2015-08-06 20:29:10
Hello, again --

We've been running the public DBpedia service for several years, and have recently started publishing periodic reports of usage and other analysis, which we're aiming to refresh roughly quarterly going forward.

The latest report, with usage data through July 31, 2015, is now available here -- http://bit.ly/1IL35Xu

We look forward to your reactions!

Ted
From: Luigi A. <lui...@gm...> - 2015-01-19 14:13:59
Hello!

I've been checking DBpedia instead of working on the pure Wikipedia dump, and I have some questions about the consistency of the extracted files against the original Wikipedia dump.

1. `grep 'Terminator_2' mappingbased_properties_en.ttl` won't report the starring actors; compare http://en.wikipedia.org/wiki/Terminator_2:_Judgment_Day

2. page_links_en.ttl seems to contain truncations: http://en.wikipedia.org/wiki/Islamic_State_of_Iraq_and_the_Levant is missing from page_ids_en.ttl while present in page_links_en.ttl and redirects_en.ttl. I found 'Islamic_State_of_Iraq_and_the_Levant' reported as 'Islamic_State_of_Iraq' in page_ids_en.ttl, but this leads to inconsistencies with the other files. Is it a case of truncation or some other error?

3. Is there any way to synchronize live DBpedia with Wikipedia (dbpedia-live-mirror) without installing the whole Virtuoso DB -- just to extract the data one may need (e.g. via Python) from the original dump (e.g. get the same .ttl files)?

4. Finally, does DBpedia use an lxml parser for the whole Wikipedia dump? Is it open source, so it could be adjusted? I tried to use lxml myself, but it fails because there are symbols like <C8> in the middle of the <page> tag, not closed: they are not correct XML tags (they should probably stand for some character), so the iteration fails without clean results.

Thank you so much; I hope my little notes can be of help too.

Luigi
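On question 3, one lightweight alternative to a full Virtuoso install is to stream the published dump files line by line and keep only the triples you need. A minimal sketch, assuming the simple one-triple-per-line N-Triples shape; the sample line and subject IRI are illustrative:

```python
import re

# Matches '<subject> <predicate> object .' lines; object is kept verbatim
# (it may be an IRI or a literal). Prefix-based Turtle lines are not handled.
TRIPLE_RE = re.compile(r'^<([^>]+)>\s+<([^>]+)>\s+(.+?)\s*\.\s*$')

def triples_for_subject(lines, subject_iri):
    """Yield (predicate, object) pairs whose subject matches subject_iri."""
    for line in lines:
        m = TRIPLE_RE.match(line)
        if m and m.group(1) == subject_iri:
            yield m.group(2), m.group(3)

# In practice `lines` would be an open dump file; here a one-line sample.
sample = [
    '<http://dbpedia.org/resource/Terminator_2:_Judgment_Day> '
    '<http://dbpedia.org/ontology/starring> '
    '<http://dbpedia.org/resource/Arnold_Schwarzenegger> .',
]
for pred, obj in triples_for_subject(
        sample, 'http://dbpedia.org/resource/Terminator_2:_Judgment_Day'):
    print(pred, obj)
```

This keeps memory flat regardless of dump size, at the cost of a full scan per query; for repeated queries, loading the filtered triples into a small local store would be the next step.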
From: Ted T. Jr <tth...@op...> - 2015-01-07 20:49:10
Hello, all --

We've just published the latest DBpedia Usage Report [1], covering v3.3 (released July, 2009) to v3.9 (released September, 2013); v3.10 (sometimes called "DBpedia 2014"; released September, 2014) will be included in the next report.

We think you'll find some interesting details in the statistics. There are also some important notes about Virtuoso configuration options and other sneaky technical issues that can surprise you (as they did us!) when exposing an ad-hoc query server to the world.

We welcome your comments, suggestions, and other feedback!

Ted

[1] http://bit.ly/1DymR8p
From: Kingsley I. <ki...@op...> - 2014-10-28 16:02:52
On 10/28/14 11:34 AM, maria terzi wrote:
> Hello,
> Can someone please advise me on when is a good time to access DBpedia resources?
> It seems to be under maintenance at the moment.
>
> Thanks,
> Maria

It's alive. Is there a particular URI that isn't resolving for you?

Kingsley

--
Regards,
Kingsley Idehen
Founder & CEO, OpenLink Software
Company Web: http://www.openlinksw.com
Personal Weblog 1: http://kidehen.blogspot.com
Personal Weblog 2: http://www.openlinksw.com/blog/~kidehen
Twitter Profile: https://twitter.com/kidehen
Google+ Profile: https://plus.google.com/+KingsleyIdehen/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen
Personal WebID: http://kingsley.idehen.net/dataspace/person/kidehen#this
From: maria t. <m....@la...> - 2014-10-28 15:34:09
Hello,

Can someone please advise me on when is a good time to access DBpedia resources? It seems to be under maintenance at the moment.

Thanks,
Maria
From: Christian B. <ch...@bi...> - 2014-09-09 09:24:48
Hi all,

we are happy to announce the release of DBpedia 2014. The most important improvements of the new release compared to DBpedia 3.9 are:

1. The new release is based on updated Wikipedia dumps dating from April/May 2014 (the 3.9 release was based on dumps from March/April 2013), leading to an overall increase of the number of things described in the English edition from 4.26 to 4.58 million things.

2. The DBpedia ontology is enlarged and the number of infobox-to-ontology mappings has risen, leading to richer and cleaner data.

The English version of the DBpedia knowledge base currently describes 4.58 million things, out of which 4.22 million are classified in a consistent ontology (http://wiki.dbpedia.org/Ontology2014), including 1,445,000 persons, 735,000 places (including 478,000 populated places), 411,000 creative works (including 123,000 music albums, 87,000 films and 19,000 video games), 241,000 organizations (including 58,000 companies and 49,000 educational institutions), 251,000 species and 6,000 diseases.

We provide localized versions of DBpedia in 125 languages. All these versions together describe 38.3 million things, out of which 23.8 million are localized descriptions of things that also exist in the English version of DBpedia. The full DBpedia data set features 38 million labels and abstracts in 125 different languages, 25.2 million links to images and 29.8 million links to external web pages; 80.9 million links to Wikipedia categories, and 41.2 million links to YAGO categories. DBpedia is connected with other Linked Datasets by around 50 million RDF links.

Altogether the DBpedia 2014 release consists of 3 billion pieces of information (RDF triples), out of which 580 million were extracted from the English edition of Wikipedia and 2.46 billion were extracted from other language editions.
Detailed statistics about the DBpedia data sets in 28 popular languages are provided at the Dataset Statistics page (http://wiki.dbpedia.org/Datasets2014/DatasetStatistics). The main changes between DBpedia 3.9 and 2014 are described below. For additional, more detailed information please refer to the DBpedia Change Log (http://wiki.dbpedia.org/Changelog).

1. Enlarged Ontology

The DBpedia community added new classes and properties to the DBpedia ontology via the mappings wiki. The DBpedia 2014 ontology encompasses:
- 685 classes (DBpedia 3.9: 529)
- 1,079 object properties (DBpedia 3.9: 927)
- 1,600 datatype properties (DBpedia 3.9: 1,290)
- 116 specialized datatype properties (DBpedia 3.9: 116)
- 47 owl:equivalentClass and 35 owl:equivalentProperty mappings to http://schema.org

2. Additional Infobox to Ontology Mappings

The editor community of the mappings wiki also defined many new mappings from Wikipedia templates to DBpedia classes. For the DBpedia 2014 extraction, we used 4,339 mappings (DBpedia 3.9: 3,177 mappings), distributed as follows over the languages covered in the release:
English: 586, Dutch: 469, Serbian: 450, Polish: 383, German: 295, Greek: 281, French: 221, Portuguese: 211, Slovenian: 170, Korean: 148, Spanish: 137, Italian: 125, Belarusian: 125, Hungarian: 111, Turkish: 91, Japanese: 81, Czech: 66, Bulgarian: 61, Indonesian: 59, Catalan: 52, Arabic: 52, Russian: 48, Basque: 37, Croatian: 36, Irish: 17, Wiki-Commons: 12, Welsh: 7, Bengali: 6, Slovak: 2 mappings.

3. Extended Type System to Cover Articles without Infobox

Until the DBpedia 3.8 release, a concept was only assigned a type (like person or place) if the corresponding Wikipedia article contained an infobox indicating this type.
Starting from the 3.9 release, we provide type statements for articles without an infobox, inferred from the link structure within the DBpedia knowledge base using the algorithm described in Paulheim/Bizer 2014 (http://www.heikopaulheim.com/documents/ijswis_2014.pdf). For the new release, an improved version of the algorithm was run to produce type information for 400,000 things that were formerly not typed. A similar algorithm (presented in the same paper) was used to identify and remove potentially wrong statements from the knowledge base.

4. New and Updated RDF Links into External Data Sources

We updated the following RDF link sets pointing at other Linked Data sources: Freebase, Wikidata, Geonames and GADM. For an overview of all data sets that are interlinked from DBpedia, please refer to http://wiki.dbpedia.org/Interlinking.

*** Accessing the DBpedia 2014 Release ***

You can download the new DBpedia datasets from http://wiki.dbpedia.org/Downloads. As usual, the new dataset is also available as Linked Data and via the DBpedia SPARQL endpoint at http://dbpedia.org/sparql.

*** Credits ***

Lots of thanks to:

1. Daniel Fleischhacker and Volha Bryl (University of Mannheim, Germany) for improving the DBpedia extraction framework, for extracting the DBpedia 2014 data sets for all 125 languages, for generating the updated RDF links to external data sets, and for generating the statistics about the new release.
2. All editors that contributed to the DBpedia ontology mappings via the Mappings Wiki.
3. The whole DBpedia Internationalization Committee for pushing the DBpedia internationalization forward.
4. Dimitris Kontokostas (University of Leipzig) for improving the DBpedia extraction framework and loading the new release onto the DBpedia download server in Leipzig.
5. Heiko Paulheim (University of Mannheim, Germany) for re-running his algorithm to generate additional type statements for formerly untyped resources and to identify and remove wrong statements.
6. Petar Ristoski (University of Mannheim, Germany) for generating the updated links pointing at the GADM database of Global Administrative Areas. Petar will also generate an updated release of DBpedia as Tables soon.
7. Aldo Gangemi (LIPN University, France & ISTC-CNR, Italy) for providing the links from DOLCE to the DBpedia ontology.
8. Kingsley Idehen, Patrick van Kleef, and Mitko Iliev (all OpenLink Software) for loading the new data set into the Virtuoso instance that serves the Linked Data view and SPARQL endpoint.
9. OpenLink Software (http://www.openlinksw.com/) altogether for providing the server infrastructure for DBpedia.
10. Michael Moore (University of Waterloo, as an intern at the University of Mannheim) for implementing the anchor text extractor and contributing to the statistics scripts.
11. Ali Ismayilov (University of Bonn) for implementing Wikidata extraction, on which the interlanguage link generation was based.
12. Gaurav Vaidya (University of Colorado Boulder) for implementing and running Wikimedia Commons extraction.
13. Andrea Di Menna, Jona Christopher Sahnwaldt, Julien Cojan, Julien Plu, Nilesh Chakraborty and others who contributed improvements to the DBpedia extraction framework via the source code repository on GitHub.
14. All GSoC mentors and students for working directly or indirectly on this release: https://github.com/dbpedia/extraction-framework/graphs/contributors

The work on the DBpedia 2014 release was financially supported by the European Commission through the project LOD2 - Creating Knowledge out of Linked Data (http://lod2.eu/).

More information about DBpedia can be found at http://dbpedia.org/About as well as in the new overview article about the project, available at http://wiki.dbpedia.org/Publications.

Have fun with the new DBpedia 2014 release!

Cheers,
Daniel Fleischhacker, Volha Bryl, and Christian Bizer

--
Prof. Dr. Christian Bizer
Data and Web Science Group
University of Mannheim, Germany
ch...@in...
www.bizer.de
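The SPARQL endpoint mentioned in the announcement accepts queries as URL parameters over HTTP GET. As a hedged sketch of how a request URL can be assembled (the query itself is an arbitrary example, and actually sending the request is omitted since it needs network access):

```python
from urllib.parse import urlencode

# Public endpoint named in the release announcement.
ENDPOINT = "http://dbpedia.org/sparql"

# Illustrative query: five resources typed as dbo:Film.
query = """\
SELECT ?film WHERE {
  ?film a <http://dbpedia.org/ontology/Film> .
} LIMIT 5
"""

# Percent-encode the query and ask for JSON results; fetching the URL
# (e.g. with urllib.request.urlopen) is left out of this sketch.
url = ENDPOINT + "?" + urlencode({
    "query": query,
    "format": "application/sparql-results+json",
})
print(url)
```

Any HTTP client can then fetch this URL; the `format` parameter is a common way to request JSON results from Virtuoso-backed endpoints.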
From: Christian B. <ch...@bi...> - 2013-09-23 10:44:52
|
Hi all, we are happy to announce the release of DBpedia 3.9. The most important improvements of the new release compared to DBpedia 3.8 are: 1. the new release is based on updated Wikipedia dumps dating from March / April 2013 (the 3.8 release was based on dumps from June 2012), leading to an overall increase in the number of concepts in the English edition from 3.7 to 4.0 million things. 2. the DBpedia ontology is enlarged and the number of infobox to ontology mappings has risen, leading to richer and cleaner concept descriptions. 3. we extended the DBpedia type system to also cover Wikipedia articles that do not contain an infobox. 4. we provide links pointing from DBpedia concepts to Wikidata concepts and updated the links pointing at YAGO concepts and classes, making it easier to integrate knowledge from these sources. The English version of the DBpedia knowledge base currently describes 4.0 million things, out of which 3.22 million are classified in a consistent Ontology, including 832,000 persons, 639,000 places (including 427,000 populated places), 372,000 creative works (including 116,000 music albums, 78,000 films and 18,500 video games), 209,000 organizations (including 49,000 companies and 45,000 educational institutions), 226,000 species and 5,600 diseases. We provide localized versions of DBpedia in 119 languages. All these versions together describe 24.9 million things, out of which 16.8 million overlap (are interlinked) with the concepts from the English DBpedia. The full DBpedia data set features labels and abstracts for 12.6 million unique things in 119 different languages; 24.6 million links to images and 27.6 million links to external web pages; 45.0 million external links into other RDF datasets, 67.0 million links to Wikipedia categories, and 41.2 million YAGO categories. 
Altogether the DBpedia 3.9 release consists of 2.46 billion pieces of information (RDF triples), out of which 470 million were extracted from the English edition of Wikipedia, 1.98 billion were extracted from other language editions, and about 45 million are links to external data sets. Detailed statistics about the DBpedia data sets in 24 popular languages are provided at http://wiki.dbpedia.org/Datasets39/DatasetStatistics

The main changes between DBpedia 3.8 and 3.9 are described below. For additional, more detailed information please refer to the Change Log (http://wiki.dbpedia.org/Changelog).

1. Enlarged Ontology

The DBpedia community added new classes and properties to the DBpedia ontology via the mappings wiki. The DBpedia 3.9 ontology encompasses

* 529 classes (DBpedia 3.8: 359)
* 927 object properties (DBpedia 3.8: 800)
* 1290 datatype properties (DBpedia 3.8: 859)
* 116 specialized datatype properties (DBpedia 3.8: 116)
* 46 owl:equivalentClass and 31 owl:equivalentProperty mappings to http://schema.org

2. Additional Infobox to Ontology Mappings

The editors of the mappings wiki also defined many new mappings from Wikipedia templates to DBpedia classes. For the DBpedia 3.9 extraction, we used 3177 mappings (DBpedia 3.8: 2347 mappings), distributed as follows over the languages covered in the release: English: 431 mappings, Polish: 382, Dutch: 335, German: 219, Greek: 215, Portuguese: 211, Slovenian: 170, French: 165, Korean: 148, Spanish: 137, Hungarian: 111, Turkish: 91, Japanese: 72, Czech: 66, Italian: 62, Bulgarian: 61, Indonesian: 59, Catalan: 52, Arabic: 51, Russian: 48, Croatian: 36, Basque: 32, Irish: 17, Bengali: 6.

3. Extended Type System to cover Articles without Infobox

Until the DBpedia 3.8 release, a concept was only assigned a type (like person or place) if the corresponding Wikipedia article contained an infobox indicating this type. The new 3.9 release now also contains type statements for articles without an infobox that were inferred from the link structure within the DBpedia knowledge base using the algorithm described in Paulheim/Bizer 2013 [1]. Applying the algorithm allowed us to provide type information for 440,000 concepts that were formerly not typed. A similar algorithm was also used to identify and remove potentially wrong links from the knowledge base.

4. New and updated RDF Links into External Data Sources

We added RDF links to Wikidata and updated the following RDF link sets pointing at other Linked Data sources: YAGO, Freebase, Geonames, GADM and EUNIS. For an overview of all data sets that are interlinked from DBpedia, please refer to http://wiki.dbpedia.org/Interlinking

5. New Find Related Concepts Service

We offer a new service for finding resources that are related to a given DBpedia seed resource. More information about the service is found at http://wiki.dbpedia.org/FindRelated

Accessing the DBpedia 3.9 Release: You can download the new DBpedia datasets from http://wiki.dbpedia.org/Downloads39 As usual, the dataset is also available as Linked Data and via the DBpedia SPARQL endpoint at http://dbpedia.org/sparql

Lots of thanks to: * Jona Christopher Sahnwaldt (Freelancer funded by the University of Mannheim, Germany) for improving the DBpedia extraction framework, for extracting the DBpedia 3.9 data sets for all 119 languages, and for generating the updated RDF links to external data sets. * All editors that contributed to the DBpedia ontology mappings via the Mappings Wiki. * Heiko Paulheim (University of Mannheim, Germany) for inventing and implementing the algorithm to generate additional type statements for formerly untyped resources. 
* The whole Internationalization Committee for pushing the DBpedia internationalization forward. * Dimitris Kontokostas (University of Leipzig) for improving the DBpedia extraction framework and loading the new release onto the DBpedia download server in Leipzig. * Volha Bryl (University of Mannheim, Germany) for generating the statistics about the new release. * Petar Ristoski (University of Mannheim, Germany) for generating the updated links pointing at the GADM database of Global Administrative Areas. * Kingsley Idehen, Patrick van Kleef, and Mitko Iliev (all OpenLink Software) for loading the new data set into the Virtuoso instance that serves the Linked Data view and SPARQL endpoint. * OpenLink Software (http://www.openlinksw.com/) altogether for providing the server infrastructure for DBpedia. * Julien Cojan, Andrea Di Menna, Ahmed Ktob, Julien Plu, Jim Regan and others who contributed improvements to the DBpedia extraction framework via the source code repository on GitHub. The work on the DBpedia 3.9 release was financially supported by the European Commission through the project LOD2 - Creating Knowledge out of Linked Data (http://lod2.eu/). More information about DBpedia is found at http://dbpedia.org/About as well as in the new overview article [2] about the project. Have fun with the new DBpedia release! Cheers, Christian Bizer and Christopher Sahnwaldt [1] http://www.heikopaulheim.com/docs/iswc2013.pdf [2] http://svn.aksw.org/papers/2013/SWJ_DBpedia/public.pdf |
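The SPARQL endpoint mentioned in the announcement speaks the standard SPARQL protocol over HTTP, so any client can query it by URL-encoding the query string. A hedged sketch of how a request URL could be built; the example query and the `buildSparqlUrl` helper are illustrative, not part of the release:

```javascript
// Minimal sketch: build a SPARQL Protocol request URL for an endpoint.
// The `query` parameter carries the URL-encoded query; `format` asks
// the server for JSON results.
function buildSparqlUrl(endpoint, query) {
  return endpoint +
    '?query=' + encodeURIComponent(query) +
    '&format=' + encodeURIComponent('application/sparql-results+json');
}

const url = buildSparqlUrl(
  'http://dbpedia.org/sparql',
  'SELECT ?film WHERE { ?film a <http://dbpedia.org/ontology/Film> } LIMIT 5'
);
console.log(url);
// A browser or recent Node could then fetch(url) and read
// (await res.json()).results.bindings — omitted to keep the sketch offline.
```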
From: Kingsley I. <ki...@op...> - 2013-06-07 12:59:27
|
On 6/7/13 8:22 AM, fabio valsecchi wrote: > Hi, > > I'm developing an application that makes some ajax requests to the sparql endpoint (dbpedia.org/sparql). The problem is the following: until yesterday I used this code for retrieving the data: > > query = 'http://dbpedia.org/sparql?default-graph-uri=http%3A%2F%2Fdbpedia.org&query=........&format=text%2Fhtml&timeout=0&debug=on&output=json' > > ajaxReq = $.ajax({ > url: query, > dataType: "json", > success: function (data) { > ... > }, > error: function(e) { > ... > } > }); > > Using this code, and in particular the query above, I succeeded in extracting all the data I wanted. Now my ajax call no longer works, even though it has not changed. All requests to the SPARQL endpoint end up in an error, but if I alert the query and execute it against the endpoint directly, it works. > > Any idea about the problem? The problem is that the instance has been upgraded at the DBMS level. Clearly the CORS enabling feature hasn't been enabled. I'll have it looked into. Kingsley > > _______________________________________________ > Dbpedia-announcements mailing list > Dbp...@li... > https://lists.sourceforge.net/lists/listinfo/dbpedia-announcements > > -- Regards, Kingsley Idehen Founder & CEO OpenLink Software Company Web: http://www.openlinksw.com Personal Weblog: http://www.openlinksw.com/blog/~kidehen Twitter/Identi.ca handle: @kidehen Google+ Profile: https://plus.google.com/112399767740508618350/about LinkedIn Profile: http://www.linkedin.com/in/kidehen |
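For readers hitting the same symptom: CORS enablement means the endpoint's responses must carry an Access-Control-Allow-Origin header that the browser accepts for cross-origin XHR. A hypothetical sketch of that check against a plain header map (the `corsAllowed` helper is illustrative, not a probe of the live endpoint):

```javascript
// Sketch: does a response header map permit cross-origin requests
// from the given origin? Browsers accept either the wildcard '*'
// or an exact origin match.
function corsAllowed(headers, origin) {
  const allow = headers['access-control-allow-origin'];
  return allow === '*' || allow === origin;
}

// Endpoint with CORS enabled:
console.log(corsAllowed({ 'access-control-allow-origin': '*' }, 'http://example.org')); // true
// Endpoint where the header was lost, e.g. after a DBMS upgrade:
console.log(corsAllowed({}, 'http://example.org')); // false
```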
From: fabio v. <fab...@gm...> - 2013-06-07 12:22:27
|
Hi, I'm developing an application that makes some ajax requests to the SPARQL endpoint (dbpedia.org/sparql). The problem is the following: until yesterday I used this code for retrieving the data:

query = 'http://dbpedia.org/sparql?default-graph-uri=http%3A%2F%2Fdbpedia.org&query=........&format=text%2Fhtml&timeout=0&debug=on&output=json'

ajaxReq = $.ajax({
    url: query,
    dataType: "json",
    success: function (data) {
        ...
    },
    error: function(e) {
        ...
    }
});

Using this code, and in particular the query above, I succeeded in extracting all the data I wanted. Now my ajax call no longer works, even though it has not changed. All requests to the SPARQL endpoint end up in an error, but if I alert the query and execute it against the endpoint directly, it works. Any idea about the problem? |
From: Chris B. <ch...@bi...> - 2012-08-06 14:19:09
|
Hi all, we are happy to announce the release of DBpedia 3.8. The most important improvements of the new release compared to DBpedia 3.7 are: 1. the DBpedia 3.8 release is based on updated Wikipedia dumps dating from late May/early June 2012. 2. the DBpedia ontology is enlarged and the number of infobox to ontology mappings has risen. 3. the DBpedia internationalization has progressed and we now provide localized versions of DBpedia in even more languages.

The English version of the DBpedia 3.8 knowledge base describes 3.77 million things, out of which 2.35 million are classified in a consistent Ontology, including 764,000 persons, 573,000 places (including 387,000 populated places), 333,000 creative works (including 112,000 music albums, 72,000 films and 18,000 video games), 192,000 organizations (including 45,000 companies and 42,000 educational institutions), 202,000 species and 5,500 diseases. We provide localized versions of DBpedia in 111 languages. All these versions together describe 20.8 million things, out of which 10.5 million overlap (are interlinked) with concepts from the English DBpedia. The full DBpedia data set features labels and abstracts for 10.3 million unique things in 111 different languages; 8.0 million links to images and 24.4 million HTML links to external web pages; 27.2 million data links into external RDF data sets, 55.8 million links to Wikipedia categories, and 8.2 million YAGO categories. The dataset consists of 1.89 billion pieces of information (RDF triples), out of which 400 million were extracted from the English edition of Wikipedia, 1.46 billion were extracted from other language editions, and about 27 million are data links into external RDF data sets.

The main changes between DBpedia 3.7 and 3.8 are described below:

1. Enlarged Ontology

The DBpedia community added many new classes and properties on the mappings wiki. 
The DBpedia 3.8 ontology encompasses

* 359 classes (DBpedia 3.7: 319)
* 800 object properties (DBpedia 3.7: 750)
* 859 datatype properties (DBpedia 3.7: 791)
* 116 specialized datatype properties (DBpedia 3.7: 102)
* 45 owl:equivalentClass and 31 owl:equivalentProperty mappings to http://schema.org

2. Additional Infobox to Ontology Mappings

The editors of the mappings wiki also defined many new mappings from Wikipedia templates to DBpedia classes. For the DBpedia 3.8 extraction, we used 2347 mappings, among them Polish: 382 mappings, English: 345, German: 211, Portuguese: 207, Greek: 180, Slovenian: 170, Korean: 146, Hungarian: 111, Spanish: 107, Turkish: 91, Czech: 66, Bulgarian: 61, Catalan: 52 and Arabic: 51.

3. New local DBpedia Chapters

We are also happy to see the number of local DBpedia chapters in different countries rising. Since the 3.7 DBpedia release we welcomed the French, Italian and Japanese chapters. In addition, we expect the Dutch DBpedia chapter to go online during the next months (in cooperation with http://bibliotheek.nl/). The DBpedia chapters provide local SPARQL endpoints and dereferenceable URIs for the DBpedia data in their corresponding language. The DBpedia Internationalization page provides an overview of the current state of the DBpedia internationalization effort.

4. New and updated RDF Links into External Data Sources

We have added new RDF links pointing at resources in the following Linked Data sources: Amsterdam Museum, BBC Wildlife Finder, CORDIS, DBTune, Eurostat (Linked Statistics), GADM, LinkedGeoData, OpenEI (Open Energy Info). In addition, we have updated many of the existing RDF links pointing at other Linked Data sources.

5. New Wiktionary2RDF Extractor

We developed a DBpedia extractor that is configurable for any Wiktionary edition. It generates a comprehensive ontology about languages for use as a semantic lexical resource in linguistics. The data currently includes language, part of speech, senses with definitions, synonyms, taxonomies (hyponyms, hyperonyms, synonyms, antonyms) and translations for each lexical word. It is furthermore hosted as Linked Data and can serve as a central linking hub for LOD in linguistics. Currently available languages are English, German, French and Russian. In the next weeks we plan to add Vietnamese and Arabic. The goal is to allow the addition of languages just by configuration, without the need for programming skills, enabling collaboration as in the Mappings Wiki. For more information visit http://wiktionary.dbpedia.org/

6. Improvements to the Data Extraction Framework

* In addition to N-Triples and N-Quads, the framework was extended to write triple files in Turtle format.
* Extraction steps that looked for links between different Wikipedia editions were replaced by more powerful post-processing scripts.
* Preparation time and effort for abstract extraction is minimized; extraction time is reduced to a few milliseconds per page.
* To save file system space, the framework can compress DBpedia triple files while writing and decompress Wikipedia XML dump files while reading.
* Using some bit twiddling, we can now load all ~200 million inter-language links into a few GB of RAM and analyze them.
* Users can download the ontology and mappings from the mappings wiki and store them in files to avoid downloading them for each extraction, which takes a lot of time and makes extraction results less reproducible.
* We now use IRIs for all languages except English, which uses URIs for backwards compatibility.
* We now resolve redirects in all datasets where the object URIs are DBpedia resources.
* We check that extracted dates are valid (e.g. February never has 30 days) and that their format is valid according to their XML Schema type, e.g. xsd:gYearMonth.
* We improved the removal of HTML character references from the abstracts.
* When extracting raw infobox properties, we make sure that the predicate URI can be used in RDF/XML by appending an underscore if necessary.
* The Page IDs and Revision IDs datasets now use the DBpedia resource as subject URI, not the Wikipedia page URL.
* We use foaf:isPrimaryTopicOf instead of foaf:page for the link from a DBpedia resource to its Wikipedia page.
* New inter-language link datasets for all languages.

Accessing the DBpedia 3.8 Release

You can download the new DBpedia dataset from http://dbpedia.org/Downloads38. As usual, the dataset is also available as Linked Data and via the DBpedia SPARQL endpoint at http://dbpedia.org/sparql

Credits

Lots of thanks to

* Jona Christopher Sahnwaldt (Freie Universität Berlin, Germany) for improving the DBpedia extraction framework and for extracting the DBpedia 3.8 data sets.
* Dimitris Kontokostas (Aristotle University of Thessaloniki, Greece) for implementing the language generalizations to the extraction framework.
* Uli Zellbeck and Anja Jentzsch (Freie Universität Berlin, Germany) for generating the new and updated RDF links to external datasets using the Silk interlinking framework.
* Jonas Brekle and Sebastian Hellmann (both Universität Leipzig, Germany) for their work on the new Wiktionary2RDF extractor.
* All editors that contributed to the DBpedia ontology mappings via the Mappings Wiki.
* The whole Internationalization Committee for pushing the DBpedia internationalization forward.
* Kingsley Idehen and Patrick van Kleef (both OpenLink Software) for loading the dataset into the Virtuoso instance that serves the Linked Data view and SPARQL endpoint.
* OpenLink Software (http://www.openlinksw.com/) altogether for providing the server infrastructure for DBpedia. 
The work on the DBpedia 3.8 release was financially supported by the European Commission through the projects LOD2 - Creating Knowledge out of Linked Data (http://lod2.eu/, improvements to the extraction framework) and LATC - LOD Around the Clock (http://latc-project.eu/, creation of external RDF links). More information about DBpedia is found at http://dbpedia.org/About Have fun with the new DBpedia release! Cheers, Chris Bizer |
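One of the extraction-framework improvements in the 3.8 announcement is date validation ("February never has 30 days"). A hedged sketch of such a check, not the framework's actual implementation; it relies on JavaScript's Date rolling invalid dates over instead of failing:

```javascript
// Sketch: reject extracted dates like "February 30" before emitting
// them as typed literals. new Date(2012, 1, 30) silently becomes
// March 1, so a round-trip comparison detects invalid inputs.
function isValidDate(year, month, day) {
  const d = new Date(year, month - 1, day);  // JS months are 0-based
  return d.getFullYear() === year &&
         d.getMonth() === month - 1 &&
         d.getDate() === day;
}

console.log(isValidDate(2012, 2, 29)); // true  (2012 is a leap year)
console.log(isValidDate(2011, 2, 29)); // false (February 2011 has 28 days)
console.log(isValidDate(2012, 2, 30)); // false (February never has 30 days)
```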
From: Mohamed M. <mo...@in...> - 2012-03-07 14:06:09
|
Dear all, the AKSW [1] group is pleased to announce that a newer version of the synchronization tool of DBpedia-Live is now available for download. The synchronization tool is required to keep the other DBpedia-Live mirrors always in sync with our DBpedia-Live endpoint available at [2]. The new version has fixes for the issues reported in version 1.0, such as deleting the old triples before inserting the new ones. The tool is available for download at "http://sourceforge.net/projects/dbpintegrator/files/" [1] http://aksw.org [2] http://live.dbpedia.org/sparql -- Kind Regards Mohamed Morsey Department of Computer Science University of Leipzig |
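Conceptually, applying a DBpedia-Live changeset to a mirror boils down to removing the deleted triples before inserting the added ones, which is the ordering issue this version fixes. A hypothetical sketch with triples modeled as plain strings in a Set; a real mirror would issue the corresponding delete/insert updates against its triple store:

```javascript
// Sketch: apply one changeset to an in-memory "store".
// Order matters — delete the old triples first, then insert the new ones,
// so an updated triple is not removed right after being added.
function applyChangeset(store, removed, added) {
  for (const t of removed) store.delete(t);  // remove outdated triples
  for (const t of added) store.add(t);       // then insert replacements
  return store;
}

const store = new Set(['<s> <p> "old"', '<s> <q> "keep"']);
applyChangeset(store, ['<s> <p> "old"'], ['<s> <p> "new"']);
console.log([...store]);
```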
From: Jens L. <le...@in...> - 2011-06-24 11:23:15
|
Dear all, the AKSW [1] group is pleased to announce the official release of DBpedia Live [2]. The main objective of DBpedia is to extract structured information from Wikipedia, convert it into RDF, and make it freely available on the Web. In a nutshell, DBpedia is the Semantic Web mirror of Wikipedia.

Wikipedia users constantly revise Wikipedia articles, with updates happening almost every second. Hence, data stored in the official DBpedia endpoint can quickly become outdated, and Wikipedia articles need to be re-extracted. DBpedia Live enables such a continuous synchronization between DBpedia and Wikipedia.

The DBpedia Live framework has the following new features:

1. Migration from the previous PHP framework to the new Java/Scala DBpedia framework.
2. Support of clean abstract extraction.
3. Automatic reprocessing of all pages affected by a schema mapping change at http://mappings.dbpedia.org.
4. Automatic reprocessing of pages that have not changed for more than one month. The main objective of this feature is to ensure that any change in the DBpedia framework, e.g. the addition or change of an extractor, will eventually affect all extracted resources. It also serves as a fallback for technical problems in Wikipedia or the update stream.
5. Publication of all changesets.
6. Provision of a tool to enable other DBpedia mirrors to stay in synchronization with our DBpedia Live endpoint. The tool continuously downloads changesets and performs the corresponding changes in a specified triple store.

Important Links:

* SPARQL endpoint: http://live.dbpedia.org/sparql
* DBpedia-Live statistics: http://live.dbpedia.org/livestats
* Changesets: http://live.dbpedia.org/liveupdates
* Source code: http://dbpedia.hg.sourceforge.net/hgweb/dbpedia/extraction_framework
* Synchronization tool: http://sourceforge.net/projects/dbpintegrator/files/

Thanks a lot to Mohamed Morsey, who implemented this version of DBpedia Live, as well as to Sebastian Hellmann and Claus Stadler, who worked on its predecessor. 
We also thank our partners at the FU Berlin and OpenLink as well as the LOD2 project [3] for their support. Kind regards, Jens [1] http://aksw.org [2] http://live.dbpedia.org [3] http://lod2.eu -- Dr. Jens Lehmann AKSW/MOLE Group, Department of Computer Science, University of Leipzig Homepage: http://www.jens-lehmann.org GPG Key: http://jens-lehmann.org/jens_lehmann.asc |
From: Chris B. <ch...@bi...> - 2011-01-19 15:56:19
|
Hi Antoine,

> I was wondering: do you keep older versions of the DBpedia datasets?

We do, and they are all reachable from the DBpedia download page at http://wiki.dbpedia.org/Downloads36 Just click on the "Older Versions" links.

Cheers, Chris

> -----Original Message-----
> From: pub...@w3... [mailto:pub...@w3...] On behalf of Antoine Zimmermann
> Sent: Wednesday, 19 January 2011 16:49
> To: Chris Bizer
> Cc: dbp...@li...; dbpedia- dis...@li...; 'Semantic Web'; 'public-lod'
> Subject: Re: ANN: DBpedia 3.6 released
>
> Dear Chris and the DBpedia crew,
>
> As always, a new version of DBpedia is very good news for the Semantic Web and Linked Data.
>
> I was wondering: do you keep older versions of the DBpedia datasets? If yes, would you allow people to download older versions for research purposes?
>
> This would be very useful in order to study the dynamics of RDF data, or the dynamics of DBpedia itself. There are already papers on the dynamics of Wikipedia but I am not aware of corresponding work for DBpedia.
>
> Regards,
> AZ.
>
> On 17/01/2011 14:10, Chris Bizer wrote:
> > Hi all,
> >
> > we are happy to announce the release of DBpedia 3.6. [...]
>
> --
> Antoine Zimmermann
> Researcher at: Laboratoire d'InfoRmatique en Image et Systèmes d'information, Database Group, 7 Avenue Jean Capelle, 69621 Villeurbanne Cedex, France
> Lecturer at: Institut National des Sciences Appliquées de Lyon, 20 Avenue Albert Einstein, 69621 Villeurbanne Cedex, France
> ant...@in...
> http://zimmer.aprilfoolsreview.com/ |
From: Chris B. <ch...@bi...> - 2011-01-17 13:08:41
|
Hi all, we are happy to announce the release of DBpedia 3.6. The new release is based on Wikipedia dumps dating from October/November 2010. The new DBpedia dataset describes more than 3.5 million things, of which 1.67 million are classified in a consistent ontology, including 364,000 persons, 462,000 places, 99,000 music albums, 54,000 films, 16,500 video games, 148,000 organizations, 148,000 species and 5,200 diseases. The DBpedia dataset features labels and abstracts for 3.5 million things in up to 97 different languages; 1,850,000 links to images and 5,900,000 links to external web pages; 6,500,000 external links into other RDF datasets, and 632,000 Wikipedia categories. The dataset consists of 672 million pieces of information (RDF triples) out of which 286 million were extracted from the English edition of Wikipedia and 386 million were extracted from other language editions and links to external datasets. Along with the release of the new datasets, we are happy to announce the initial release of the DBpedia MappingTool (http://mappings.dbpedia.org/index.php/MappingTool): a graphical user interface to support the community in creating and editing mappings as well as the ontology. The new release provides the following improvements and changes compared to the DBpedia 3.5.1 release: 1. Improved DBpedia Ontology as well as improved Infobox mappings using http://mappings.dbpedia.org/. Furthermore, there are now also mappings in languages other than English. These improvements are largely due to collective work by the community. There are 13.8 million RDF statements based on mappings (11.1 million in version 3.5.1). All this data is in the /ontology/ namespace. Note that this data is of much higher quality than the Raw Infobox data in the /property/ namespace. 
Statistics of the mappings wiki on the date of release 3.6: + Mappings: + English: 315 Infobox mappings (covers 1124 templates including redirects) + Greek: 137 Infobox mappings (covers 192 templates including redirects) + Hungarian: 111 Infobox mappings (covers 151 templates including redirects) + Croatian: 36 Infobox mappings (covers 67 templates including redirects) + German: 9 Infobox mappings + Slovenian: 4 Infobox mappings + Ontology: + 272 classes + Properties: + 629 object properties + 706 datatype properties (they are all in the /datatype/ namespace) 2. Some commonly used property names changed. + Please see http://dbpedia.org/ChangeLog and http://dbpedia.org/Datasets/Properties to know which relations changed and update your applications accordingly! 3. New Datatypes for increased quality in mapping-based properties + xsd:positiveInteger, xsd:nonNegativeInteger, xsd:nonPositiveInteger, xsd:negativeInteger 4. Improved parsing coverage. + Parsing of lists of elements in Infobox property values that improves the completeness of extracted facts. + Method to deal with missing repeated links in Infoboxes that do appear somewhere else on the page. + Flag templates are parsed. + Various improvements on internationalization. 5. Improved recognition of + Wikipedia namespace identifiers. + Wikipedia language codes. + Category hierarchies. 6. Disambiguation links for acronyms (all upper-case title) are now extracted (for example, Kilobyte and Knowledge_base for "KB"): + Wikilinks consisting of multiple words: If the starting letters of the words appear in correct order (with possible gaps) and cover all acronym letters. + Wikilinks consisting of a single word: If the case-insensitive longest common subsequence with the acronym is equal to the acronym. 7. Encoding (bugfixes): + The new datasets support the complete range of Unicode code points (up to 0x10ffff). 16-bit code points start with '\u', code points larger than 16-bits start with '\U'. 
   + Commas and ampersands no longer get encoded in URIs. Please see http://dbpedia.org/URIencoding for an explanation of the DBpedia URI encoding scheme.

8. Extended datasets:
   + Thanks to Johannes Hoffart (Max-Planck-Institut für Informatik) for contributing links to YAGO2.
   + Freebase links have been updated. They now refer to mids (http://wiki.freebase.com/wiki/Machine_ID) because guids have been deprecated.

You can download the new DBpedia dataset from http://dbpedia.org/Downloads36

As usual, the dataset is also available as Linked Data and via the DBpedia SPARQL endpoint at http://dbpedia.org/sparql

Lots of thanks to:

+ All editors who contributed to the DBpedia ontology mappings via the Mappings Wiki.
+ Max Jakob (Freie Universität Berlin, Germany) for improving the DBpedia extraction framework and for extracting the new datasets.
+ Robert Isele and Anja Jentzsch (both Freie Universität Berlin, Germany) for helping Max with their expertise on the extraction framework.
+ Paul Kreis (Freie Universität Berlin, Germany) for analyzing the DBpedia data of the previous release and suggesting ways to increase quality and quantity. Some results of his work were implemented in this release.
+ Dimitris Kontokostas (Aristotle University of Thessaloniki, Greece), Jimmy O'Regan (Eolaistriu Technologies, Ireland) and José Paulo Leal (University of Porto, Portugal) for providing patches to improve the extraction framework.
+ Jens Lehmann and Sören Auer (both Universität Leipzig, Germany) for providing the new dataset via the DBpedia download server at Universität Leipzig.
+ Kingsley Idehen and Mitko Iliev (both OpenLink Software) for loading the dataset into the Virtuoso instance that serves the Linked Data view and SPARQL endpoint, and OpenLink Software (http://www.openlinksw.com/) altogether for providing the server infrastructure for DBpedia.
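The two acronym-matching rules described in item 6 above can be sketched in Python. This is a hypothetical re-implementation based only on the description in this announcement, not the actual extraction-framework code (which is written in Scala); function names are made up for illustration:

```python
def initials_match(acronym, phrase):
    """Multi-word wikilink rule: the starting letters of the words must
    contain the acronym's letters in order (gaps, i.e. skipped words,
    are allowed) and thereby cover all acronym letters."""
    initials = iter(w[0].upper() for w in phrase.split() if w)
    # 'ch in initials' consumes the iterator, so matches must occur
    # left to right -- this is the "correct order with gaps" check.
    return all(ch in initials for ch in acronym.upper())

def subsequence_match(acronym, word):
    """Single-word wikilink rule: the case-insensitive longest common
    subsequence with the acronym equals the acronym -- equivalently,
    the acronym is a subsequence of the word."""
    letters = iter(word.upper())
    return all(ch in letters for ch in acronym.upper())

print(initials_match("KB", "Knowledge base"))   # True
print(subsequence_match("KB", "Kilobyte"))      # True
```

Both checks reduce to a subsequence test, which the iterator idiom expresses in a single pass over the candidate title.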
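The '\u' / '\U' convention mentioned in item 7 is the usual N-Triples escaping scheme for characters outside printable ASCII. A minimal Python sketch of the idea (an illustration of the escape format only; whether the actual dumps escape every non-ASCII character, and with which hex casing, is an assumption here):

```python
def escape_char(ch):
    """Escape a single character N-Triples style: BMP code points
    (<= 0xFFFF) use \\uXXXX, supplementary code points use \\UXXXXXXXX,
    and printable ASCII is left untouched."""
    cp = ord(ch)
    if cp > 0xFFFF:
        return '\\U%08X' % cp   # beyond the 16-bit range: 8 hex digits
    if cp > 0x7E or cp < 0x20:
        return '\\u%04X' % cp   # non-ASCII or control char in the BMP
    return ch                   # printable ASCII stays as-is

print(escape_char('A'))             # A
print(escape_char('\u00e9'))        # \u00E9
print(escape_char('\U0001d49c'))    # \U0001D49C
```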
The work on the new release was financially supported by:

+ Neofonie GmbH, a Berlin-based company offering leading technologies in the areas of Web search, social media and mobile applications (http://www.neofonie.de/).
+ The European Commission through the project LOD2 - Creating Knowledge out of Linked Data (http://lod2.eu/).
+ Vulcan Inc. as part of its Project Halo (http://www.projecthalo.com/). Vulcan Inc. creates and advances a variety of world-class endeavors and high-impact initiatives that change and improve the way we live, learn and do business (http://www.vulcan.com/).

More information about DBpedia can be found at http://dbpedia.org/About

Have fun with the new dataset! The whole DBpedia team also congratulates Wikipedia on its 10th birthday, which was this weekend!

Cheers,

Chris Bizer

--
Prof. Dr. Christian Bizer
Web-based Systems Group
Freie Universität Berlin
+49 30 838 55509
http://www.bizer.de
ch...@bi...
From: Kingsley I. <ki...@op...> - 2010-04-16 19:00:25
ba...@go... wrote:

> On Wed, 14 Apr 2010 19:48:53 +0300, Kingsley Idehen
> <ki...@op...> wrote:
>
> > ba...@go... wrote:
> >>
> >> > On Tue, 2010-04-13 at 21:58 +0300, ba...@go... wrote:
> >> >
> >> >> A fact of my experience for many years:
> >> >> the homepage of my grandma is more accessible than the flagship(!)
> >> >> of 'linked data', dbpedia.org...
> >> >>
> >> >> Anyone who has used the endpoint dbpedia.org/sparql intensively
> >> >> knows what I mean:
> >> >>
> >> >> After one or two hours or so, it hangs. I try dbpedia.org with
> >> >> Firefox, Opera and IE; it hangs there as well. After 5 minutes I
> >> >> try dbpedia.org again and I see the page; for dbpedia.org/sparql
> >> >> I put in my simple query again, and it is ok.
> >> >>
> >> >> For years it has been the same story, in the same rhythm.
> >>
> >> > On Wed, 14 Apr 2010 13:21:07 +0300,
> >> > Ivan Mikhailov <imi...@op...> wrote:
> >> >
> >> > They say the everlasting problem for professional cosmetics is the
> >> > growing quality of the optics and media used for movies; celebrities
> >> > should continue to look perfect. But I bet you have never paid
> >> > attention to that fact while looking at the final result.
> >> >
> >> > Similarly, growing database size, growing hit rate and growing
> >> > complexity of queries are not obviously visible from the outside,
> >> > but they turn the hosting into a race. We are improving the
> >> > underlying RDBMS as fast as we can just to keep the service from a
> >> > total halt. One might wish to provide a better service on their own
> >> > RDBMS and thus make good advertisement, but nobody else wants to do
> >> > that _and_ can do that, so we are alone under this load.
> >> >
> >> > If you wish, you may help us with hosting and/or equipment, or
> >> > simply set up a mirror site, and we would be glad to redirect some
> >> > part of the load to your cluster. Even an inexpensive $20000 mirror
> >> > would help to some degree.
> >>
> >> I understand you as follows:
> >>
> >> a.) There is enough know-how for running heavy SPARQL endpoints.
> >> b.) But there is not enough money.
> >>
> >> I cannot help in the way you think, because we work in different
> >> areas.
> >>
> >> I can access different SPARQL endpoints with different lists of
> >> queries with a mouse click under a single user interface (surfing
> >> RDF data with a 'similar' ontology structure), and I can write
> >> comments to you or to fitting mailing lists, as I did yesterday;
> >> that's all, sorry.
> >>
> >> Thanks, baran.
> >>
> >> PS: Also thanks to Kingsley Idehen for interpreting 'HP of grandma'
> >> as a friendly metaphor.
>
> > Baran,
> >
> > We solve problems.
> >
> > What's your problem? Be as clear as possible.

Baran,

> My first posting (see above on top) was clear to me. In 3-4 months I
> promise to write here whether there is, for me, a sustainable
> enhancement or not; it is very easy to verify over time...
>
> Now I notice, for example, that <http://dbpedia.org/property/abstract>
> must suddenly be <http://dbpedia.org/ontology/abstract>; otherwise
> there is no response...

Ah! Certainly a documentation issue re. the changes. This should
certainly be communicated, absolutely!

> OK, why not; if it is for a better ontology structure, I adapt my
> client-side stuff and anticipate other unexpected changes.
>
> But I also have other detailed questions, for example about literal
> indexing (I'm not sure whether this is my problem) or simple
> inferences like owl:TransitiveProperty (from SKOS in DBpedia, or
> not?), etc...

Interesting that you bring this up, as this is one of those SPARQL
extension issues. Yes, Virtuoso handles transitivity, but you need to
know how to enable its Inference Rules and Backward Chaining Reasoner.
In this case, the DBpedia [1] or Virtuoso [2] forums will do.

> Is there an appropriate public(!) mailing list for all who are
> generally interested in similar things, one that (last but not least)
> 'includes the DBpedia instance and SPARQL endpoint as being part of
> what constitutes the DBpedia Team or DBpedia Project'?

Links:

1. https://lists.sourceforge.net/lists/listinfo/dbpedia-discussion -- DBpedia mailing list
2. http://sourceforge.net/mailarchive/forum.php?forum_name=virtuoso-users -- Virtuoso user list (most active; covers user and developer issues)
3. http://virtuoso.openlinksw.com/dataspace/dav/wiki/Main/VOSMailingLists -- general mailing list info page
4. http://www.mail-archive.com/dbp...@li.../msg01208.html -- old post about transitivity and DBpedia
5. http://sourceforge.net/mailarchive/message.php?msg_name=4B72C32E.8050400%40openlinksw.com -- an old post using the Relationship Ontology against DBpedia
6. http://bit.ly/cvAC9u -- collection of examples that use DBpedia for a variety of queries involving transitivity etc.

> Thanks, baran.
> --
> Using Opera's revolutionary e-mail client: http://www.opera.com/mail/

--
Regards,

Kingsley Idehen
President & CEO
OpenLink Software
Web: http://www.openlinksw.com
Weblog: http://www.openlinksw.com/blog/~kidehen
Twitter/Identi.ca: kidehen
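For readers wondering what an owl:TransitiveProperty entailment, such as the SKOS question in the thread above, actually amounts to: here is a small illustrative Python sketch that materializes the entailed triples. The place names are made-up example data standing in for e.g. skos:broader links; Virtuoso's backward-chaining reasoner computes this at query time instead of materializing it:

```python
def transitive_closure(pairs):
    """Materialize the entailments of a transitive property:
    if (a, b) and (b, c) hold, then (a, c) holds as well.
    Iterates to a fixpoint over a set of (subject, object) pairs."""
    closure = set(pairs)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closure):
            for (c, d) in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

# Made-up broader-than links; in DBpedia these would be RDF triples.
links = {("Leipzig", "Saxony"), ("Saxony", "Germany"), ("Germany", "Europe")}
print(("Leipzig", "Europe") in transitive_closure(links))  # True
```

The naive fixpoint loop is quadratic per pass and only suitable for toy data, which is exactly why large endpoints prefer query-time reasoning over materialization.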