Link to original content: https://api.crossref.org/works/10.1609/AAAI.V38I16.29730
{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2024,3,26]],"date-time":"2024-03-26T01:56:30Z","timestamp":1711418190669},"reference-count":0,"publisher":"Association for the Advancement of Artificial Intelligence (AAAI)","issue":"16","content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["AAAI"],"abstract":"Event extraction is an important task in natural language processing that focuses on mining event-related information from unstructured text. Despite considerable advancements, it is still challenging to achieve satisfactory performance in this task, and issues like data scarcity and imbalance obstruct progress. In this paper, we introduce an innovative approach where we employ Large Language Models (LLMs) as expert annotators for event extraction. We strategically include sample data from the training dataset in the prompt as a reference, ensuring alignment between the data distribution of LLM-generated samples and that of the benchmark dataset. This enables us to craft an augmented dataset that complements existing benchmarks, alleviating the challenges of data imbalance and scarcity and thereby enhancing the performance of fine-tuned models. We conducted extensive experiments to validate the efficacy of our proposed method, and we believe that this approach holds great potential for propelling the development and application of more advanced and reliable event extraction systems in real-world scenarios.<\/jats:p>","DOI":"10.1609\/aaai.v38i16.29730","type":"journal-article","created":{"date-parts":[[2024,3,25]],"date-time":"2024-03-25T11:53:39Z","timestamp":1711367619000},"page":"17772-17780","source":"Crossref","is-referenced-by-count":0,"title":["Is a Large Language Model a Good Annotator for Event Extraction?"],"prefix":"10.1609","volume":"38","author":[{"given":"Ruirui","family":"Chen","sequence":"first","affiliation":[]},{"given":"Chengwei","family":"Qin","sequence":"additional","affiliation":[]},{"given":"Weifeng","family":"Jiang","sequence":"additional","affiliation":[]},{"given":"Dongkyu","family":"Choi","sequence":"additional","affiliation":[]}],"member":"9382","published-online":{"date-parts":[[2024,3,24]]},"container-title":["Proceedings of the AAAI Conference on Artificial Intelligence"],"original-title":[],"link":[{"URL":"https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/download\/29730\/31254","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/download\/29730\/31255","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/download\/29730\/31254","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,3,25]],"date-time":"2024-03-25T11:53:40Z","timestamp":1711367620000},"score":1,"resource":{"primary":{"URL":"https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/view\/29730"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,3,24]]},"references-count":0,"journal-issue":{"issue":"16","published-online":{"date-parts":[[2024,3,25]]}},"URL":"https:\/\/doi.org\/10.1609\/aaai.v38i16.29730","relation":{},"ISSN":["2374-3468","2159-5399"],"issn-type":[{"value":"2374-3468","type":"electronic"},{"value":"2159-5399","type":"print"}],"subject":[],"published":{"date-parts":[[2024,3,24]]}}}