1 Introduction

The use of digital technologies in the twenty-first century is increasingly prevalent and has permeated all aspects of human livelihood. As such, individuals now require digital skills to better perform day-to-day activities in their personal and professional lives. Digital literacy is considered the driver of individuals' lives in this digital society, from carrying out personal tasks to pursuing education and employment. Digital literacy encompasses knowing how to use a computer, search online, retrieve information critically, evaluate it, and transform it into knowledge (Buckingham, 2006). Digital innovations have changed the way tasks are performed: for example, traveling with an ePass instead of cash, moving from monochrome to smart televisions, from simple voice-call mobile phones to smartphones, and from in-store to online shopping (Radovanovic et al., 2020; Hongthong & Temdee, 2018; Walton, 2016). As of July 2020, the global digital population had reached 4.57 billion users, encompassing 59% of the world's population (statista.com, 2020). Several questions arise from these statistics (Shields & Chugh, 2018): Do people know what it means to be digitally literate? Do people have the relevant skills to use digital technology effectively? Do people know how to find and evaluate information? Do people have the relevant knowledge to communicate information safely on digital platforms? Do people understand messages conveyed through digital images? Are people aware of the ethical and legal issues surrounding the use of digital technologies and platforms? These questions represent the different aspects of digital literacy competency that need to be addressed when evaluating the digital literacy of individuals.

The fear of being left behind in the digital age has pushed many individuals to acquire digital fluency (Meyers et al., 2013, as cited in Pangrazio, 2016). Digital technology now demands that all individuals living in the digital society have digital fluency, with a higher level expected of those who are part of the workforce. Many sectors, such as the government sector (Nam, 2014; The Fijian Government, 2019; Wescott, 2015), the health sector (Farahat et al., 2018; Haluza & Jungwirth, 2018), the business sector (Dipartimento Di Ingegneria Gestionale, 2018; Grandhi & Chugh, 2012; Markham et al., 2020), and the education sector (Dolnicar et al., 2020; Sharma & Chief, 2016; Veletsianos, 2010), demonstrate the increasing role of technology-dependent operations in manifold ways. Recently, dependency on digital technologies and platforms has increased globally due to the COVID-19 pandemic (Beaunoyer et al., 2020). In the digital age, the need for digital literacy is worldwide, and the Pacific Islands are no exception. While numerous studies on individuals' digital literacy have been conducted globally, advocacy of digital literacy in the Pacific Islands has only just begun. Researchers have worked on different aspects of digital literacy, such as redefining and conceptualizing the digital literacy framework, developing tools to measure digital literacy, and creating training programmes to improve individuals' digital literacy (Falloon, 2020; Singh et al., 2019; Hongthong & Temdee, 2018). In the Pacific Islands, only a few studies on digital literacy have been conducted, and to date only one tool developed by Pacific Islands researchers measures the digital literacy competencies of individuals (Reddy et al., 2020a).

This research study presents a newly designed and developed digital literacy intervention program (DLIP) for individuals in the Pacific Islands, which can also be used by anyone globally. With over 62% of individuals in the Pacific Islands on digital platforms (statista.com, 2020), individuals need to be well trained so that they have the necessary digital skills. According to Sharma et al. (2018) and Sharma and Chief (2016), higher education institutions (HEIs) in the Pacific Islands should embrace digital literacy practices to deepen student learning experiences and narrow the digital divide in the region. There have been initiatives by Pacific Islands governments and researchers to advocate digital literacy for Pacific Islands individuals. One such initiative is the development of a digital literacy scale known as digilitFj by the authors (Reddy et al., 2020a). The scale uses the six literacies of digital literacy to measure individuals' digital competencies. Each literacy has a set of competencies against which individuals gauge themselves, and their digital competencies are evaluated based on their responses; the scale also indicates which specific skills students are lacking. The DLIP complements digilitFj, and together the two aim to measure and improve individuals' digital literacy skills. The DLIP consists of six online modules, each with its own theoretical component and a self-testing component. The modules enhance engagement and interaction through gamification and rewards such as online badges, and individuals who complete all the modules earn a certificate.

The DLIP has been tested for its reliability using the Kuder-Richardson-20 (KR-20) test and for its validity using Spearman's correlation test in the Statistical Package for the Social Sciences (SPSS). According to Brien et al. (2019) and Berame et al. (2017), a reliability test is carried out to reveal the consistency of measurement, and the KR-20 test is often used to determine the reliability of a measuring instrument. Construct validity evaluates the measuring ability of a tool, that is, whether it measures what it is supposed to measure, and is usually assessed using Spearman's correlation test in SPSS (Incebacak & Ersoy, 2017). To measure the effectiveness of the DLIP, the effect size was evaluated. Effect size is a quantitative measure of a study's effect; the larger the value, the stronger the effect (Gignac & Szodorai, 2016; Meyvis & Osselaer, 2018). Researchers state that Cohen's d is an appropriate measure of effect size, where a value of 0.2 indicates a small effect, 0.5 a medium effect and 0.8 a large effect of the intervention (Albers & Lakens, 2018; Gignac & Szodorai, 2016; White et al., 2020).

Although digital literacy measuring instruments have been developed, digital literacy interventions remain few. The current study introduces a newly developed digital literacy intervention that aims to improve individuals' digital literacy skills. Being the first of its kind, the DLIP intends to broaden an individual's knowledge of the different digital competencies by translating theory into practice. This paper explores the design details, implementation, framework, characteristics, validity, reliability and effectiveness of the DLIP.

The paper is organised as follows: Section 2 discusses previous interventions developed to improve individuals' literacies; Section 3 describes the methodology; Section 4 gives a detailed description of the digital literacy intervention developed; Section 5 presents the reliability and validity results with related discussion; and Section 6 concludes the paper.

2 Literature review

2.1 Digital literacy definition

Paul Gilster first defined the term digital literacy in 1997 as the ability to understand and use information in multiple formats from a wide range of sources when it is presented via computers (Harvey, 2000; Reddy et al., 2020a; Toquero, 2020; Hongthong & Temdee, 2018; Walton, 2016). With the development, availability and accessibility of new technologies, the definition of digital literacy or digital competencies has changed, and according to researchers, the term will continue to change as technologies evolve (Polizzi, 2020; Radovanovic et al., 2020; Liu et al., 2020; Singh et al., 2019; Neilson, 2018; Spante et al., 2018; Lynch, 2017). Some recent definitions of digital literacy are:

i. ability to access, analyse, evaluate and produce messages in a variety of forms (Polizzi, 2020)

ii. set of skills to access the internet; find, manage and edit digital information; join in communications; and otherwise engage with an online information and communication network (Falloon, 2020)

iii. competencies in finding, processing, producing, and communicating information (Radovanovic et al., 2020)

iv. combination of technical, procedural, cognitive and emotionally social skills (Liu et al., 2020)

v. skills needed for the information society, that is, the use of electronic equipment for personal and social interactions and educational and business needs (Reddy et al., 2020a).

In the digital age, digital literacy is an essential component of twenty-first-century skills. Literature shows that digital literacy is an amalgamation of selected literacies that form its essential components and define the digital competencies of individuals (Radovanovic et al., 2020; Perdana et al., 2019; Chetty et al., 2017). Researchers have identified key literacies associated with digital literacy, and these depend on the nature of the studies carried out (Falloon, 2020); see Table 1. Because commonalities exist between the literacies, some researchers combine them when framing users' understanding of digital literacy.

Table 1  Selected examples of associated literacies to define digital literacy

The research study adopted Covello's digital literacy framework to measure individuals' digital literacy competencies, since the authors believe the framework is most suitable for an educational setting. The six literacies in the framework cover the appropriate skills needed to define a digitally literate individual in the twenty-first century. For the current study, a digitally literate individual has comprehensive knowledge of ICT and a variety of technical and cognitive skills. Furthermore, the individual understands the relationship between technology and the life-long learning journey, participates in civic society and contributes to an informed society while following all ethical rules and regulations (Reddy et al., 2020a; Tomczyk, 2019).

2.2 Tools developed to measure and improve digital literacy competencies

Individuals living in the digital society now actively use various digital technologies and platforms; thus, a measure of their digital competencies is necessary. Assessing digital competencies and taking the necessary steps to remediate lacking digital skills will ensure that individuals participate ethically, authentically and more effectively (Asrizal et al., 2018; Tomczyk, 2019). Researchers have developed many tools to measure individuals' digital competencies using different frameworks of digital literacy. Prior research shows that the tools developed for measuring digital literacy have mostly been piloted in the education sector. According to Aristizába et al. (2019), Kalolo (2019), Premuzic and Frankiewicz (2019) and Ken (2018), the education sector is responsible for preparing literate individuals for the work environment, who then contribute towards the development of their society and country. Moreover, digital technologies and platforms are driving education in the digital society, and education institutes have successfully integrated them to facilitate the teaching and learning process (Nikou & Aavakare, 2021; Aristizába et al., 2019; Premuzic & Frankiewicz, 2019; Ken, 2018). The integration of new digital technologies has introduced new digital tools for learning; hence, students require new and relevant skills to use these technologies in order to be successful and attain life-long learning skills (Dneprovskaya et al., 2018; Ken, 2018).

Game-based learning is one such tool that is prevalent and highly successful in the twenty-first-century education system. The notion is that games added to learning make it more engaging, interactive and effective (Cahill, 2020). Badges, leaderboards and quizzes are common forms of grading schemes used in such learning environments. Literature also shows that game-based learning offers motivating and engaging experiences, interactive learning environments and collaborative learning experiences (Anastasiadis et al., 2018; Arnab & Clarke, 2017; Huizengaa et al., 2017; Nikou & Aavakare, 2021). However, the growing acceptance of digital games in mainstream education requires learners to have the relevant skills to adapt to game-based learning. Facilitating effective learning behaviours during the gaming process also remains an important and challenging issue; therefore, facilitators and educators need relevant skills to properly guide their learners (Nikou & Aavakare, 2021; Sung & Hwang, 2017).

Moreover, researchers have identified a digital fluency gap between learners and the new digital-oriented education system (Pangrazio, 2016). Therefore, researchers developed measuring instruments to measure students' digital fluency and competencies (Dios et al., 2016; Üstündağ et al., 2017; Perdana et al., 2019), and some further developed programs to improve the digital competencies of individuals (Molnar et al., 2020; Premuzic & Frankiewicz, 2019). Literature shows that the digital fluency gap still exists, as each researcher perceives the digital literacy framework differently; innovations in technology and the demand for new skills also contribute to it. Therefore, researchers are still developing new tools and remediations to close the gap of digital fluency.

2.3 Digital literacy measuring instruments developed

In the literature, several digital literacy measuring instruments have been developed; one can refer to the work by Perdana et al. (2016), Dios et al. (2016), Üstündağ et al. (2017), Liza and Andriyanti (2019) and Reddy et al. (2020a). Having a measuring instrument enables stakeholders to measure the digital competencies of individuals and then formulate remediation strategies to improve them. Literature shows that the majority of digital literacy measuring instruments and remediation programmes have been developed and piloted on students. Researchers have stated that students are the target group because they are the future of society, and developing their digital skills will enable them to contribute effectively towards their society and nation-building (Molnar et al., 2020; Premuzic & Frankiewicz, 2019; Ken, 2018; Üstündağ et al., 2017).

According to Reddy et al. (2020a) and Asrizal et al. (2018), remediation instruments are important because they improve student learning by developing students' knowledge, skills, attitudes and values. Premuzic and Frankiewicz (2019) state that remediation instruments promote efficiency and improve student performance, while Molnar et al. (2020) state that remediation instruments directly impact students' academic achievement. Being academically successful enables individuals to be employed, have stable employment, and have more employment opportunities (Premuzic & Frankiewicz, 2019).

Many organisations and educators have developed remediation instruments to improve the digital literacy of students and individuals, for example:

i. Digital literacy programs in Sierra Leone, which include digital literacy vocational training and workshops: https://www.thevillagelink.org/digital-literacy

ii. The Internet Literacy Program (ILP), an educational program: https://projectchild.ngo/our-program/internet-literacy-program/

iii. Microsoft online digital literacy courses, which consist of different modules of digital literacy: https://www.microsoft.com/en-us/digitalliteracy

iv. Digital literacy education by Google for Education: https://applieddigitalskills.withgoogle.com/c/en/curriculum.html

v. The digital literacy assessment developed by Literacy Minnesota, which assesses digital literacy and provides modules to improve the digital literacy of individuals: https://www.digitalliteracyassessment.org/

vi. The digital skills assessment by Teknimedia, which provides materials on computer skills, Microsoft applications and internet use, followed by a test: https://www.teknimedia.com/html/digital-skills-assessment.html

vii. The Education & Skills Online assessment tool, provided through the joint effort of the OECD and the European Union, which looks into improving numeracy, literacy and skills in technology-rich environments: https://static1.squarespace.com/static/51bb74b8e4b0139570ddf020/t/52276bd2e4b0ae4ae05ae899/1378315218944/Education+and+Skills+Online.pdf

viii. The digital literacy assessment by Learning.com, which improves students' digital literacy through interactive assessments: https://www.learning.com/dla/

From the literature, it is noted that researchers have associated different literacies with digital literacy. However, the remediation interventions or literacy programs available on the web do not reflect all the literacies identified by researchers. The interventions are provided either for one module only or for a few selected modules, such as information literacy, visual literacy or internet literacy. The remediations for digital literacy consist of computer skills, internet ethics and security, and Microsoft and Windows training. Such interventions lack completeness, since necessary or relevant skills that a digitally literate individual must have are invariably missing. The absence of these essential competencies creates a gap in the digital competencies of twenty-first-century individuals, or netizens.

Therefore, to bridge the existing gap, this research study develops a new digital literacy intervention program (DLIP) that encompasses the relevant literacies associated with digital literacy. A comprehensive account of the DLIP is provided in the following sections.

2.4 Tests to measure the validity and reliability of instruments developed

The validity of an intervention or instrument determines the accuracy of the measurement, the comprehensiveness of the tool in terms of language and technical defects, and whether it is suitable for students' developmental characteristics (Incebacak & Ersoy, 2017). Tests are often validated by correlating test scores against some outside criterion, which may be scores on tests of accepted validity, successful performance or behaviour, or the expert judgment of recognized authorities (Adeleke & Joshua, 2015). Reliability and validity tests specific to intervention instruments are scarce in the literature; therefore, the paper explored different reliability and validity tests that could be applied to the newly developed DLIP. The most common reliability test conducted for interventions or programs developed for the educational setting is the KR-20 test (Sener & Tas, 2017). The KR-20 test is suitable for determining the reliability coefficient of tests in which each item is parallel to the others, for example, giving one point to a correct answer and zero points to a wrong answer or unanswered question (Berame et al., 2017; Brien et al., 2019; Quaigrain & Arhin, 2017; Sener & Tas, 2017). A KR-20 value greater than 0.7 and closer to 1 means the developed tool is reliable (Brien et al., 2019). Construct validity is commonly assessed using Spearman's correlation test in SPSS (Incebacak & Ersoy, 2017); according to Brien et al. (2019), a correlation value greater than 0.3 means the intervention is valid.

Furthermore, the effectiveness of the digital literacy intervention was also evaluated. Many studies have measured effectiveness by calculating the effect size (Hansen, 2020; Schippers et al., 2017; Sebastian & Nelms, 2017; White et al., 2020). According to McLeod (2019), statistical significance does not tell whether an intervention was effective; it is the measure of effect size that describes an intervention's effectiveness. Effect size has been defined as a quantitative reflection of the magnitude of an intervention undertaken to improve a phenomenon (Albers & Lakens, 2018; Gignac & Szodorai, 2016). Cohen's d is an appropriate effect size for comparing two means (McLeod, 2019). Literature shows that the Cohen's d value must be greater than 0.2 for an intervention to be effective; otherwise, the difference is trivial, even if it is statistically significant (Hansen, 2020; McLeod, 2019; Schippers et al., 2017; Sebastian & Nelms, 2017; White et al., 2020).

3 Methodology

An exploratory research methodology was used to carry out the study. According to Reiter (2017), exploratory research investigates a problem that has not been thoroughly studied and is conducted to better understand the existing problem. The current study focuses on the issue of the digital divide in the Pacific Islands, which exists due to the lack of relevant digital literacy skills. Before the development of digilitFj, there was no tool in the Pacific Islands that measured the digital literacy of students and other individuals. Reddy et al. (2020a) designed a digital literacy scale named digilitFj, which successfully measures the digital literacy competencies of individuals and reports aggregate scores for each literacy making up digital literacy. However, to the best of the authors' knowledge, there is no online remediation or intervention program that improves the digital literacy of individuals. Hence, the DLIP was designed to fill this gap by offering an array of remediation modules to improve specific digital competencies of individuals. The remediation modules developed map to the digital literacy framework identified for the study of digital literacy in the Pacific Islands. The authors designed the remediation content and the quiz questions in the modules according to the relevant skills required for each literacy in the framework.

3.1 Development of DLIP

The type of intervention used for the study can be classified as an educational intervention. Educational interventions are used to support students in acquiring the skills and knowledge they need to access education (Lim & Shorey, 2019; Verville et al., 2020). With the rapid proliferation of technology into the education system, technology-based educational interventions have become popular (Escueta et al., 2017). Selected examples include: eReaders and tablets to support early literacy; interactive radio instruction (IRI); mobiles for classroom audio; teacher development videos; and mobiles for classroom video. The DLIP is also a technology-based educational intervention, developed to support the digital literacy of students. It follows a game-based intervention design process, as shown in Fig. 1: the theoretical concepts and educational pedagogies are correlated and infused with relevant attributes of gameplay to construct a holistic methodology for developing a game-based intervention. IM in Fig. 1 represents the intervention mapping approach, which guides the design, implementation and evaluation of intervention programmes (Arnab & Clarke, 2017).

Fig. 1 Game-based intervention design process, adopted from Arnab and Clarke (2017)

This design process was adopted from Arnab and Clarke (2017) to motivate the educational and instructional content.

3.2 Framework for DLIP

Figure 2 shows the framework used for the DLIP. The game-based intervention design process was used as a reference to develop the framework for this study. The steps laid out by Arnab and Clarke (2017) for a game-based intervention were followed, and the six phases given in Fig. 2 were used for its development. The four-dimensional framework for educational pedagogy was also integrated to ensure that the intervention developed was user-friendly.

Fig. 2 Framework for digital literacy intervention

The intervention was developed using the Ionic framework v4. Ionic is an open-source framework for developing mobile and progressive web apps (PWAs); it provides a rich library of front-end building blocks and UI components that look and feel consistent on any platform or device, and it encapsulates technologies such as HTML, CSS and TypeScript (Govier et al., 2020). The web application comprises three parts: the user interface (UI), the controllers, and the API services. The front end of the DLIP is developed with Ionic UI components and HTML, together with CSS for styling. Each page in the web application has a controller where the functions are defined; these functions call the Application Programming Interface (API) service. Through the API service, the web application calls the appropriate REST API functions hosted on the server. These REST API functions are protected using JSON Web Tokens (JWT) and connect to the database to pull and store data for the web application. The database stores each user's score, points, badges and medals. Ionic keeps the web application responsive, ensuring it runs and displays appropriately on mobile devices.

The REST API functions were developed using the Slim Framework, a PHP micro-framework that helps write simple and lightweight functions for fast execution. The database used is MySQL, an open-source relational database. Implementing the REST API will also make it easier to develop an app in the future. Furthermore, this three-tier design enables great flexibility by allowing updates to a specific independent section of the application. Each module the web application delivers is constructed in Articulate's Storyline 360 software. Storyline has powerful controls and an easy-to-use interface to help construct and style online courses (Indasari & Budiyanto, 2019; Joss et al., 2019; Wilechansky et al., 2016). Additionally, Storyline ensures that the courses are responsive, dynamically adapting to the different screen sizes of mobile devices. Many researchers have used Articulate's Storyline 360 software to develop interactive courses and modules due to its ease of implementation and usage (Indasari & Budiyanto, 2019); therefore, it was used to develop the intervention for digital literacy. Storyline 360 also supports scripts that call the API functions to store users' scores. Figure 3 shows the overall architecture of the system, and a sketch of such a client-side call follows the figure.

Fig. 3 System architecture for the intervention
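
As an illustration of the three-tier flow just described (UI to controller to API service to JWT-protected REST function), here is a minimal TypeScript sketch of a client-side API call. The endpoint URL, payload shape and function name are assumptions for illustration, not the DLIP's actual code.

```typescript
// Hypothetical client-side API service for the 3-tier design described above.
const API_BASE = "https://example.org/dlip/api"; // placeholder server URL

interface ModuleScore {
  module: string; // e.g. "information-literacy"
  points: number; // quiz score out of ten
  coins: number;  // coins collected in the theory component
}

// POST a score to a REST endpoint protected by a JSON Web Token (JWT).
async function saveModuleScore(token: string, score: ModuleScore): Promise<void> {
  const response = await fetch(`${API_BASE}/scores`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${token}`, // the JWT guards the REST function
    },
    body: JSON.stringify(score),
  });
  if (!response.ok) {
    throw new Error(`Failed to save score: ${response.status}`);
  }
}
```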

The three main activities in the DLIP are logging in, completing a module and retrieving the certificate. Figures 4, 5 and 6 show the sequence diagrams for these three activities, which are described as follows:

i. Logging in

Figure 4 shows how a user logs into the system. The DLIP is connected to digilitFj. A user first has to complete the survey using digilitFj and then log into the DLIP using the same email provided before attempting the survey. Once the email is entered, the system checks whether the survey has been completed. If the survey is done, then the user is prompted to enter a password and is directed to the homepage. The user has to remember the login details for subsequent logins (a minimal sketch of this check appears after this list).

Fig. 4 User login sequence diagram for DLIP

Fig. 5 Module completion sequence diagram for DLIP

Fig. 6 Sequence diagram for retrieving the certificate from DLIP

ii. Completing the module

Once the user logs in, he/she is directed to the dashboard of the DLIP, which shows the six modules to be completed. The user has to complete all six modules by attempting the theory content and the quiz. The user collects coins in the theory component, and the numbers are updated in the database. Once the user attempts the quiz, the points are updated in the database. The points for each module are updated, the overall points for all the modules are calculated, and the appropriate awards are set. The user can view the results on the results page.

iii. Certification

Figure 6 shows the sequence diagram for retrieving the certificate. The certificate can be retrieved once the user has completed all the modules. The user then clicks on the certificate tab, which prompts the user to enter his/her name. A request for the certificate is made, the system retrieves the appropriate details of the user, and a certificate is generated. A copy can also be downloaded for printing.
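
The following is a minimal TypeScript sketch of the login check in activity (i), assuming a hypothetical survey-status endpoint; the URL, route names and response shapes are illustrative, not the DLIP's actual API.

```typescript
// Hypothetical sketch of the login flow: the DLIP verifies that the
// digilitFj survey is complete for this email before authenticating.
const API_BASE = "https://example.org/dlip/api"; // placeholder URL

async function login(email: string, password: string): Promise<string | null> {
  // Step 1: has this email completed the digilitFj survey?
  const status = await fetch(
    `${API_BASE}/survey-status?email=${encodeURIComponent(email)}`,
  );
  const { completed } = (await status.json()) as { completed: boolean };
  if (!completed) {
    console.log("Please complete the digilitFj survey first.");
    return null;
  }

  // Step 2: authenticate; on success the user is routed to the dashboard.
  const auth = await fetch(`${API_BASE}/login`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ email, password }),
  });
  if (!auth.ok) return null;
  const { token } = (await auth.json()) as { token: string };
  return token; // JWT used as the Bearer token for later API calls
}
```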

4 Description of the digital literacy intervention

The framework for digital literacy was adopted from Covello (2010), and digital literacy has been redefined for this study as an individual's ability to find and evaluate information, use the information effectively, create new content using the information, and share and communicate the newly created information using appropriate digital technologies (Reddy et al., 2020a). According to Covello (2010) and Reddy et al. (2020a), six other literacies constitute digital literacy, particularly when evaluating digital literacy in an educational setting.

Figure 7 shows the framework used for the current study. Each literacy definition has been modified to include current trends in the everyday use of technologies.

Fig. 7 Framework for digital literacy

The DLIP consists of six modules, as defined by Reddy et al. (2020a):

i. Information Literacy: using digital technology to find, locate, analyse and synthesise resources; evaluating the credibility of these resources; applying appropriate citation techniques; abiding by the legal and ethical issues surrounding the use of these resources; and formulating research questions in an accurate, effective and efficient manner.

ii. Computer Literacy: an understanding of how to use computers, digital technologies and their applications for practical use.

iii. Media Literacy: the ability to use digital technologies to access, analyse, evaluate and communicate information on a variety of digital platforms.

iv. Communication Literacy: using digital technologies to communicate effectively as individuals and work collaboratively in groups, using publishing technologies, the Internet and Web 2.0 tools and technologies.

v. Visual Literacy: the ability to use digital technology to 'read', interpret and understand information presented in pictorial or graphic images, communicate the information, and convert it into visual representations.

vi. Technological Literacy: the ability to use digital technology to improve learning, productivity and performance.

Each of the six modules is divided into two parts: learning through theory, then testing the knowledge attained through the module quiz. The learning content covers relevant knowledge and skills for each literacy through theoretical notes adapted from the literature and short videos specifically created for each module. The modules are enhanced with gamification to ensure that students read the theoretical notes, for example, by collecting coins while reading. Students are awarded online badges for the coins they collect in each module, and the criteria for the badges are listed in Table 2. The module's knowledge-testing component consists of a game-based quiz marked out of ten, with the ten questions randomly generated from the module's question bank. A student has to score the passing mark of five to pass the module, and module badges are awarded at the end of the module depending on the score; the badge criteria are listed in Table 3. Students are also given the chance to re-attempt the modules to improve their scores (a hypothetical sketch of this quiz and badge logic follows Tables 2 and 3).

Table 2 Online badge for the coins
Table 3 Online badge for the module
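
To make the quiz mechanics concrete (ten questions drawn at random from a question bank, a pass mark of five out of ten, and a badge awarded by score), the following TypeScript sketch shows one plausible implementation. The badge tiers are hypothetical placeholders; the actual criteria are those in Tables 2 and 3.

```typescript
// Hypothetical sketch of the module quiz logic described above.
// Badge tiers below are illustrative placeholders, not the Table 3 criteria.

interface Question {
  text: string;
  options: string[];
  answer: number; // index of the correct option
}

// Draw ten distinct questions at random from the module's question bank
// using a Fisher-Yates shuffle.
function drawQuiz(bank: Question[], size = 10): Question[] {
  const pool = [...bank];
  for (let i = pool.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [pool[i], pool[j]] = [pool[j], pool[i]];
  }
  return pool.slice(0, size);
}

// Grade the quiz out of ten; the pass mark of five comes from the text.
function gradeQuiz(quiz: Question[], responses: number[]) {
  const score = quiz.reduce(
    (sum, q, i) => sum + (responses[i] === q.answer ? 1 : 0),
    0,
  );
  const passed = score >= 5;
  // Placeholder tiers standing in for the badge criteria of Table 3.
  const badge =
    score >= 9 ? "gold" : score >= 7 ? "silver" : passed ? "bronze" : "none";
  return { score, passed, badge };
}
```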

The design study on the digital literacy intervention is a sequel to the study on the digital literacy scale by Reddy et al. (2020a). The digilitFj measures the digital literacy of individuals according to the levels of the digital literacy scale given in Table 4, which shows the digital literacy levels with their associated scores and descriptions.

Table 4 Digital literacy scale with levels, points and description.

Individuals are required to complete a self-reporting survey questionnaire for digital literacy, and their digital competencies are measured using digilitFj. An individual's digital literacy level is calculated based on the total score, as shown in Table 4. The digilitFj also indicates the remediation modules the user has failed and prompts the user to attempt any module in which he/she scored below seven (a minimal sketch of this prompt logic follows Table 5). A link is provided on the user's result page, which directs the user to the login screen of the DLIP. There are six modules in the DLIP, each with theoretical content and a self-test quiz. Each quiz is scored out of ten, giving 60 points in total. Once the modules are completed, the digital literacy competency of individuals is evaluated using Table 5.

Table 5 DLIP levels and passing requirements
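
A minimal sketch of the prompt logic just described, assuming per-literacy scores out of ten from digilitFj; the threshold of seven comes from the text, while the data shape and names are illustrative.

```typescript
// Hypothetical sketch: list the remediation modules a user should be
// prompted to attempt, i.e. every literacy scored below seven.

type LiteracyScores = Record<string, number>; // per-literacy score out of ten

function modulesToRecommend(scores: LiteracyScores, threshold = 7): string[] {
  return Object.entries(scores)
    .filter(([, score]) => score < threshold)
    .map(([literacy]) => literacy);
}

// Example: recommends the information and visual literacy modules.
console.log(
  modulesToRecommend({
    information: 6, computer: 9, media: 8,
    communication: 7, visual: 5, technological: 8,
  }),
);
```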

Once an individual attempts the modules in the DLIP, his/her digital literacy level is calculated from his/her module scores and the modules he/she has passed. The passing mark for each module is 5 out of 10, or 50%. To decide a module's cut-off score, researchers have mostly used two popular methods: the Angoff method and the Cohen method (Jalil & Mortazhejri, 2012; Yim, 2018). According to Mubuuke et al. (2017) and Assessment Strategies (2014), the Angoff method uses a group of experts to judge how difficult each item in an exam is in order to determine the cut-off score. The cut-off score (or mark) is like a line in the sand that divides students into two groups: those below the cut-off, which may indicate a fail, and those above it, which may indicate a pass (Mubuuke et al., 2017). The Cohen method establishes a cut-off score by taking 60% of the score achieved by the candidate at the 95th percentile. According to Taylor (2012), taking 60% of the score of top-performing candidates is more effective than using an arbitrary value, and it is assumed that the 95th percentile is an accurate representation of the rest of the students' ability and a good benchmark for the scores. For the DLIP, the Cohen method was chosen because the standard setting was for a small test and an expert panel was unavailable to carry out the Angoff method.

The calculation of the digital literacy intervention's cut-off score was as follows: the 95th-percentile score from the data gathered was 53.86 out of 60, hence 0.6 × 53.86 = 32.31 (cut-off score). Out of the total of 60 points this is 32.31/60 ≈ 54%, which was rounded down to a passing score of 50%; for the current case, the passing score for each module was therefore 5 out of 10.
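
Written out (with $P_{95}$ denoting the 95th-percentile score and 60 the total available points), the Cohen-method calculation above is:

$$
\text{cut-off} = 0.6 \times P_{95} = 0.6 \times 53.86 \approx 32.31 \text{ points}, \qquad \frac{32.31}{60} \approx 54\%,
$$

which was rounded down to give the 50% (5 out of 10) passing mark per module.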

The characteristics of the DLIP are summarized in Table 6, which is adapted from Muirhead et al. (2019) and aligned to the outcomes of this study.

Table 6 Characteristics of the DLIP

The next section of the paper shows the screen captures from one of the modules. The other modules were designed and developed with the same approach.

4.1 Screen captures from information literacy module

4.1.1 The user dashboard

Figure 8 displays the user dashboard for the intervention developed. A user logs in with his/her email and ID and is directed to the page shown in Fig. 8. The user dashboard shows the literacy modules he/she has to attempt, together with the score from the digital literacy survey. On the dashboard's left-hand side are tabs the user can click to view his/her progress and achievements via the Badges and Digital Literacy Status tabs. The user can also click the Certificate tab to view his/her certificate once all intervention modules are complete. The About tab provides a brief definition of digital literacy and its components.

Fig. 8 The user dashboard

4.2 Content for each module

4.2.1 Theoretical phase

Figure 9 shows the theoretical content for each module. For each module, the user has to watch a video created using VideoScribe software and then read the relevant concepts associated with the module. The video introduces the participant to the literacy and explains its importance and characteristics. For example, as shown in Fig. 9, one important concept was evaluating information using the Currency, Relevance, Authority, Accuracy, and Purpose (CRAAP) test. Since the modules are infused with gamification, the user can collect coins while doing the readings for the module. There are 10 coins in each module for the user to collect and, depending on the number of coins collected, the user is awarded a coin badge.

Fig. 9 Snaps of the content from the information literacy module

4.2.2 Testing phase

Figure 10 shows the testing phase, or quiz, of the module. The quiz consists of questions relating to the literacy and is gamified. Each module has a different game; for the information literacy module, a golf game is used. Once the user completes the quiz, the results page shows his/her score out of 10, the number of coins collected for the module, and the badges attained. The module badges are awarded according to the criteria set in Table 3.

Fig. 10 The quiz or testing phase of the module

4.3 Reliability and validity tests

The data for the study were collected from first-year university students; a total of 126 students participated. To test the validity and reliability of the newly designed remediation intervention, the following tests were carried out using the Statistical Package for the Social Sciences (SPSS) software:

i. Kaiser-Meyer-Olkin (KMO) test: according to Dios et al. (2016), a KMO test assesses the sample adequacy of a set of data, and any value greater than 0.7 and closer to 1 is acceptable.

ii. Shapiro-Wilk normality test: to test whether the data were normally distributed. If the Sig. value of the Shapiro-Wilk test is greater than 0.05, the data are normal (Laerd Statistics, 2020).

iii. Kuder-Richardson-20 (KR-20) test: to test internal consistency reliability. Any value > 0.7 is acceptable (Sener & Tas, 2017).

iv. Spearman's correlation test: to test the validity of the scale. Any value > 0.3 is acceptable (Brien et al., 2019; Incebacak & Ersoy, 2017).

v. Cohen's d test: to calculate the effect size. A d of 0.2 is considered a 'small' effect size, 0.5 a 'medium' effect size and 0.8 a 'large' effect size (McLeod, 2019).
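
To make two of these measures concrete, the sketch below computes KR-20 (for dichotomous 0/1 item responses) and Cohen's d with a pooled standard deviation. The actual analysis was run in SPSS; this code only illustrates what those procedures compute.

```typescript
// Illustrative sketch only; the study's statistics were computed in SPSS.

// KR-20 = (k / (k - 1)) * (1 - sum(p*q) / var(totalScores)),
// where items[i][j] is respondent j's 0/1 response to item i.
function kr20(items: number[][]): number {
  const k = items.length;    // number of items
  const n = items[0].length; // number of respondents
  const totals = Array.from({ length: n }, (_, j) =>
    items.reduce((sum, item) => sum + item[j], 0),
  );
  const mean = totals.reduce((a, b) => a + b, 0) / n;
  const variance = totals.reduce((s, t) => s + (t - mean) ** 2, 0) / (n - 1);
  // p * (1 - p) is the variance of a dichotomous item (p = proportion correct).
  const sumPQ = items.reduce((s, item) => {
    const p = item.reduce((a, b) => a + b, 0) / n;
    return s + p * (1 - p);
  }, 0);
  return (k / (k - 1)) * (1 - sumPQ / variance);
}

// Cohen's d for two groups of scores, using a pooled standard deviation.
function cohensD(before: number[], after: number[]): number {
  const mean = (xs: number[]) => xs.reduce((a, b) => a + b, 0) / xs.length;
  const sampleVar = (xs: number[]) => {
    const m = mean(xs);
    return xs.reduce((s, x) => s + (x - m) ** 2, 0) / (xs.length - 1);
  };
  const pooled = Math.sqrt(
    ((before.length - 1) * sampleVar(before) +
      (after.length - 1) * sampleVar(after)) /
      (before.length + after.length - 2),
  );
  return (mean(after) - mean(before)) / pooled;
}
```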

5 Validity and reliability of the DLIP

i. To test the sample adequacy, the KMO test was conducted. The KMO value for the study was 0.793; hence, the sample used was adequate. The Bartlett's Test of Sphericity p-value was 0.00, and according to Yong and Pearce (2013), a p-value less than 0.05 is considered acceptable.

ii. To test the normality of the gathered data, the Shapiro-Wilk normality test was carried out. The p-value from the test was 0.01. Since 0.01 < 0.05, the data were not normally distributed. Therefore, the most suitable correlation test for the study is Spearman's test.

iii. To test the internal consistency of the digital literacy intervention program, a KR-20 test was conducted; the resulting value (Cronbach's alpha) was 0.869. According to Sener and Tas (2017), Berame et al. (2017) and Quaigrain and Arhin (2017), any value > 0.7 is acceptable, and the closer the value is to 1, the better. Therefore, with a KR-20 value of 0.869, the digital literacy intervention developed is reliable.

iv. To test the validity of the digital literacy intervention program, Spearman's correlation test was performed. The results are shown in Table 7. A correlation value > 0.3 means that the digital literacy intervention is valid (Adeleke & Joshua, 2015; Brien et al., 2019; Incebacak & Ersoy, 2017). As per the results in Table 7, the correlation values are greater than 0.3 and the yielded p-values were less than 0.05. Therefore, the online digital literacy intervention program (DLIP) is valid.

Table 7 Spearman's correlation values for the six modules in the intervention
v. Figure 11 shows the digital literacy scores of the sample before and after attempting the DLIP. The blue series represents scores before the digital literacy intervention program was piloted with the students, and the orange series represents scores after it was applied. The graphical representation shows that the participants' digital literacy scores improved after the intervention was piloted with the sample.

Fig. 11 Digital literacy scores of individuals before and after the intervention

vi. The descriptive statistics show that the sample's mean increased from 42.98 to 46.71 after the digital literacy intervention was applied. The Cohen's d value shown in Table 8 is 0.473 for overall digital literacy, which is close to 0.5; therefore, it can be stated that the DLIP had a moderate impact on the sample. The sig. value is 0.00, which is less than 0.05; therefore, the results of the Cohen's d test are statistically significant. Hence, the DLIP is an effective program for improving the digital literacy skills of individuals. The Cohen's d value for each module was also calculated separately, with the following results:

  i. media literacy: the value is 0.51, which means the remediation had a moderate impact on the sample.

  ii. communication literacy: the value is 0.96, which means the remediation had a large impact on the sample.

  iii. information literacy: the value is 0.08, which means the remediation had little to no impact on the sample.

  iv. visual literacy: the value is 0.37, which means the remediation had a small impact on the sample.

  v. technology literacy: the value is 0.32, which means the remediation had a small impact on the sample.

  vi. computer literacy: the value is 0.55, which means the remediation had a moderate impact on the sample.

Each remediation module developed had some impact on the sample. The lowest impact was for information literacy, where the data collected showed little difference in the scores before and after the intervention. The sample already had information literacy skills, possibly because first-year students are required to complete a compulsory unit for graduation that involves information literacy. Also, the University of the South Pacific (USP) library offers workshops and training to improve the information literacy of students.

Table 8 Effect size based on Cohen’s d test

6 Conclusion

In the twenty-first century, digital literacy consists of a dynamic combination of skills that educators need to understand before students are taught the relevant digital literacy skills. HEIs play an important role in administering digital literacy skills to the future workforce of society. To ensure that the future workforce is digitally literate, the digital competencies of students have to be known and upskilled accordingly. Knowing the digital competencies of students ensures that educators design and implement appropriate digital literacy interventions to improve their students' skills. Bringing digital literacy to classrooms, or integrating it with existing curricula and pedagogies, is a global challenge, particularly for developing regions like the Pacific Islands. While the concept of digital literacy in the Pacific Islands is still developing, its advocacy has begun using the theories and frameworks of digital literacy found in the literature. Based on these theories, a digital literacy scale, digilitFj, has been developed to measure the digital competencies of individuals. The digilitFj solves the existing issue of the unknown digital competency of individuals in a society: now that there is a scale to deduce digital competency, it is easier to know whether an individual is digitally literate and which digital skills they lack. To improve those competencies, a remediation program is needed. This research addresses that remediation program, which complements digilitFj and has been developed to improve the digital competencies of individuals.

The paper introduces a newly developed remediation tool, the DLIP, which can be used to improve the digital literacy competencies of individuals. The DLIP has been developed using the digital literacy framework identified by Reddy et al. (2020b) for digital literacy research in the Pacific Islands. Although the DLIP was developed for the Pacific Islands, it has universal applicability. The remediation intervention has been built using the Ionic framework, PHP and HTML, with CSS for styling, and the six modules have been constructed using Articulate's Storyline 360 software. The validity and reliability tests show that the remediation tool is reliable, with a KR-20 value of 0.869, and valid, with Spearman's correlation values greater than 0.3. Furthermore, the results show that the digital literacy scores of the participants improved after the DLIP was piloted with them: the intervention was moderately effective, with a d value of 0.473 for overall digital literacy, and the d values for the individual remediation modules also show some impact on the sample, ranging from 0.08 to 0.96. Hence, the DLIP is significant and valid.

The digilitFj and the DLIP are the first such initiatives from the authors for students and other individuals in the Pacific Islands. Organisations can take up the online DLIP and digilitFj together or separately to address the issue of digital literacy. The authors recommend that educators, institutions and organisations in the Pacific Islands utilise these free online tools to improve the digital literacy of individuals. The education sector, with the approval of the Ministry of Education and together with the universities, can integrate the digital literacy initiative to improve the digital competencies of the Pacific Islands. Additionally, collaboration with NGOs in advocating such initiatives will surely bring the much-needed improvements as well. The digital divide issue still exists in the Pacific Islands, and being digitally literate is one of the solutions to bridge the gap and create a digital culture in the region. The DLIP and digilitFj can also be made available as open educational resources, promoting open education and presenting Pacific culture through the gamification of the modules. All in all, having the necessary digital skills will enable individuals to contribute effectively to their economy and towards the development of the Pacific Islands in the digital age.