MEDICAL EDUCATION: EVOLVING METHODOLOGIES
Year : 2014  |  Volume : 1  |  Issue : 3  |  Page : 222-227

Development of clinical skills in ophthalmology: Significance of objective structured clinical examinations

Smita Singh, A K Shukla, P Hingorani-Bang, S Bonde


1 Department of Ophthalmology, Mahatma Gandhi Institute of Medical Sciences (MGIMS), Sevagram, Wardha, India
2 Department of Ophthalmology, Government Medical College, Nagpur, Maharashtra, India

Date of Web Publication: 27-May-2015

Correspondence Address:
Smita Singh
Department of Ophthalmology, Mahatma Gandhi Institute of Medical Sciences, Sevagram, Wardha, Maharashtra, India

Source of Support: None, Conflict of Interest: None


DOI: 10.4103/2349-0977.157767

  Abstract 

Introduction: Skill assessment methods used in the formative assessment of medical students are currently inconsistent. Our objectives were to: (1) develop an objective structured clinical examination (OSCE) assessing the performance of a basic clinical ophthalmic examination, and (2) remedy the deficiencies in knowledge and skill thus identified. Materials and Methods: OSCE stations and checklists were developed and validated following approval of the institutional ethics committee. Postgraduate student volunteers served as both simulated patients and scorers; one station (eye drop instillation) used a mannequin, with a faculty member as observer and scorer. Third MBBS students (n = 61) were oriented, and a four-station pilot examination was conducted. After Posting#1, spanning 2 weeks, an eight-station OSCE (7 min per station) assessed examination of visual acuity, color vision, ocular motility, and pupillary reaction; anterior chamber depth measurement; confrontation; digital tonometry; and eyedrop instillation. Common deficiencies identified were addressed through an interactive demonstration. Six weeks later, following Posting#2, the same test was repeated. Aggregated and paired scores were compared using Student's t-test. Feedback was obtained from students, simulated patients, and faculty. Results: The mean first-session score was 22.4 ± 4.66 out of 40 (56%), the highest being for visual acuity and the lowest for eyedrop instillation. The mean score in the second session was 30.2 ± 4.3 (75.5%). The paired t-test gave t = 13.73 (P < 0.0001). 77.2% of students preferred a change in assessment methodology, with 100% voting for OSCEs. Conclusions: Feedback from the initial assessment, followed by the additional focused teaching session, improved students' clinical skills. All were unanimously convinced of the need to change the current assessment system. OSCEs are a simple, effective way to both learn and assess clinical examination skills.

Keywords: Clinical skills, educational assessment, feedback, medical students, qualitative evaluation


How to cite this article:
Singh S, Shukla A K, Hingorani-Bang P, Bonde S. Development of clinical skills in ophthalmology: Significance of objective structured clinical examinations. Astrocyte 2014;1:222-7

How to cite this URL:
Singh S, Shukla A K, Hingorani-Bang P, Bonde S. Development of clinical skills in ophthalmology: Significance of objective structured clinical examinations. Astrocyte [serial online] 2014 [cited 2020 Jul 16];1:222-7. Available from: http://www.astrocyte.in/text.asp?2014/1/3/222/157767


  Introduction


There is a constant need to improve clinical skill transfer in medical education, and various methods are currently used to assess the skills thus acquired. The oral examination, or viva, is largely unstructured and not necessarily patient-based. The long case method, though effective, is case-specific and examiner-dependent [1] and has a very limited scope: the content is limited to one or two patient problems, and the interaction with the patient is not observed. [2] Multiple short cases increase the number of patient problems, and hence the reliability, but can be extremely time-consuming. The objective structured clinical examination (OSCE), first described and developed by Harden et al. (1975), tries to overcome these shortcomings by making it easy to control the variables and complexity of the examination, clearly defining its aims, testing more of the student's knowledge, and doubling as a feedback tool for students and staff. [3] It provides flexibility in terms of student and examiner numbers, type of patients, and examination format, length, duration, and extent. [4]

The Objective Structured Clinical Examination, as a method of formative/summative assessment and, in some cases, even as a teaching tool, [5] has been evaluated for feasibility, reliability, validity, acceptability, and comparability with conventional methods. [6],[7] Studies have been carried out on both undergraduate and postgraduate students, in different contexts and situations in many countries, assessing intrinsic aspects such as communication, professionalism, advocacy, management, and knowledge. [8]

In the field of ophthalmology too, studies have been conducted to validate procedural proficiency using OSCEs; [9] to evaluate their practicality, reliability, validity, and acceptance; [10],[11] and even to evaluate other interventions such as peer teaching. [12] A study conducted by Bhatnagar et al. (2011) in India showed that the OSCE has high validity and reliability as an assessment tool in clinical skills training in ophthalmology for undergraduates. [13]

In this study, by focusing on basic, "must-know" clinical skills, we wished to evaluate the OSCE as an assessment tool for knowledge, skills, attitude, and behavior; as a learning tool promoting self-assessment among students; as a feedback tool for teachers; and finally as an evaluation tool for the new conventional teaching-cum-OSCE-cum-corrective teaching model. We also sought to evaluate the acceptability of this method among the undergraduate students, the postgraduates, and the faculty. We expected this method to be both effective and acceptable.


  Materials and Methods


Subjects

Setting


The study was performed in a 780-bed, rural, tertiary care teaching hospital after obtaining clearance from the Institutional Ethics Committee.

Selection

Inclusion criteria

  • Undergraduate sixth and seventh semester medical students (students of the 2008 MBBS batch), divided into three batches with successive clinical postings in Ophthalmology.
  • Students attending both postings of 15 days each during the study period.


Exclusion criteria

  1. Students of the 2008 batch who had failed the preclinical or paraclinical terms (the referred batch of 2008).
  2. Students who were absent for either of the two postings during the study period.


Sample size

All students of the 2008 batch who fulfilled the inclusion and exclusion criteria were enrolled; the sample size totaled 61.

Study design

  • Prospective pre-post comparison study (of OSCE scores) before and after corrective feedback and intervention (in the form of deficiency-directed focused teaching), together with
  • Qualitative evaluation study (of the feedback obtained at the end).


Tools

Objective Structured Clinical Examination was the comparative tool chosen

  • The Maharashtra University of Health Sciences recommendations for "must-know" areas of skill training were targeted.
  • Ophthalmic clinical skill-based diagnostic OSCEs were designed.
  • Institutional Ethics Committee approval was obtained.
  • Eight OSCE stations were developed and set up in the cubicles of the Out-Patient Department (OPD) after OPD hours [Table 1].
    Table 1: OSCE Stations
  • Each station was allotted 7 min.
  • Every station carried a maximum score of five marks [Table 2].
    Table 2: A Sample Checklist for the Observer
    • The break-up of marks varied from station to station.
    • Every station was marked with due consideration to knowledge, skill, attitude, and behavior.
    • Two marks were allotted for the correct method of performing the skill and the result obtained.
    • The remaining three marks were divided into six areas of half a mark each, covering peri-procedure cleanliness, procedure-relevant handling of subjects/instruments, and communication with the patient, which included giving appropriate and adequate instructions, talking politely, managing an uncooperative patient, and encouraging or reassuring the patient where necessary (a sketch of this mark distribution follows this list).
    • The first seven stations used simulated patients: postgraduate students of the department who had volunteered. They were given a virtual patient profile to simulate and were trained in scoring, hence doubling as scorers.
    • The last station (assessing eyedrop instillation) used a mannequin; it was observed and scored by a faculty member.
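For concreteness, the five-mark station scheme described above can be sketched as a small scoring structure. The following Python sketch is illustrative only: the station name, the item wording, and the exact half-mark split are assumptions, not the study's actual checklist (see Table 2):

  # Hypothetical sketch of one 5-mark OSCE station checklist.
  # Item wording and the exact half-mark split are assumptions.
  station_checklist = {
      "station": "Visual acuity",       # assumed station name
      "skill_and_result_marks": 2.0,    # correct method + result obtained
      "half_mark_areas": [              # six areas x 0.5 mark = 3 marks
          "peri-procedure cleanliness",
          "handling of subject",
          "handling of instruments",
          "appropriate and adequate instructions",
          "polite communication, managing an uncooperative patient",
          "encouraging or reassuring the patient where necessary",
      ],
  }

  def station_score(skill_marks, areas_met):
      """Station total out of 5: skill/result marks plus 0.5 per area met."""
      return min(skill_marks, 2.0) + 0.5 * min(areas_met, 6)

  print(station_score(2.0, 4))  # full skill marks, 4 of 6 areas met -> 4.0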




A focused teaching session was the second (interventional) tool

The pre- and post-tests were based around this session, which aimed to correct the deficiencies found in the results of OSCE Test#1 and was hence dynamic in its content. It took the form of an interactive demonstration and was conducted by the same faculty member for all three batches.

Feedback form was the qualitative tool used

It was distributed to:

  1. Students undergoing the OSCEs (n = 61).
  2. Postgraduate students who performed the roles of both simulated patients and scorers (n = 11).
  3. Departmental faculty (n = 6) who helped devise, set up, and supervise the OSCEs, and scored the observed station. The faculty were also responsible for teaching the skills being evaluated in the first place (by the conventional method), as well as for conducting the interactive corrective session.


Procedure

Administering the Objective Structured Clinical Examinations [Figure 1].


The students (n = 61) were divided into three batches (A, B, and C) of 19, 22, and 20, respectively. Each batch underwent an orientation at the start of its 2-week posting (Posting#1), followed immediately by a pilot four-station OSCE on day one of Posting#1. At the end of Posting#1, the eight-station OSCE was administered and scored; this was labeled Test#1. The data were recorded and saved, and the eight OSCEs were evaluated for any trends in scoring. The interventional tool was then administered (see below). The first cycle ended at 6 weeks. The same batches then had repeat postings of 2 weeks' duration (Posting#2), after which the same set of OSCEs was repeated; this was Test#2. Hence, every batch underwent Test#2 6 weeks after Test#1 and 8 weeks after the pilot test. The paired data were then analyzed by the statistical methods described below.
Figure 1: OSCE in progress.


Conducting a deficiency-directed teaching session

This followed the scoring of Test#1 at the end of Posting#1, 2 weeks after the pilot exam and 6 weeks before Test#2. The trends in scoring were reviewed, and the areas of inadequacy were targeted through demonstration and student interaction. The content of the session was modified and tailored to the deficiencies of the respective batch. The principal investigator conducted this session.

Administering the feedback form


The students, postgraduates, and faculty were asked to mark a feedback form with Yes, No, or Not sure [Table 3] and [Table 4].
Table 3: Feedback Form for Undergraduates
Table 4: Feedback Form for Postgraduates and Faculty


Reflections

The stakeholders, namely the students, the postgraduate students, and the faculty, were encouraged to reflect on the experience, and their reflections were recorded.

Data analysis

  • Students' scores in Test#1 were averaged. The mean score of each OSCE station was calculated separately (out of 5) across all students, as were the total scores (out of 40) over the eight OSCEs. Station-wise trends were noted for every batch and then for the entire class; these trends were then used to plan the corrective session.
  • Based on the ascending aggregate scores of Test#1, the students were divided into three nearly equal groups of 20, 20, and 21 students. For further analysis, these groups were classified as low achievers, middle achievers, and high achievers, in that order. The mean score and percentage of each group were calculated.
  • Following Test#2, the group-wise means were calculated again, the class average was computed, and the percentages were derived.
  • With the two sets of data available, Student's paired t-test was used to determine the statistical significance of the difference between the two sets of scores, expressed as a P value (a sketch of this analysis follows the list).
  • The feedback data were analyzed according to the responses given, categorized as "Yes," "No," or "Not sure."
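As a minimal illustration of this comparison, the paired analysis can be reproduced with standard statistical software. In the Python sketch below, the score arrays are randomly generated placeholders matching the reported class means and standard deviations, not the study's raw data:

  # Sketch of the paired comparison; placeholder scores, not the study's data.
  import numpy as np
  from scipy import stats

  rng = np.random.default_rng(0)
  test1 = np.clip(rng.normal(22.4, 4.66, size=61), 0, 40)  # Test#1, out of 40
  test2 = np.clip(rng.normal(30.2, 4.30, size=61), 0, 40)  # Test#2, out of 40

  # Paired (dependent-samples) t-test, as used in the study.
  t_stat, p_value = stats.ttest_rel(test2, test1)
  print(f"mean Test#1 = {test1.mean():.1f}/40 ({test1.mean() / 40:.1%})")
  print(f"mean Test#2 = {test2.mean():.1f}/40 ({test2.mean() / 40:.1%})")
  print(f"paired t = {t_stat:.2f}, P = {p_value:.2g}")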



  Results


Quantitative

The mean score of the entire batch for Test#1 was 22.4 ± 4.66 out of a total of 40, that is, 56%. The mean score of all 61 students for Test#2 was 30.2 ± 4.3 out of a maximum of 40, or 75.5%. Hence, a difference of 7.8 marks out of 40, or 19.5%, was found between the results of Test#1 and Test#2, which followed the feedback and corrective lecture. A paired t-test gave t = 13.73, with P < 0.0001. This indicates that the probability of such a difference arising by chance is less than 0.01%, so it can be said that the intervention and feedback benefited the students with greater than 99.99% confidence. These values were highly significant.

Moreover, the low achievers were the group that benefited the most: their scores increased from an average of 14.5 out of 40 (36.25%) in Test#1 to an average of 25.3 out of 40 (63.25%) in Test#2, a difference of 10.8 marks, or 27%, which was substantially higher than the rise in the class average.

The middle achievers had a Test#1 average of 21.9 out of 40 (54.75%) and a Test#2 average of 29.8 out of 40 (74.5%), a rise of 7.9 marks out of 40, or 19.75%, almost matching the average class rise.

The high achievers showed the smallest percentage rise between the two tests among the three groups, 11.75%, corresponding to a difference of 4.7 marks out of 40, with a Test#1 average of 30.4 out of 40 (76%) and a Test#2 average of 35.1 out of 40 (87.75%) [Figure 2].
Figure 2: Comparative Scores in the 3 Groups.
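As a quick check, the group-wise gains reported above follow directly from the reported means; the short Python sketch below reproduces the arithmetic (the numbers are the study's reported values, only the script is illustrative):

  # Group-wise gains recomputed from the reported mean scores (out of 40).
  groups = {
      "low achievers": (14.5, 25.3),
      "middle achievers": (21.9, 29.8),
      "high achievers": (30.4, 35.1),
  }
  for name, (t1, t2) in groups.items():
      gain = t2 - t1
      print(f"{name}: +{gain:.1f} marks = {gain / 40:.2%} "
            f"({t1 / 40:.2%} -> {t2 / 40:.2%})")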


Qualitative

The feedback forms were examined, and the options marked were recorded. 79.2% of undergraduates, 72.7% of postgraduates, and 86.6% of the faculty felt that the current system of assessment needed to change. 88.7% of undergraduates, 100% of postgraduates, and 33% of the faculty felt that the OSCEs were better than the conventional system in current use. The other answers reflected the undergraduates' opinion that the experimental system of assessment changed their attitudes and helped them develop their skills in addition to strengthening their knowledge. The faculty felt that the proposed changes would benefit the students significantly and that skill transfer would improve tremendously were this system to come into practice. The postgraduates were convinced that OSCEs were better, and were the assessment technique of the future in terms of building knowledge, attitude, and skills.

The reflections of the three groups were recorded. Recurring words used by the undergraduates were "fair," "enjoyable," "greater understanding," "practical experience," and "better concepts." Words commonly used by the postgraduates were "effective," "just," and "simpler," along with "regret" that they had not had a similar assessment technique themselves. The faculty felt the exercise was "objective," "fair," and "exhaustive," but "difficult to arrange."


  Discussion


In our research, we found that for undergraduate medical students undergoing a clerkship in Ophthalmology, introducing an OSCE after the conventional clinical posting, followed later by remedial teaching, was more effective in improving clinical skill transfer than conventional teaching alone. OSCEs were well accepted by the students and faculty, who unanimously looked forward to future assessments that include OSCEs.

This suggests that conventional teaching alone may not achieve the desired results in terms of clinical skill transfer, and that a combination of methods is needed. The improvement was especially evident among the low achievers. Moreover, the students and faculty seemed ready and motivated to see a change in the current methods. This could be a step toward active participation of students in the learning process.

A review of the existing literature shows other studies in which the OSCE process served to identify areas of weakness in the curriculum and/or teaching methods, and could thus serve as a mechanism to improve educational effectiveness. [14]

Students' OSCE scores have been found to be significantly reproducible and mutually correlated, in contrast to faculty clinical evaluations, which were skewed, particularly in assessing performance that differs substantially from the mean. [15],[16]

Like us, Brazeau et al. found the OSCE useful as a teaching tool for faculty; in addition, they used it for peer feedback. [5]

As in our study, other studies show that students preferred the OSCE as a method of assessing clinical competence, considering it more valid and reliable. [7],[17]

Our students agreed with those of other studies that the OSCE helped to enhance communication skills [17] and was comprehensive, fair, and transparent. [18],[19] Furthermore, as in our study, the OSCE helped students with self-feedback, that is, in identifying their weak areas. [19],[20]

However, unlike our findings, there are reports of OSCE-related stress among students [18],[20],[21] owing to inadequate time, the newness of the assessment format, and vague instructions.

Our study also had some limitations. The test was performed on a single batch of medical students and used a simulated setting rather than actual patients. There was a possibility of recall, as the test was repeated after 6 weeks, and the study did not test long-term learning or behavior change. Besides, it may be difficult to generalize these skills to the stresses of the actual workplace.

We also encountered some practical difficulties: the procedure was time-consuming; it required meticulous planning and adequate space; and it needed the cooperation of the postgraduate students with the faculty, their simultaneous release from patient-care duties, and their concurrent availability with the undergraduate students.

Further research is needed to assess the long-term effects of this new compound-teaching model through long-term follow-up of these students. Such research should include more students and successive batches for reliability testing. Finally, the model should be evaluated by workplace/community assessment and, ultimately, by statistical correlation with the eye-health status of the population of the area.


  Conclusion


The OSCE proved to be an acceptable and useful tool: first, for assessing clinical skills; second, as a good self-learning tool for the students; third, as a feedback tool for the teachers, based on which a remedial teaching session was designed and conducted; and fourth, as an evaluation tool for this modified, compound system of assessment.

 
  References

1. Norcini JJ. The death of the long case? BMJ 2002;324:408-9.
2. Smee S. Skill based assessment. BMJ 2003;326:703-6.
3. Harden RM, Stevenson M, Downie WW, Wilson GM. Assessment of clinical competence using objective structured examination. Br Med J 1975;1:447-51.
4. Patrício MF, Julião M, Fareleira F, Carneiro AV. Is the OSCE a feasible tool to assess competencies in undergraduate medical education? Med Teach 2013;35:503-14.
5. Brazeau C, Boyd L, Crosson J. Changing an existing OSCE to a teaching tool: The making of a teaching OSCE. Acad Med 2002;77:932.
6. Dong T, Swygert KA, Durning SJ, Saguil A, Gilliland WR, Cruess D, et al. Validity evidence for medical school OSCEs: Associations with USMLE Step assessments. Teach Learn Med 2014;26:379-86.
7. Ameh N, Abdul MA, Adesiyun GA, Avidime S. Objective structured clinical examination vs traditional clinical examination: An evaluation of students' perception and preference in a Nigerian medical school. Niger Med J 2014;55:310-3.
8. Dwyer T, Glover Takahashi S, Kennedy Hynes M, Herold J, Wasserstein D, Nousiainen M, et al. How to assess communication, professionalism, collaboration and the other intrinsic CanMEDS roles in orthopedic residents: Use of an objective structured clinical examination (OSCE). Can J Surg 2014;57:230-6.
9. Haque R, Abouammoh MA, Sharma S. Validation of the Queen's University ophthalmoscopy objective structured clinical examination checklist to predict direct ophthalmoscopy proficiency. Can J Ophthalmol 2012;47:484-8.
10. Kampmeier J, Rau T, Liebhardt H, Fegert JM, Lang GK. Adoption of OSCE examination procedures in ophthalmology. Klin Monbl Augenheilkd 2011;228:550-4.
11. Aydin P, Gunalp I, Hasanreisoglu B, Unal M, Erol Turacli M. A pilot study of the use of objective structured clinical examinations for the assessment of ophthalmology education. Eur J Ophthalmol 2006;16:595-603.
12. Simmenroth-Nayda A, Görlich Y, Wagner M, Müther M, Lohse C, Utte L, et al. Undergraduate teaching in ophthalmology: Do standardized practical examinations make sense? Ophthalmologe 2014;111:235-40.
13. Bhatnagar KR, Saoji VA, Banerjee AA. Objective structured clinical examination for undergraduates: Is it a feasible approach to standardized assessment in India? Indian J Ophthalmol 2011;59:211-4.
14. Tervo RC, Dimitrievich E, Trujillo AL, Whittle K, Redinius P, Wellman L. The objective structured clinical examination (OSCE) in the clinical clerkship: An overview. S D J Med 1997;50:153-6.
15. Lukas RV, Adesoye T, Smith S, Blood A, Brorson JR. Student assessment by objective structured examination in a neurology clerkship. Neurology 2012;79:681-5.
16. Prislin MD, Fitzpatrick CF, Lie D, Giglio M, Radecki S, Lewis E. Use of an objective structured clinical examination in evaluating student performance. Fam Med 1998;30:338-44.
17. Nasir AA, Yusuf AS, Abdur-Rahman LO, Babalola OM, Adeyeye AA, Popoola AA, et al. Medical students' perception of objective structured clinical examination: A feedback for process improvement. J Surg Educ 2014;71:701-6.
18. Pierre RB, Wierenga A, Barton M, Branday JM, Christie CD. Student evaluation of an OSCE in paediatrics at the University of the West Indies, Jamaica. BMC Med Educ 2004;4:22.
19. Awaisu A, Mohamed MH, Al-Efan QA. Perception of pharmacy students in Malaysia on the use of objective structured clinical examinations to evaluate competence. Am J Pharm Educ 2007;71:118.
20. Siddiqui FG. Final year MBBS students' perception for observed structured clinical examination. J Coll Physicians Surg Pak 2013;23:20-4.
21. Hemingway S, Stephenson J, Roberts B, McCann T. Mental health and learning disability nursing students' perceptions of the usefulness of the objective structured clinical examination to assess their competence in medicine administration. Int J Ment Health Nurs 2014;23:364-73.

