Reliability Reflection

by HUF04 Bùi Bích Thủy


I have not yet calculated Cronbach’s Alpha, so I cannot interpret a reliability coefficient at this stage.

Still, I feel moderately confident about my scale’s potential reliability because I tried to write items that are clearly connected to the same main constructs: Human–AI collaboration and self-regulated learning in academic writing.

To promote internal consistency, I used items that focus on related behaviors, such as idea generation, revision, monitoring, evaluation, and critical use of AI. I also tried to keep the wording simple, specific, and consistent in tone so respondents would interpret the items in similar ways. In addition, most items were written as single-idea statements rather than combining multiple ideas in one question.

One challenge I faced was making sure the items were similar enough to measure the same construct, but not so repetitive that they sounded identical. Another difficulty was avoiding overlap between AI use and self-regulated learning, because these two areas are closely related in my topic. In the next draft, I would improve the scale by checking whether some items are redundant, revising any vague wording, and piloting the questionnaire with real responses before calculating Cronbach’s Alpha.
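Once pilot responses are collected, Cronbach’s Alpha can be computed directly from the item scores using the standard formula α = (k/(k−1))·(1 − Σ item variances / variance of the summed scale). The sketch below is a minimal illustration with NumPy; the `cronbach_alpha` helper and the pilot data are hypothetical examples, not the actual questionnaire or its results:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a 2-D array of shape (respondents, items)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical pilot data: 5 respondents x 4 Likert-type items
pilot = np.array([
    [4, 5, 4, 4],
    [3, 3, 3, 4],
    [5, 5, 4, 5],
    [2, 2, 3, 2],
    [4, 4, 5, 4],
])
print(round(cronbach_alpha(pilot), 3))
```

Values above roughly 0.7 are conventionally read as acceptable internal consistency, though a very high alpha can also signal redundant items, which connects to the concern about repetition above.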

Reliability Reflection

by HUF04 Nguyễn Đăng Hải

I think you’ve done a good job reflecting on your scale, especially in how you tried to balance consistency and avoid repetition. Your point about keeping items focused on single ideas is really important for improving clarity and reliability.

I’m curious, though: since you mentioned the overlap between Human–AI collaboration and self-regulated learning, how are you planning to clearly separate these two constructs when you analyze the data later? For example, would you consider grouping items into different subscales or using factor analysis to check whether they actually measure distinct dimensions?
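A quick first check before full factor analysis is the inter-item correlation matrix: items intended for the same subscale should correlate more strongly with each other than with items from the other subscale. The sketch below is a hypothetical illustration with NumPy, where the assignment of items 0–1 to Human–AI collaboration and items 2–3 to self-regulated learning, and the response data, are invented for demonstration:

```python
import numpy as np

# Hypothetical pilot responses: 6 respondents x 4 items.
# Assumed grouping: items 0-1 = Human-AI collaboration,
#                   items 2-3 = self-regulated learning.
data = np.array([
    [5, 4, 2, 3],
    [4, 5, 3, 2],
    [2, 2, 5, 4],
    [3, 2, 4, 5],
    [5, 5, 3, 3],
    [2, 3, 4, 4],
])

# Correlations between item columns (rowvar=False treats columns as variables).
r = np.corrcoef(data, rowvar=False)
print(r.round(2))
```

If the within-subscale correlations (e.g. `r[0, 1]` and `r[2, 3]`) are clearly higher than the cross-subscale ones, that supports treating the two constructs as distinct dimensions before running a formal factor analysis.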

Also, before calculating Cronbach’s Alpha, do you plan to pilot your questionnaire with a small sample? I feel like that step could really help you identify which items might be confusing or redundant before moving to the reliability test.