From Research Problem to Research Question

Turning research problems into clear, feasible questions


This set of steps helps you transform a broad research problem into a clear, focused, and feasible research question. You’ll brainstorm multiple options, refine them for clarity and significance, evaluate what data and resources each would require, and finally select the question or questions that best fit both your problem and your practical constraints.

Step 1: Brainstorm Possible Research Questions

This prompt helps turn a research problem into methodologically sound research questions across qualitative, quantitative, and—if justified—mixed-methods formats.

The core goal is question formulation—to suggest questions that are:

  • Clearly stated and answerable,
  • Aligned with appropriate data types and methods,
  • Tied directly to the core problem, population, and context.

Each question is written in line with best-practice wording conventions (e.g., “How do…” for qualitative; “To what extent…” for quantitative) and avoids common pitfalls: vagueness, causal claims the study design cannot support, and yes/no framing. Where relevant, brief notes on design and analysis are included—not to prescribe a full plan, but to help assess feasibility and scope.

Copy the prompt and paste it into your preferred AI platform.
Sample Result from ChatGPT-4o

Please paste your research problem statement (1–3 sentences, including population/context).

Despite the growing integration of AI tools in academic research, ethical perspectives on their use remain unevenly understood across disciplines, cultural backgrounds, and institutional settings. Daoudi (2025) identifies a critical gap in the literature, noting that current empirical evidence is overly reliant on self-reported data from homogenous academic populations and lacks triangulation with other methods. The author emphasizes the need to investigate how disciplinary norms, cultural contexts, and institutional structures shape ethical attitudes toward AI in publishing and peer review. This underexplored area calls for methodologically diverse studies that include underrepresented academic fields and international contexts, as well as document or observational data to validate self-perceptions and mitigate response bias. Addressing this gap would provide a more comprehensive understanding of the socio-ethical dynamics influencing AI acceptance in scholarly environments (Daoudi, 2025).

Thank you — this is a strong and well-specified problem statement with a clear rationale for empirical inquiry.

Here’s your full set of research questions and methodological guidance:

Qualitative (1–3 questions)
RQ1: How do scholars from diverse disciplinary and cultural backgrounds interpret ethical concerns surrounding the use of AI in academic publishing and peer review?
RQ2: In what ways do institutional policies and structures influence academics’ ethical decision-making regarding AI tool adoption in research and review processes?
RQ3: What tensions or dilemmas do researchers experience when navigating the ethics of AI use in scholarly communication?
Suggested approach, data & analysis:
A constructivist grounded theory or interpretive phenomenological analysis (IPA) approach is appropriate, as the focus is on subjective meanings and lived ethical experiences. Semi-structured interviews and focus groups would allow deep exploration across contexts (e.g., countries, disciplines, institutions). Purposive and maximum variation sampling can ensure diversity; sample size might range from 20 to 40 participants. Thematic or grounded coding would support inductive analysis across cultural and institutional dimensions.

Quantitative (1–3 questions)
RQ1: To what extent do disciplinary affiliation, cultural background, and institutional policy awareness predict ethical attitudes toward AI use in academic publishing among international scholars?
RQ2: What is the association between perceived institutional support for research integrity and scholars’ acceptance of AI tools in peer review?
RQ3: To what extent do levels of ethical concern about AI use differ across academic disciplines and world regions?

Suggested approach, data & analysis:
A cross-sectional survey design is appropriate. Independent variables might include discipline (categorical), cultural background (region or standardized measures such as Hofstede's cultural dimensions), and institutional integrity support (measured via Likert scale). Dependent variables could be attitudes toward AI use, perceived ethical risk, and behavioral intentions. Use validated ethics or AI-attitude scales where possible. Data analysis may involve multiple regression, ANOVA, or MANOVA, as sketched below. Sample size should be large enough to support subgroup analysis (e.g., N ≈ 200–400+, depending on the number of comparisons and expected effect sizes).
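
To make that guidance concrete, here is a minimal Python sketch of the power check and regression described above. The variable names (discipline, region, support, attitude) and the synthetic data are illustrative placeholders, not part of the prompt's output; a real analysis would use the survey export and validated scale scores.

    # Minimal sketch: sample-size check plus an RQ1-style regression.
    # All column names below are hypothetical stand-ins for survey variables.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf
    from statsmodels.stats.power import FTestAnovaPower

    # Rough total N for a four-group ANOVA at a medium effect (Cohen's f = 0.25).
    n_total = FTestAnovaPower().solve_power(effect_size=0.25, alpha=0.05,
                                            power=0.80, k_groups=4)
    print(f"Approximate total N for the group comparison: {n_total:.0f}")

    # Synthetic stand-in data; replace with the actual survey export.
    rng = np.random.default_rng(0)
    n = 300
    df = pd.DataFrame({
        "discipline": rng.choice(["STEM", "humanities", "social", "medical"], n),
        "region": rng.choice(["Europe", "Asia", "Africa", "Americas"], n),
        "support": rng.integers(1, 6, n),     # Likert 1-5: institutional support
        "attitude": rng.normal(3.5, 0.8, n),  # composite AI-attitude score
    })

    # Do discipline, region, and perceived support predict attitudes? (cf. Quant RQ1-RQ2)
    model = smf.ols("attitude ~ C(discipline) + C(region) + support", data=df).fit()
    print(model.summary())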

Mixed-Methods (include only if relevant)
Mixed methods are genuinely warranted here: ethical orientations are shaped by layered cultural and disciplinary norms, so the study needs both the measurable patterns a survey can reveal and the nuanced, context-specific understanding interviews can provide.
RQ1 (Quant): What patterns exist in ethical attitudes toward AI in publishing across academic disciplines, regions, and institutional types?
RQ2 (Qual): How do researchers from underrepresented disciplines and regions describe their ethical reasoning regarding AI use in publishing and peer review?
RQ3 (Integration): How do qualitative insights about researchers’ ethical reasoning help explain or expand on patterns found in the quantitative strand?
Suggested approach, data & analysis:
An explanatory sequential design is recommended: start with a survey to identify patterns (see the quantitative guidance above), then follow up with interviews to explore unexpected or nuanced findings (e.g., outlier groups, ambiguous attitudes). Integration occurs at the interpretation phase, enriching survey findings with context and meaning. The qualitative sample can be drawn from survey participants who consent to follow-up, ensuring a direct connection between strands; one way to operationalize that selection is sketched below.
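
Because the follow-up sample must trace back to specific survey responses, here is a minimal Python sketch of drawing interviewees from consenting respondents across attitude bands (outliers plus the ambiguous middle). The column names (attitude, consent_followup) are hypothetical, and the ±1 SD cut-offs are an illustrative choice, not a prescription.

    # Minimal sketch: select qualitative follow-up participants from survey data.
    # Column names and the +/-1 SD banding are illustrative assumptions.
    import numpy as np
    import pandas as pd

    def select_followup(survey: pd.DataFrame, n_per_band: int = 5) -> pd.DataFrame:
        """Draw interviewees from consenting respondents across attitude bands."""
        pool = survey[survey["consent_followup"]].copy()
        z = (pool["attitude"] - pool["attitude"].mean()) / pool["attitude"].std()
        pool["band"] = pd.cut(z, [-np.inf, -1.0, 1.0, np.inf],
                              labels=["skeptical", "ambivalent", "accepting"])
        # Sampling each band keeps outlier groups and the ambiguous middle in view.
        return (pool.groupby("band", observed=True, group_keys=False)
                    .apply(lambda g: g.sample(min(n_per_band, len(g)), random_state=0)))

    # Demo with synthetic records; a real study would use the survey export.
    rng = np.random.default_rng(1)
    demo = pd.DataFrame({
        "attitude": rng.normal(3.5, 0.9, 120),
        "consent_followup": rng.random(120) < 0.6,
    })
    print(select_followup(demo)[["attitude", "band"]])
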
Quality check
✅ All questions are method-aligned, specify population/context, and avoid causal/yes-no phrasing.
🔍 Clarifications that would improve precision:
  • Define the scope of “AI tools” (e.g., generative AI, automated review assistants).
  • Clarify which academic fields count as “underrepresented.”
  • Indicate whether the focus is on researchers, reviewers, or both.