
Graduate Student: 楊書銘 (Yang, Shu-Ming)
Thesis Title: 人工智慧視訊面試透明度與科技信任度之研究 (The Impact of Artificial Intelligence Video Interview with Transparency on Job Applicants' Trust in Technology)
Advisor: 孫弘岳 (Suen, Hung-Yue)
Oral Defense Committee: 陳怡靜 (Chen, Yi-Ching), 陳建丞 (Chen, Chien-Cheng), 孫弘岳 (Suen, Hung-Yue)
Oral Defense Date: 2022/06/30
Degree: Master
Department: Department of Technology Application and Human Resource Development, Continuing Education Master's Program of Human Resource Development
Publication Year: 2022
Academic Year of Graduation: 110
Language: Chinese
Pages: 64
Chinese Keywords: 人工智慧視訊面試、透明度、科技信任度、人機互動
English Keywords: AI video interview, Transparency, Trust in AI, Human-Computer Interaction (HCI)
Research Methods: Quasi-experimental design, semi-structured interviews, field research
DOI URL: http://doi.org/10.6345/NTNU202201396
Thesis Type: Academic thesis
Usage: Views: 166; Downloads: 41
    In the post-pandemic era, reducing physical contact for epidemic prevention became a broad consensus, yet amid rapid market change companies are hungrier for talent than ever. How can technological tools satisfy both epidemic prevention and recruiting needs? For epidemic prevention alone, video interviews suffice to replace face-to-face contact, but the capacity of human interviewers is limited; to gain an edge in the war for talent and raise recruiting efficiency, more and more companies are turning to artificial intelligence (AI) video interviews. For job applicants, however, AI video interviews are usually conducted as recorded (asynchronous) interviews: besides the lack of human interaction, uncertainty about how the AI evaluates and decides lowers applicants' trust in the technology and ultimately their willingness to use video interviews. Transparency is often regarded as the key link between AI and the real world: when applicants do not understand the AI's evaluation process, they worry whether their interview performance can correctly reflect their true potential, which reduces their trust in the AI's assessment; lacking that trust, they may refuse to participate or perform below their real level.
      This study proceeded in two stages. The first stage was a pilot test in which participants completed simulated recorded interviews, and questionnaires together with unstructured interviews were used to converge on the most transparent presentation format. In the second stage, the formal experiment, 73 job applicants completed an AI recorded interview, filled out a technology trust scale, and the responses were analyzed statistically. The study found that AI video interviews with transparency have a positive effect on affective trust in technology. The results can also help AI video interview system vendors and employers develop and select more effective AI interview interfaces.

    In the post-COVID-19 era, reducing physical contact for epidemic prevention is a broad consensus, yet companies are more eager for talent than ever as the market pursues recovery. How can technological tools meet the needs of both epidemic prevention and talent acquisition? If epidemic prevention were the only concern, video interviews would suffice to replace physical contact, but that alone is not enough when companies compete for talent. Artificial intelligence (AI) video interviews have therefore become the choice of more and more companies. For job seekers, however, AI video interviews involve no interaction with people, and uncertainty about the AI interviewer, together with a lack of trust in the technology, can reduce their willingness to use it; for enterprises, this may mean missing out on key talent.
      Transparency is often considered the key to connecting AI with the real world. When job seekers are unclear about the AI's evaluation process, they worry whether their performance can correctly reflect their future job potential, which reduces their trust in AI. When job seekers lack trust in AI, they may refuse to participate in interviews or perform below their real level. The purpose of this study is to investigate whether AI video interviews with transparency affect trust in AI, and whether increasing the transparency of AI-based video interviews can increase job seekers' trust in the tool.
      The research was divided into two stages. The first stage was a pilot test: participants conducted simulated video interviews, and the most effective presentation of transparency was converged on through questionnaires and unstructured interviews. In the second stage, 73 participants were invited to fill in a technology trust scale after an AI video interview. The study found that AI video interviews with transparency have a positive impact on affective trust in AI. The results can help AI video interview system suppliers and employers develop and choose more effective interfaces embedded in AI video interviews.

    Chapter 1 Introduction 1
        Section 1 Research Background and Motivation 1
        Section 2 Research Purpose 5
        Section 3 Research Scope 6
    Chapter 2 Literature Review 7
        Section 1 AI Transparency 7
        Section 2 Trust in Technology 9
        Section 3 AI Video Interview Transparency and Trust in Technology 11
    Chapter 3 Research Design 15
        Section 1 Research Framework and Hypotheses 15
        Section 2 Research Subjects and Procedure 16
        Section 3 Research Instruments 24
    Chapter 4 Data Analysis 29
        Section 1 Pilot Study Participants 29
        Section 2 Pilot Study Transparency Text Descriptions 31
        Section 3 Pilot Study Cognitive Trust in Technology 32
        Section 4 Pilot Study Affective Trust in Technology 34
        Section 5 Main Study Descriptive Statistics 37
        Section 6 Factor Analysis 39
        Section 7 Reliability Analysis 41
        Section 8 Correlation Analysis 43
        Section 9 Analysis of Covariance (ANCOVA) 45
        Section 10 Multivariate Analysis of Covariance (MANCOVA) 47
    Chapter 5 Discussion and Recommendations 49
        Section 1 Results and Findings 49
        Section 2 Results in Relation to Existing Theory 51
        Section 3 Practical Recommendations 53
        Section 4 Limitations and Suggestions for Future Research 54
        Section 5 Conclusion 55
    References 56
    Appendices 62
        Appendix 1 Informed Consent Form 63
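    The reliability analysis in Chapter 4 reports Cronbach's alpha for the trust scale (cf. Tavakol & Dennick, 2011). As a minimal illustration of how that coefficient is computed, the sketch below uses hypothetical 5-point Likert responses (not the thesis's actual data) and only the Python standard library:

    ```python
    from statistics import pvariance

    def cronbach_alpha(items):
        """Cronbach's alpha for a list of item-score columns.

        items: list of k lists, each holding one item's scores across
        the same n respondents.
        alpha = k/(k-1) * (1 - sum(item variances) / variance of totals)
        """
        k = len(items)
        item_vars = sum(pvariance(col) for col in items)
        # Each respondent's total score across all k items.
        totals = [sum(scores) for scores in zip(*items)]
        return k / (k - 1) * (1 - item_vars / pvariance(totals))

    # Hypothetical data: 4 scale items, 6 respondents (5-point Likert).
    items = [
        [4, 5, 3, 4, 5, 4],
        [4, 4, 3, 5, 5, 4],
        [3, 5, 2, 4, 4, 3],
        [5, 5, 3, 4, 5, 4],
    ]
    print(round(cronbach_alpha(items), 3))  # prints 0.902
    ```

    Values above roughly 0.7 are conventionally read as acceptable internal consistency; population variance (`pvariance`) is used for both items and totals so the two variance terms are on the same footing.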

    陳怡靜、錢國倫(2015)。從功能與象徵性架構探討雇主形象對組織人才吸引力的影響。中山管理評論,23(4),1125-1154。https://doi.org/10.5297/ser.1201.002
    Acikgoz, Y., Davison, K. H., Compagnone, M., & Laske, M. (2020). Justice perceptions of artificial intelligence in selection. International Journal of Selection and Assessment, 28(4), 399-416. https://doi.org/10.1111/ijsa.12306
    Albu, O. B., & Flyverbom, M. (2019). Organizational Transparency: Conceptualizations, Conditions, and Consequences. Business & Society, 58(2), 268–297. https://doi.org/10.1177/0007650316659851
    Asan, O., Bayrak, A. E., & Choudhury, A. (2020). Artificial Intelligence and Human Trust in Healthcare: Focus on Clinicians. Journal of medical Internet research, 22(6), e15154. https://doi.org/10.2196/15154
    Basch, J.M., Melchers, K.G., Kegelmann, J. & Lieb, L. (2020). Smile for the camera! The role of social presence and impression management in perceptions of technology-mediated interviews. Journal of Managerial Psychology, 35(4), 285-299. https://doi.org/10.1108/JMP-09-2018-0398
    Black, J. S., & van Esch, P. (2021). AI-enabled recruiting in the war for talent. Business Horizons, 64(4), 513-524. https://doi.org/10.1016/j.bushor.2021.02.015
    Bondarouk, T., & Brewster, C. (2016). Conceptualising the future of HRM and technology research. The International Journal of Human Resource Management, 27(21), 2652-2671. https://doi.org/10.1080/09585192.2016.1232296
    Burrell, J. (2016). How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Big Data & Society. https://doi.org/10.1177/2053951715622512
    Charalambous, G., Fletcher, S., & Webb, P. (2016). The Development of a Scale to Evaluate Trust in Industrial Human-robot Collaboration. International Journal of Social Robotics, 8, 193-209. https://doi.org/10.1007/s12369-015-0333-8
    Chen, C. C., Lee, Y. H., Huang, T. C., & Ko, S. F. (2019). Effects of stress interviews on selection/recruitment function of employment interviews. Asia Pacific Journal of Human Resources, 57(1), 40-56. https://doi.org/10.1111/1744-7941.12170
    Felzmann, H., Villaronga, E. F., Lutz, C., & Tamò-Larrieux, A. (2019). Transparency you can trust: Transparency requirements for artificial intelligence between legal norms and contextual concerns. Big Data & Society, 6(1), 2053951719860542. https://doi.org/10.1177/2053951719860542
    Glikson, E., & Woolley, A. (2020). Human Trust in Artificial Intelligence: Review of Empirical Research. The Academy of Management Annals, 14, 627-660. https://doi.org/10.5465/annals.2018.0057
    Grover, S., Sahoo, S., Mehra, A., Avasthi, A., Tripathi, A., Subramanyan, A., Pattojoshi, A., Rao, G. P., Saha, G., Mishra, K. K., Chakraborty, K., Rao, N. P., Vaishnav, M., Singh, O. P., Dalal, P. K., Chadda, R. K., Gupta, R., Gautam, S., Sarkar, S., Sathyanarayana Rao, T. S., … Janardran Reddy, Y. C. (2020). Psychological impact of COVID-19 lockdown: An online survey from India. Indian Journal of Psychiatry, 62(4), 354–362. https://doi.org/10.4103/psychiatry.IndianJPsychiatry_427_20
    Hoff, K. A., & Bashir, M. (2015). Trust in Automation: Integrating Empirical Evidence on Factors That Influence Trust. Human Factors, 57(3), 407–434. https://doi.org/10.1177/0018720814547570
    HRDA (n.d.). https://hrda.pro
    Ingold, P.V., Kleinmann, M., König, C.J., & Melchers, K.G. (2016). Transparency of Assessment Centers: Lower Criterion‐related Validity but Greater Opportunity to Perform? Personnel Psychology, 69, 467-497. https://doi.org/10.1111/peps.12105
    Johnson, D.G., & Verdicchio, M. (2017). Reframing AI Discourse. Minds and Machines, 27, 575-590. https://doi.org/10.1007/s11023-017-9417-6
    Jacovi, A., Marasović, A., Miller, T., & Goldberg, Y. (2021). Formalizing trust in artificial intelligence: Prerequisites, causes and goals of human trust in ai. In Proceedings of the 2021 ACM conference on fairness, accountability, and transparency (pp. 624-635). https://doi.org/10.1145/3442188.3445923
    Komiak, S. Y., & Benbasat, I. (2006). The effects of personalization and familiarity on trust and adoption of recommendation agents. MIS Quarterly, 941-960. https://doi.org/10.2307/25148760
    Kwak, S. G., & Kim, J. H. (2017). Central limit theorem: the cornerstone of modern statistics. Korean Journal of Anesthesiology, 70(2), 144–156. https://doi.org/10.4097/kjae.2017.70.2.144
    Lambrecht, A., & Tucker, C. (2019). Algorithmic bias? An empirical study into apparent gender-based discrimination in the display of STEM career ads. Management Science, 65, 2966-2981. https://doi.org/10.1287/mnsc.2018.3093
    Lin, J., Lu, Y., Wang, B., & Wei, K. K. (2011). The role of inter-channel trust transfer in establishing mobile commerce trust. Electronic Commerce Research and Applications, 10(6), 615-625. https://doi.org/10.1016/j.elerap.2011.07.008
    Long, N. (2001). Development sociology: Actor perspectives. New York, NY: Routledge.
    Langer, M., & König, C. J. (2021). Introducing a multi-stakeholder perspective on opacity, transparency and strategies to reduce opacity in algorithm-based human resource management. Human Resource Management Review. https://doi.org/10.1016/j.hrmr.2021.100881
    Madhavan, P., & Wiegmann, D. A. (2007). Similarities and differences between human–human and human–automation trust: an integrative review. Theoretical Issues in Ergonomics Science, 8(4), 277-301. https://doi.org/10.1080/14639220500337708
    Oldeweme, A., Märtins, J., Westmattelmann, D., & Schewe, G. (2021). The role of transparency, trust, and social influence on uncertainty reduction in times of pandemics: empirical study on the adoption of COVID-19 tracing apps. Journal of Medical Internet Research, 23(2), e25893. https://doi.org/10.2196/25893
    Ore, O., & Sposato, M. (2021). Opportunities and risks of artificial intelligence in recruitment and selection. International Journal of Organizational Analysis. https://doi.org/10.1108/IJOA-07-2020-2291
    Ötting, S. K., & Maier, G. W. (2018). The importance of procedural justice in Human–Machine Interactions: Intelligent systems as new decision agents in organizations. Computers in Human Behavior, 89, 27–39. https://doi.org/10.1016/j.chb.2018.07.022
    Punyatoya, P. (2019). Effects of cognitive and affective trust on online customer behavior. Marketing Intelligence & Planning, 37, 80-96. https://doi.org/10.1108/MIP-02-2018-0058
    Rana, G., & Sharma, R. (2019). Emerging human resource management practices in Industry 4.0. Strategic HR Review. https://doi.org/10.1108/SHR-01-2019-0003
    Robinson, S. C. (2020). Trust, transparency, and openness: How inclusion of cultural values shapes Nordic national public policy strategies for artificial intelligence (AI). Technology in Society, 63, 101421. https://doi.org/10.1016/j.techsoc.2020.101421
    Rousseau, D. M., Sitkin, S. B., Burt, R. S., & Camerer, C. (1998). Not so different after all: A cross-discipline view of trust. Academy of Management Review, 23(3), 393-404. https://doi.org/10.5465/amr.1998.926617
    Samek, W., & Müller, K.-R. (2019). Towards explainable artificial intelligence. In Explainable AI: Interpreting, Explaining and Visualizing Deep Learning (pp. 5-22). Springer, Cham. https://doi.org/10.1007/978-3-030-28954-6_1
    Sivathanu, B., & Pillai, R. (2018). Smart HR 4.0–how industry 4.0 is disrupting HR. Human Resource Management International Digest. https://doi.org/10.1108/HRMID-04-2018-0059
    Suen, H. Y., Chen, M. Y. C., & Lu, S. H. (2019). Does the use of synchrony and artificial intelligence in video interviews affect interview ratings and applicant attitudes? Computers in Human Behavior, 98, 93-101. https://doi.org/10.1016/j.chb.2019.04.012
    Tene, O., & Polonetsky, J. (2014). A theory of creepy: Technology, privacy, and shifting social norms. Yale Journal of Law and Technology, 16(1), 2. Available at SSRN: https://ssrn.com/abstract=2326830
    Tavakol, M., & Dennick, R. (2011). Making sense of Cronbach's alpha. International Journal of Medical Education, 2, 53. https://doi.org/10.5116/ijme.4dfb.8dfd
    Wang, N., Shen, X. L., & Sun, Y. (2013). Transition of electronic word-of-mouth services from web to mobile context: A trust transfer perspective. Decision Support Systems, 54(3), 1394-1403. https://doi.org/10.1016/j.dss.2012.12.015
    Wang, W., Qiu, L., Kim, D., & Benbasat, I. (2016). Effects of rational and social appeals of online recommendation agents on cognition- and affect-based trust. Decision Support Systems, 86, 48-60. https://doi.org/10.1016/j.dss.2016.03.007
    Wang, W., & Siau, K. (2018). Trusting Artificial Intelligence in Healthcare. Medicine, 31(2), 89-99.
    Wortham, R. H., Theodorou, A., & Bryson, J. J. (2016). What does the robot think? Transparency as a fundamental design requirement for intelligent systems. In Proceedings of the IJCAI Workshop on Ethics for Artificial Intelligence, International Joint Conference on Artificial Intelligence.
    Wortham, R. H., & Theodorou, A. (2017). Robot transparency, trust and utility. Connection Science, 29(3), 242-248. https://doi.org/10.1080/09540091.2017.1313816
    Verma, A., Bansal, M., & Verma, J. (2020). Industry 4.0: reshaping the future of HR. Strategic Direction, 36(5), 9-11. https://doi.org/10.1108/SD-12-2019-0235
