Introduction: The Dilemma of Digital Transformation Training
Consider Company A, a mould manufacturer based in Ansan, Gyeonggi Province, with 35 employees and an annual turnover of 8 billion won. Its CEO remarked, “Smart factories? What use is that to a small factory like ours?” The company’s digital transformation awareness score was 1.4 out of 5; the very term ‘digital transformation’ was unfamiliar to it. However, after receiving consultancy training, this company ranked in the top 30% for practical application. How was this possible?
There is also a contrasting example: Company B, an electronic components manufacturer in Incheon. With 120 employees, it had already introduced an MES system and had a dedicated DT department. Its DT awareness score was 3.8. By any measure, it was a company “ready for digital transformation”. Yet, following the training, its level of practical application fell below average. The person in charge commented, “It was all information we already knew.”
The stories of these two companies challenge our common assumptions. Specifically, the assumption that “training is only effective if a company is already prepared for digital transformation”. Is that really the case?
This report was produced to answer that question. We utilised 13 different analytical methods, drawing on 282 survey responses from 219 small and medium-sized manufacturing companies that received digital transformation training through consultancy over a four-year period (2022–2025). Why were 13 methods necessary?
Viewing a complex reality through a single lens inevitably leads to distortion. Correlation analysis reveals “overall trends”, but it cannot reveal “which combination of conditions leads to success”. Latent profile analysis distinguishes “types of companies”, but it does not show “how they change over time”. Therefore, we examined the data from every possible angle, ranging from descriptive statistics and regression analysis to latent profile analysis (LPA), qualitative comparative analysis (QCA), longitudinal analysis, structural topic modelling (STM), network analysis, IPA gap analysis, panel regression and propensity score matching (PSM).
The results included some surprises and some expected findings. There were reassuring outcomes, as well as uncomfortable truths.
For example, the good news that ‘repeated training is effective’ was accompanied by the uncomfortable truth that ‘a significant portion of that effect may be due to selection bias’. Whilst there was the surprising discovery that ‘training can be highly effective even with low DT readiness’, we also confirmed the expected reality that ‘88% of all companies are still at a beginner level in DT’.
This report explains all of this in language that anyone can understand. We have endeavoured to minimise statistical jargon and explain things using analogies and case studies. However, key figures have been retained for the sake of accuracy. At the end of each chapter, the key messages are summarised in the form of blockquotes, so busy readers can grasp the overall narrative simply by reading the final blockquote of each chapter.
Thirteen analytical methods may sound complex, but each is a tool designed to answer a single question. For example: “Is there a correlation between DT readiness and training effectiveness?” (correlation analysis); “How many distinct types of companies are there?” (LPA); “What is the combination of conditions that leads to high training effectiveness?” (QCA); and “Is repeated training truly effective?” (longitudinal analysis + PSM). One method per question, combined with triangulation—verifying whether the results from multiple methods point to the same conclusion—is the core strategy of this study. Here is a sneak peek at the key findings you will discover in this report:
- There are companies with high training effectiveness despite low DT readiness (Chapter 1)
- SMEs fall into three distinct categories, with 88% at a beginner level in DT (Chapter 2)
- Repeated training is effective, but the figures should not be taken at face value (Chapter 3)
- The effect of simply having ‘attended training’ is smaller than expected (Chapter 4)
- Practical training is twice as effective as theoretical training (Chapter 5)
- Companies’ challenges can be categorised into seven types, which are interconnected like dominoes (Chapter 6)
- Technology demands are interconnected like a network, which can be utilised in designing training tracks (Chapter 7)
- There is a six-step action plan that synthesises all these findings (Chapter 8)
Right then, let’s hear what the data has to say.
Chapter 1. Is the saying ‘Training is only effective if you’re ready for DT’ true?
A common-sense assumption: ‘Opportunity comes to those who are prepared’
Think back to what you learnt in maths. To study calculus, you must first understand functions, and to understand functions, you must be able to solve equations. That you need a foundation before tackling advanced material is a fundamental principle of education.
It is natural to apply the same logic to digital transformation (DT) training. “What’s the point of providing smart factory training to companies that don’t even know what DT is? Surely it would be more effective to first raise awareness, ensure the basic infrastructure is in place, and then provide training.” Whether they are policymakers or training managers, most people would think this way.
What the data tells us: “It’s not quite that simple”
However, the data tells a different story. When we measured the correlation between DT readiness (awareness + infrastructure) and training effectiveness (satisfaction, pre-post score difference, and practical application), it ranged from r = 0.06 to 0.18.
Let me explain what this figure means using an analogy. What if the correlation between exam results and height were r = 0.15? Could we say that “taller students perform better in exams”? Statistically, it is not completely zero, but in practical terms, it is correct to view it as having almost no relationship. The relationship between DT readiness and training effectiveness is the same. It is too weak to say there is a relationship, and yet, as it is not exactly zero, it is at that ambiguous level where one cannot say there is ‘none’.
If the assumption that ‘education is effective only when DT readiness is high’ were correct, this correlation would need to be at least r = 0.40 or higher. The fact that r ranges from 0.06 to 0.18 means that this assumption is either incorrect or, at the very least, too simplistic.
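The practical weakness of such correlations is easiest to see as shared variance: even at the upper bound of r = 0.18, readiness would explain only about 3% of the variation in training effectiveness (r² ≈ 0.03). A minimal sketch with synthetic, illustrative data (not the study’s dataset):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 219  # same number of firms as in this study

# Synthetic illustration: the outcome depends only weakly on readiness
readiness = rng.normal(2.5, 0.8, n)                        # 5-point scale
effect = 0.15 * stats.zscore(readiness) + rng.normal(0, 1, n)

r, p = stats.pearsonr(readiness, effect)
print(f"r = {r:.2f}, shared variance r^2 = {r**2:.1%}")
```

With a true correlation of around 0.15, the shared variance stays in the low single digits, which is why the report treats the readiness–effectiveness link as practically negligible.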
Multiple Paths to Success: “There is more than one way to get to Busan”
So, what exactly sets companies that achieve high training effectiveness apart? To answer this question, we used a method called Qualitative Comparative Analysis (QCA). Unlike regression analysis, QCA views data from the perspective that “a combination of multiple conditions produces the result”, rather than “a single cause produces the result”.
Imagine travelling from Seoul to Busan. You could take the KTX, drive a car, fly, or even cycle (though that would take a while).
The destination is the same, but there are many routes. QCA excels at identifying precisely this kind of “equifinality”.
The analysis revealed eight sufficient pathways leading to high educational effectiveness.
Pathway A: “Organisational Support”
~fDT * ~fSS * dept * edu_exp
Companies with low DT awareness and low smart system levels, but which have a dedicated DT department and prior experience of DT training
This describes companies such as Company A, introduced earlier. Although the company’s digital transformation level itself was low, the organisation had a system in place to support training. The dedicated department encouraged participation in training, and prior training experience served as a foundation for learning, thereby enhancing the effectiveness of the new training.
It is similar to a student who is not very good at English but whose grades improve when their parents pay for private tutoring and check their homework every day.
Pathway B: “Self-Prepared”
fDT * fSS * ~dept * ~edu_exp
Companies with high awareness of DT and smart systems in place, but without a dedicated department or prior training experience
Such companies have already undertaken a significant portion of their digital transformation independently. Even without a dedicated department, the CEO or on-site managers already understand and are practising DT. Training serves to add structure to the knowledge they already possess.
It is the same principle as when someone who has taught themselves programming takes a formal course; they progress much faster than someone learning from scratch.
The remaining pathways: diverse recipes for success
Pathways 1 and 2 are variations of the ‘organisation-driven’ type, whilst pathways 3 to 5 are variations of the ‘self-prepared’ type. Pathways 6 and 7 blend the two. Pathway 8 is a niche pathway under very specific conditions; although its coverage (covS = 0.028) is small, its consistency is high.
It is worth noting that fSZ (company size) appears in several pathways. In Pathway 2, large companies compensate for a lack of DT awareness, whilst in Pathways 5, 6 and 7, small companies (~fSZ) achieve success in combination with other conditions. Company size does not simply mean “the bigger, the better”; rather, its role varies depending on the combination with other conditions.
Details of the 8 Pathways
| # | Pathway Conditions | Consistency (inclS) | Coverage (covS) | Interpretation |
|---|---|---|---|---|
| 1 | ~fDT * ~fSS * dept * edu_exp | 0.753 | 0.126 | Low DT readiness but possesses organisational support (department + educational experience) |
| 2 | ~fDT * fSZ * dept * edu_exp | 0.758 | 0.090 | Low DT awareness, but compensated by company size and organisational support |
| 3 | fDT * fSS * ~dept * ~edu_exp | 0.763 | 0.171 | High DT readiness leads to high effectiveness even without organisational support |
| 4 | fDT * fED * ~dept * ~edu_exp | 0.785 | 0.150 | Organisational support unnecessary if DT awareness and training level are high |
| 5 | fDT * ~fSZ * ~dept * ~edu_exp | 0.753 | 0.169 | Autonomous learning pathway for small firms with high DT awareness |
| 6 | fED * ~fSZ * dept * edu_exp | 0.793 | 0.107 | Path for small enterprises combining training level and organisational support |
| 7 | ~fDT * fSS * fED * ~fSZ * ~edu_exp | 0.758 | 0.195 | Smart systems and education level compensate for lack of DT awareness |
| 8 | ~fDT * fSS * ~fSZ * dept * ~edu_exp | 0.776 | 0.028 | Niche pathway combining smart systems and departmental factors |
A key point to note in this table is that technical readiness (fDT, fSS) and organisational readiness (dept, edu_exp) complement each other symmetrically. If one of these is strong, high educational effectiveness can be achieved even if the other is weak. Much like warp and weft, it is a structure where if one side is weak, the other compensates for it.
Interpretation of these results: ‘Scope and reliability of the recipes’
Understanding the two key QCA indicators, Coverage and Consistency, allows for a deeper interpretation of these results. Looking at the combined statistics for all eight pathways:
- Coverage = 0.431: These eight pathways account for approximately 43% of cases with ‘high educational effectiveness’. To use a culinary analogy, there are many recipes for making delicious dishes, and the eight recipes we have identified account for 43% of all delicious dishes. The remaining 57% will be explained by other combinations of conditions that we have not yet measured (e.g. management commitment, trainer competence, industry characteristics, etc.).
- Consistency = 0.736: Approximately 74% of companies following this pathway actually demonstrated high training effectiveness. Whilst not perfect, this is a sufficiently significant level.
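These two indicators have simple definitions in fuzzy-set QCA: consistency is the share of a pathway’s membership that also sits inside the outcome set, and coverage is the share of the outcome accounted for by the pathway. A toy computation with made-up membership scores (not the study’s data):

```python
import numpy as np

# Illustrative fuzzy-set memberships: X = membership in a pathway
# (a combination of conditions), Y = membership in "high effectiveness"
X = np.array([0.9, 0.8, 0.7, 0.2, 0.1, 0.6])
Y = np.array([0.8, 0.9, 0.6, 0.3, 0.2, 0.7])

def consistency(x, y):
    """Share of pathway membership that also lies inside the outcome (X => Y)."""
    return np.minimum(x, y).sum() / x.sum()

def coverage(x, y):
    """Share of the outcome accounted for by this pathway."""
    return np.minimum(x, y).sum() / y.sum()

print(f"consistency = {consistency(X, Y):.3f}")
print(f"coverage    = {coverage(X, Y):.3f}")
```

A pathway can therefore be highly reliable (high consistency) yet explain only a small slice of all successful cases (low coverage), exactly the profile of Pathway 8 above.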
Organisational Environment (DV2) Results: ‘A More Appropriate Measurement Tool’
We performed the same analysis once more, changing only the dependent variable. When ‘organisational environment’ was used as the outcome variable instead of the ‘difference before and after Q3’, the overall coverage rose significantly to 0.716. This means that the same combination of conditions explains changes in the organisational environment much better.
Why is this the case? The difference before and after Q3 measures “how much answers to specific questions changed before and after the training”, which is closer to a short-term, individual reaction. In contrast, the organisational environment measures “whether the environment has become conducive to utilising the training content at an organisational level”, reflecting more structural and sustained changes. It is, therefore, a more suitable variable for capturing the influence of firm characteristics (DT readiness, organisational support).
Policy Implications: Do Not Give Up
These results carry significant policy implications. If a policy were adopted to “focus training on companies with high DT readiness”, it would benefit only the 12.3% of companies already at a high DT level (the P2 group introduced in Chapter 2). It would be a case of ‘the rich getting richer and the poor getting poorer’. The QCA results suggest the opposite. Even for companies with low DT readiness, requiring them to designate a dedicated department and accumulate training experience opens a pathway to high training effectiveness.
To use an analogy, it makes no sense to tell someone who has come to learn to swim, “Only those who can already swim a little should take lessons.” It is precisely those who cannot swim at all who need lessons the most. They simply require support equipment (organisational support).
Key Messages of This Chapter
Do not exclude companies with low DT readiness from training programmes. Once an organisational support system (dedicated department, prior training experience) is in place, high training effectiveness can be achieved even with low DT readiness. There is more than one path to success.
Chapter 2. SMEs Can Be Divided into Three Types
A closer look at 219 companies: “Not all SMEs are the same”
Schools have various types of pupils: those who enjoy studying and excel at it, those who do not particularly enjoy it but manage reasonably well, and those who have not even started yet. The same applies to companies. When data on DT awareness, smart system levels and training levels from 219 companies is fed into Latent Profile Analysis (LPA), three distinct types naturally emerge.
Put simply, LPA is an analytical method that “automatically groups companies with similar characteristics together”. It is not the researcher who specifies “divide into three”, but the data itself that indicates “three is the most natural”. The entropy value indicating the accuracy of this classification is 0.846, and a value of 0.8 or higher is considered a ‘good classification’. Cross-validation using the mclust package also confirmed that the 3-class solution was the most suitable.
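The same model-selection logic can be reproduced in Python with a Gaussian mixture model, the method underlying LPA. The data below is synthetic, shaped loosely like the three profiles described in this chapter; it illustrates how “the data itself indicates three” rather than re-analysing the study:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(7)
# Three synthetic clusters of firms, scored on three 5-point items
data = np.vstack([
    rng.normal([1.6, 1.5, 1.4], 0.25, (115, 3)),   # low-DT-like firms
    rng.normal([3.5, 3.4, 3.3], 0.25, (34, 3)),    # high-DT-like firms
    rng.normal([2.2, 2.4, 2.3], 0.25, (128, 3)),   # mid-DT-like firms
])

# Let the data choose the number of profiles via BIC (lower is better)
bic = {k: GaussianMixture(k, n_init=5, random_state=0).fit(data).bic(data)
       for k in range(1, 6)}
best_k = min(bic, key=bic.get)
print(f"best number of profiles by BIC: {best_k}")
```

With clusters this well separated, a two-class model is penalised for merging distinct groups and a four-class model for overfitting, so the information criterion bottoms out at three.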
Three Profiles
| Profile | Percentage | n | DT Awareness Level | Characteristics |
|---|---|---|---|---|
| P1: Low DT | 41.5% | 115 | ~1.6 points | Lack of both DT awareness and infrastructure |
| P2: High DT | 12.3% | 34 | ~3.5 points | DT transition underway, infrastructure secured |
| P3: Medium DT | 46.2% | 128 | ~2.2 points | Awareness exists but infrastructure is lacking |
Let’s hear each profile voiced by a fictional company.
Company P1 (Low DT Level, 41.5%):
‘To be honest, the term ‘digital transformation’ doesn’t really resonate with me. Our factory has been operating like this for 30 years and is running smoothly. Computers are only used in the office for accounting; we don’t need them on the production floor.’
Company P2 (High DT Level, 12.3%):
‘We introduced an MES system two years ago and are now expanding data-driven decision-making. I was hoping this training would cover advanced topics like AI-based quality prediction or digital twins, but I was disappointed that there was so much basic content.’
Company P3 (Medium DT Level, 46.2%):
‘I understand that a smart factory is necessary, but I don’t know where to start or how to go about it. We did install one piece of equipment using government grants, but as there’s no one to operate it, it’s just gathering dust.’
The uncomfortable reality revealed by the figures
The most striking figure in this table is 88%. If we combine P1 (41.5%) and P3 (46.2%), almost 90% of all companies are clustered at a DT awareness score of 2.2 or below. A score of 2.2 out of 5 is less than half.
This is akin to running an English language course where 88% of the students barely know the alphabet. If you were to open an ‘Advanced English Conversation’ class in this situation, it would be a waste of time for most students.
Conversely, the proportion of companies at a high level of DT (P2) is a mere 12.3%. These companies are already capable of standing on their own to some extent. The focus of training must necessarily be different.
What Entropy = 0.846 means
For readers interested in statistics, entropy is a metric representing classification accuracy, taking values between 0 and 1. The closer it is to 1, the more clearly each company belongs to a single profile; the closer it is to 0, the more ambiguous the boundaries are.
A value of 0.846 indicates high classification accuracy. In other words, these three types are not merely “artificial distinctions created by the researcher”, but **distinct clusters that actually exist within the data**. Cross-validation using the mclust package also confirmed that a 3-class model is optimal. A 2-class model causes important information to be lost, whilst a 4-class model carries the risk of overfitting.
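For the curious, this normalised (relative) entropy can be computed directly from the posterior class-membership probabilities. A minimal sketch, with made-up posterior matrices rather than the study’s actual output:

```python
import numpy as np

def relative_entropy(post):
    """Normalised entropy (0-1) from an n x K matrix of posterior
    class-membership probabilities; 1 = perfectly crisp assignment."""
    n, k = post.shape
    p = np.clip(post, 1e-12, 1.0)            # guard against log(0)
    raw = -(p * np.log(p)).sum()
    return 1.0 - raw / (n * np.log(k))

# Illustrative posteriors for four firms over three profiles
crisp = np.array([[0.98, 0.01, 0.01],
                  [0.01, 0.98, 0.01],
                  [0.01, 0.01, 0.98],
                  [0.95, 0.03, 0.02]])
fuzzy = np.full((4, 3), 1 / 3)               # maximal ambiguity

print(relative_entropy(crisp))   # high: firms clearly belong to one profile
print(relative_entropy(fuzzy))   # near zero: boundaries are ambiguous
```

By this yardstick, the study’s value of 0.846 sits comfortably in the “crisp assignment” range.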
In-depth comparison of characteristics by profile
The three profiles differ from one another in more than just their DT recognition scores.
Typical characteristics of P1 (low-level) companies: Around 30 employees; the CEO makes all decisions; computerisation is limited to accounting software; production management is manual or via Excel. Upon hearing the term “digital transformation”, they immediately perceive it as “something that costs a lot of money”.
Typical characteristics of a P2 (High-Level) company: 100 or more employees, or a technology-intensive small business; extensive experience in driving digital transformation; currently operating MES or ERP systems; attempting data-driven decision-making. Expectations are high as much of the training content is “already known” to them.
Typical characteristics of P3 (intermediate level) companies: 50–80 employees; recognise the need for DT but do not know where to start; have introduced one or two pieces of equipment through government support schemes but utilisation is low. They are in a state of “willingness but lack of capability”.
Proposal: Differentiated training by type
The implications of this analysis are clear. A ‘one-size-fits-all’ approach of providing the same training to all companies is ineffective. The level, content and method of training must be tailored to the type of company. Just as schools organise classes by ability, corporate training also requires a tiered approach. Specific measures are discussed in Chapter 8.
Chapter 3. Repeated training is effective, but…
Good News: Improvement Demonstrated by the Figures
Of the 282 responses, 47 companies participated in consultancy training on two or more occasions. Some companies participated for the first time in 2022 and returned in 2025, whilst others participated every year without fail. Comparing these companies’ first and final participations:
- Training Level: +0.77 points improvement (p < .001, d = 0.60, moderate effect size)
- Combined DT Perception: +0.38 points improvement (p < .001, d = 0.54)
An effect size (d) of 0.60 can be illustrated using a gym analogy. When comparing someone who has exercised consistently for three months with someone who has not, the exerciser shows **a level of fitness that places them above approximately 73% of those who did not exercise**. Whilst one cannot say they have ‘become a completely different person’, it is a level where one can say they have ‘definitely improved’. A score change of +0.77 also represents an improvement of over 15% on a 5-point scale, making it an educationally significant change.
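The “73%” figure follows directly from the normal distribution: under normality, an effect size d places the average treated case at the Φ(d) percentile of the comparison group (the statistic known as Cohen’s U3). A quick check using only the standard library:

```python
from statistics import NormalDist

def d_to_percentile(d):
    """Cohen's U3: the percentile of the comparison-group distribution at
    which the average treated case sits, assuming normal distributions."""
    return NormalDist().cdf(d)

print(f"d = 0.60 -> {d_to_percentile(0.60):.1%}")   # roughly 73%
print(f"d = 0.54 -> {d_to_percentile(0.54):.1%}")
```

The same conversion applied to the DT-perception effect (d = 0.54) gives a slightly smaller but comparable percentile.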
The Pitfall of Regression to the Mean
There is another point to note. The results of the baseline regression analysis showed a strong regression to the mean effect, with R² = 0.275 and beta = -0.82. What does this mean?
If a student who scored 20 marks in the first exam scored 50 marks in the second exam, one might be pleased to have improved by 30 marks. However, part of the reason for the initial score of 20 could be that the student was ‘not feeling well or simply had a fluke’. The rise in the second exam score may not reflect an improvement in ability, but rather a return to the student’s original level of ability. This is regression to the mean.
A beta of -0.82 means that for every 1-point decrease in the initial score, the change is 0.82 points greater. Companies with lower initial levels show greater changes, but a significant portion of this is a statistical artefact.
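The artefact is easy to reproduce in a simulation: give every firm a fixed “true” level, add measurement noise, and the firms that looked weakest at the first measurement will appear to “improve” at the second even though nothing changed. All numbers below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
ability = rng.normal(3.0, 0.5, n)             # stable "true" level, 5-pt scale
score1 = ability + rng.normal(0, 0.5, n)      # first measurement (noisy)
score2 = ability + rng.normal(0, 0.5, n)      # second measurement, NO training

low = score1 < 2.0                            # firms that looked weakest at t1
gain = (score2[low] - score1[low]).mean()
print(f"mean 'improvement' with zero real change: +{gain:.2f}")
```

The low scorers’ apparent gain is substantially positive purely because part of their low first score was bad luck in measurement, which is exactly why a raw before-and-after difference overstates the training effect.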
Trajectory Types: ‘Not all companies change in the same way’
The mean (+0.77) is the ‘representative value’ for all companies, but it hides the stories of individual firms. Upon closely examining the change trajectories of the 47 companies that participated multiple times, four main types are identified:
- Simultaneous Improvement Type: Companies where awareness of DT and training levels rise together. This is the most ideal pattern.
- Training-Led Type: Companies where training levels rise first, followed by awareness of DT. This is the “I understood it once I actually tried it” pattern.
- Awareness-Leading Type: Companies where awareness of DT rises first, whilst the actual level of training lags behind. This is the “I understand the need, but putting it into practice is another matter…” pattern.
- Stagnant Type: Companies that show no meaningful change despite repeated participation. This may indicate merely token participation, or that the training content was not suited to the company’s circumstances.
These trajectory types have important policy implications. The Simultaneous Improvement and Training-Led types are companies where training is functioning effectively. The Awareness-Leading type comprises companies that require additional practical support. The Stagnant type comprises companies for which the form or content of the training itself needs re-examination. Providing a single training programme identically to everyone and judging it dichotomously as ‘effective’ or ‘ineffective’ ignores these diverse trajectories.
It is akin to prescribing the same medicine: some patients improve within a week, some take three months, and for others, it has no effect. If we judge the medicine’s effectiveness solely by the average, we see only half the truth.
So, what is the pure effect of training?
It is difficult to state an exact figure, but a rough estimate is possible. If we remove selection bias and regression to the mean from the total improvement (+0.77), the pure educational effect is estimated to be around +0.3 to +0.4. This is still a significant magnitude. However, we must recognise that this is roughly half of the apparent figure of +0.77.
This is similar to measuring the effectiveness of diet pills. Even if it is true that ‘I lost 5 kg after taking the pills’, people who take them often combine this with dietary control and exercise. The pure effect of the pills themselves may be smaller than 5 kg. However, the fact that there is an effect cannot be denied.
Conclusion of this chapter
The effects of repeated training are clearly present. However, taking the simple before-and-after comparison figure (+0.77) at face value leads to an overestimation. Taking selection bias and regression to the mean into account, the net effect of the training is likely to be smaller than this. Nevertheless, the fact that there is an effect is significant in itself. From a policy perspective, the appropriate stance is to “encourage repeated training, but not to exaggerate its effects”.
Chapter 4. The true meaning of “having received DT training”
Surface-level results: Is prior experience better?
When comparing companies that answered “Yes” to the question “Have you previously received DT-related training?” with those that answered “No”, an interesting difference emerges.
At first glance, companies with prior experience of DT-related training show significantly higher satisfaction levels (p = .007, d = 0.34) than those receiving training for the first time. An effect size of d = 0.34 is classified as ‘small to medium’, indicating a meaningful difference. One might draw the intuitive conclusion that ‘experience enhances effectiveness’. This appears to follow the same logic as someone who has swum before learning more quickly in swimming lessons.
When selection bias is removed
However, the same question arises here as in Chapter 3. Are companies that have previously undergone DT training not simply companies that were already highly interested in and proactive about training? This is akin to finding it difficult to conclude that “private school education is better” simply because private school pupils achieve higher grades than those in state schools. After all, private schools tend to have many pupils from families that can afford to invest in education.
When we apply PSM to match companies with similar characteristics and then compare them, the p-value rises from 0.018 to 0.083. As p < 0.05 is generally considered statistically significant, 0.083 is on the borderline. In other words, the effect weakens when selection bias is controlled for.
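The logic of PSM can be sketched in a few lines: model the probability of “treatment” (here, prior training) from observed covariates, match each treated firm to the untreated firm with the nearest score, and compare outcomes within the matched pairs. Everything below is synthetic and illustrative; the covariates and coefficients are assumptions, not the study’s model:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)
n = 300
X = rng.normal(size=(n, 2))                  # e.g. firm size, DT engagement
# Selection bias: engaged firms choose training more often
treated = (rng.random(n) < 1 / (1 + np.exp(-X.sum(axis=1)))).astype(int)
# Outcome driven mostly by engagement; the true training effect is only +0.2
y = 1.0 * X[:, 1] + 0.2 * treated + rng.normal(0, 0.5, n)

# 1) Propensity scores from a logistic regression
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# 2) Match each treated firm to its nearest untreated neighbour on the score
controls = ps[treated == 0].reshape(-1, 1)
nn = NearestNeighbors(n_neighbors=1).fit(controls)
_, idx = nn.kneighbors(ps[treated == 1].reshape(-1, 1))

naive = y[treated == 1].mean() - y[treated == 0].mean()
att = (y[treated == 1] - y[treated == 0][idx.ravel()]).mean()
print(f"naive difference: {naive:+.2f}, matched estimate: {att:+.2f}")
```

The naive difference absorbs the selection effect, while the matched estimate shrinks toward the small true effect; this is the same direction of shrinkage the report observes when the p-value rises from 0.018 to 0.083 after matching.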
Unexpected results of SF adoption: ‘Did/Did not’ is insufficient
The results of analysing the effect of Smart Factory (SF) adoption using PSM are even more dramatic. The results were non-significant both before and after matching, with an effect size of d < 0.11. This implies that the binary distinction of “adopted a Smart Factory or not” is virtually unrelated to the educational effect.
Binary vs Continuous Variables: The Importance of Measurement Precision
However, there is an interesting twist. Whilst “whether SF was adopted” (yes/no) was unrelated to the training effect, the “aggregate smart system score” (a continuous score across seven domains) showed a strongly significant relationship (p = .002) with the training effect.
Why is there such a difference? Among the companies that answered “implemented”, some simply installed a single piece of equipment and claimed to have “implemented” it, whilst others operate smart systems across their entire production process. “Did/Did not” cannot capture this difference. It is similar to how “hours of exercise per week” predicts health status better than “exercises/does not exercise”.
Ceiling Effect and Floor Effect
Another phenomenon to note during the analysis is the ceiling effect and the floor effect.
- Ceiling Effect: The average satisfaction score is 4.66 out of 5. Almost all companies responded that they were “satisfied”. With such a high average, it is difficult to distinguish differences between companies. It is akin to a test being so easy that everyone scores 95% or higher, making it impossible to distinguish who performed better.
- Floor effect: Conversely, 88% of the total DT perception scores are clustered at 2.2 points or below, making it difficult to distinguish differences at the lower end.
Due to these ceiling and floor effects, using the difference before and after Q3 or the organisational environment as the primary outcome variables rather than satisfaction enables a more accurate analysis.
Further evidence from panel regression
These results are consistent with the T8-1 panel regression analysis. In the panel regression, DT awareness (the cognitive dimension) was only marginally significant (p = .073) as a predictor of the educational effect, whereas the total smart system score (the practical dimension) was highly significant at p = .002.
The implication is clear. ‘Doing’ is more important than ‘knowing’. The extent to which smart systems are actually implemented and operated (practice) is a stronger predictor of the practical application of training than simply knowing what DT is (awareness). This result also converges with Path B (including the fSS condition) in Chapter 1’s QCA analysis.
Key Message of This Chapter
Whilst it is true that “having training experience” enhances training effectiveness, a significant portion of this effect stems from selection bias. Furthermore, measuring levels on a continuum enables far more accurate predictions than binary measures such as “implemented/not implemented”. The “level of implementation” determines training effectiveness more than “awareness”.
Chapter 5. What SMEs Really Need
The Gap Between Training Needs and Reality: ‘The Thirsty Deer’
Just as water is most urgently needed when one is thirsty, training must be concentrated where it is most needed to be effective. So, where is the greatest need for digital transformation training in SMEs?
**IPA (Importance-Performance Analysis) gap analysis results:**
- Training Need (Importance): 4.36 points (out of 5)
- Training Level (Performance): 2.52 points
- Gap: 1.84 points
A gap of 1.84 points on a 5-point scale is very large. Companies feel that “training is urgently needed” (4.36 points), yet the actual level of training falls short of even the mid-range (2.52 points). This corresponds to Q2, the ‘Focus on Improvement’ quadrant in the IPA matrix, indicating that this is the area requiring the most urgent improvement.
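IPA places each item on a two-by-two grid by comparing its importance and performance against the scale midpoint. A minimal sketch; the aggregate pair 4.36/2.52 comes from this report, while the remaining items and the midpoint of 3.0 are illustrative assumptions:

```python
# Illustrative IPA classification on a 5-point scale
items = {
    # name: (importance, performance); the first entry uses the report's
    # aggregate figures, the other two are hypothetical
    "DT training overall":  (4.36, 2.52),
    "equipment automation": (4.10, 2.45),
    "office automation":    (3.00, 3.80),
}

def ipa_quadrant(importance, performance, midpoint=3.0):
    """Map an (importance, performance) pair to its IPA quadrant."""
    if importance >= midpoint and performance < midpoint:
        return "Q2: focus on improvement"    # high need, low level
    if importance >= midpoint:
        return "Q1: keep up the good work"
    if performance >= midpoint:
        return "Q4: possible overkill"
    return "Q3: low priority"

for name, (imp, perf) in items.items():
    print(f"{name}: gap={imp - perf:+.2f} -> {ipa_quadrant(imp, perf)}")
```

By this mapping, the report’s aggregate pair lands squarely in Q2, the ‘Focus on Improvement’ quadrant mentioned above.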
Among the seven Smart System domains, equipment automation recorded a score of 2.45, remaining the lowest. As equipment automation is the most fundamental area of digital transformation on the manufacturing floor, the fact that this score is the lowest implies that many companies have not even reached the starting line yet.
Year-on-Year Trends Across the 7 Areas: ‘Rapid Growth, Then Stagnation’
An examination of the year-on-year trends reveals an interesting pattern. Between 2022 and 2023, training levels rose significantly across most areas. This coincides with the period when consulting services began to expand in earnest. However, since 2023, progress has stagnated or seen only a slight increase. This is interpreted as indicating that, having harvested the ‘low-hanging fruit’ in the early stages, organisations are now struggling to drive deeper change. This signals a need to refine training programmes.
What kind of training is actually effective: ‘Cookbook vs. hands-on cooking’
Consulting firms offer various types of training courses. Comparing the difference in scores before and after Q3 by course type:
| Course Type | Difference Before/After Q3 | Characteristics |
|---|---|---|
| MES Practical | +1.93 | Field-oriented, hands-on |
| Smart Factory Practical | +1.71 | Case-based, implementation practice |
| Data Utilisation | +1.55 | Data collection/analysis practice |
| Process Improvement | +1.42 | On-site improvement project-based |
| DT Strategy | +1.28 | Formulation of management strategy |
| Introduction to DT | +1.03 | Theory-focused, introduction to concepts |
The MES Practical Course (+1.93) is approximately twice as effective as the Introduction to DT (+1.03).
This is similar to the difference when learning to cook. It is the difference between learning theory by reading cookbooks (Introduction to DT) and actually picking up a knife, preparing ingredients and trying to cook (MES Practical Course). Reading ten cookbooks will not improve your knife skills. You must try it yourself to improve.
Rethinking the Meaning of the Gap
Let us express the gap of 1.84 points in a different way. The perceived need for training stands at 87% (4.36/5.0), yet the actual level of training is only 50% (2.52/5.0). We are failing to meet even half of the need. It is akin to a state of chronic dehydration, where one needs 2 litres of water a day but only drinks 1 litre.
It is particularly noteworthy that equipment automation (2.45 points) consistently ranks lowest. Equipment automation is the most fundamental stage of a smart factory. The fact that this ranks lowest implies a lack of foundational capability for digital transformation. Before discussing advanced data analysis or AI applications, we must first establish the infrastructure to automatically collect data from equipment.
Implications of Effectiveness by Course Type
The approximately twofold difference between MES Practical Training (+1.93) and Introduction to DT (+1.03) carries significance beyond mere numbers. It bears directly on the efficiency of training investment: if, for the same investment of time and money, effectiveness varies by a factor of two depending on the content and method of training, the importance of curriculum design cannot be overstated.
However, MES Practical Training is not suitable for all companies.
If the MES Practical course is taught directly to P1 (low-level) companies, they may struggle to keep up owing to a lack of foundational knowledge. Practical training delivers its full effect only when combined with the differentiated training by type discussed in Chapter 2.
Key Message of This Chapter
The gap between the perceived need for training and reality is very large (1.84 points). To bridge this gap, practical, hands-on training is required rather than theoretical instruction. Teaching “how to apply MES on the shop floor” is twice as effective as explaining “why digital transformation is necessary”. However, the practical training must be tailored to the company’s level.
Chapter 6. Seven Challenges Cited by SMEs
Numbers alone cannot fully capture the real concerns of companies. Therefore, we asked companies to freely write down the “challenges they face during the digital transformation process”. This is text data written in their own words, without any constraints. When these hundreds of text responses are analysed using a Structural Topic Model (STM), seven themes (topics) naturally emerge.
Put simply, STM is ‘a process where a computer automatically performs the task of reading hundreds of texts and summarising common themes’. However, unlike humans, it analyses all texts without bias and using the same criteria. It is akin to extracting seven common concerns from hundreds of letters. Why seven, specifically? After testing a range of topic counts from five to nine, seven proved to offer the most meaningful and interpretable structure.
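The topic-count screening described above (testing five to nine topics) can be illustrated in miniature. STM itself is typically fit with R’s `stm` package; the sketch below substitutes plain LDA from scikit-learn as a rough analogue, on a tiny corpus invented for illustration, comparing perplexity across candidate topic counts:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Tiny invented corpus standing in for the hundreds of free-text responses.
docs = [
    "mould conditions vary and rely on skilled technicians",
    "no data collection systems production logs kept by hand",
    "no specialist staff willing to join a small firm",
    "older workers resist entering data on tablets",
    "vendors recommend different smart factory solutions",
    "automating the inspection process is far too costly",
    "training focuses on large corporation case studies",
] * 10  # repeated so the model has enough observations to fit

X = CountVectorizer(stop_words="english").fit_transform(docs)

# Screen candidate topic counts 5..9, as the report describes
perplexity = {}
for k in range(5, 10):
    lda = LatentDirichletAllocation(n_components=k, random_state=0).fit(X)
    perplexity[k] = lda.perplexity(X)

# Lower perplexity suggests a better statistical fit; the report's final
# choice of seven also weighed interpretability, which no metric captures.
print(perplexity)
```

Statistical fit alone rarely settles the choice; as the report notes, seven was selected because it was also the most interpretable structure.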
1. Injection Moulding/Moulds/Machining – Industry-Specific Technical Barriers
“The injection mould manufacturing process is so complex that I don’t know how to capture the data. The conditions vary for each mould, and much of it relies on the intuition of skilled technicians.”
Moulds, injection moulding and precision machining face technical barriers specific to these sectors. The field relies heavily on the experience and intuition of skilled technicians, making that knowledge fundamentally difficult to digitise and systematise.
2. Data/Infrastructure – Lack of Digital Foundations
“Even if we wanted to collect data, we don’t have the systems in place. We’re still keeping production logs by hand. Our network infrastructure is poor, and we don’t have the space to house a server.”
Many companies lack the data collection infrastructure, which is the very foundation of digital transformation. Just as groundwork is required before constructing a building, an environment capable of collecting data must be established before discussing smart factories.
3. Specialist Staff – A Shortage of Personnel
“Even if we want to pursue DT, we lack the relevant personnel. When we try to recruit, there are no specialists willing to join SMEs, and training existing staff is a burden in terms of both time and cost.”
While large corporations have the resources to set up dedicated DT teams, the reality for SMEs is that one person has to juggle multiple roles. Expecting someone to manage production whilst also driving DT is an unreasonable demand.
4. Worker Adaptation – Resistance on the Shop Floor
“When we introduce a new system, workers on the shop floor resist. Older workers, in particular, ask, ‘We’ve always done it this way, so why change?’ It takes weeks just to learn how to use a tablet.”
This is not a technological issue, but a human issue. It is only natural for someone who has worked in the same way for 30 years to resist being suddenly told to input data using a tablet. This is both a training issue and a change management issue.
5. Smart Factory Implementation – Difficulties Inherent in the Process
‘We want to introduce a smart factory, but we don’t know where to start. Different vendors recommend different solutions, costs vary wildly, and we can’t judge which one is right for our factory.’
The complexity of the smart factory implementation process itself is the barrier. There is a significant information asymmetry, and decision-making is delayed because the financial losses from making the wrong choice are substantial.
6. Quality Control – The Challenges of Automation
‘Managing defect rates is our biggest headache. We’d like to monitor quality data in real time, but automating the inspection process is far too costly.’
Quality control is central to manufacturing, yet automating it and transitioning to a data-driven approach is no easy feat. For small and medium-sized enterprises (SMEs) in particular, the investment costs for inspection equipment are a significant burden.
7. Training System — Dissatisfaction with the Training Itself
“The training is far too general. We need specific training tailored to our industry and our scale, but most courses focus on case studies from large corporations. The theory is sound, but there is a lack of content that can be applied directly on the shop floor.”
This reflects dissatisfaction with the training itself. There are many complaints that training lacks practical relevance. This result aligns precisely with the finding in Chapter 5 regarding the “superior effectiveness of practical training (MES practical training +1.93 vs DT overview +1.03)”. The concerns expressed by companies in their written feedback are converging with the results of the numerical analysis.
The Interconnected Structure of the 7 Challenges
These seven challenges are not independent of one another. Without specialist personnel (3), it is difficult to decide on the introduction of a smart factory (5); even if introduced, without data infrastructure (2), it is difficult to achieve results; and because on-site workers cannot adapt (4), the automation of quality control (6) is also delayed. Ultimately, the training system (7) must resolve all of these issues, yet there is a lack of industry-specific (1) content.
It is like a domino effect. If one challenge is resolved, the others may be alleviated as well; however, if one is blocked, the whole process comes to a standstill. For example:
- Securing specialist personnel (3) -> speeds up the decision to adopt smart factories (5) -> enables the construction of data infrastructure (2)
- If worker adaptation (4) training is carried out in parallel -> quality control automation (6) begins to function on the shop floor
- If industry-specific (1) know-how accumulates during this process -> the education and training system (7) itself becomes more sophisticated
Ultimately, rather than tackling these seven challenges one by one, it is more effective to understand the interconnections and approach them in a strategic sequence. The implementation strategies in Chapter 8 have been designed with this interconnected structure in mind.
Effects of 11 covariates: Which companies report which challenges more frequently
The strength of STM lies not merely in identifying topics, but in the ability to analyse which companies with specific characteristics discuss certain topics more frequently. We systematically screened 11 covariates, including year, DT awareness, training level, smart systems, presence of a DT department, DT training experience, corporate status, SF adoption, satisfaction, differences before and after Q3, and organisational environment.
Key findings:
Companies with high DT awareness mentioned the ‘Smart’ topic (Topic 5) more frequently. Although this may seem paradoxical, it means that the better a company understands DT, the more accurately it recognises the specific difficulties involved in implementing smart factories. If you do not know, you do not realise the difficulty; the more you know, the deeper your concerns become.
Companies with a dedicated DT department mentioned the ‘training’ topic (Topic 7) more frequently. The existence of such a department implies that DT is being driven organisationally, and consequently, that the need for training is recognised at an organisational level. This is also consistent with QCA Path A (dept condition) in Chapter 1.
Companies with a high level of training saw a decrease in the “training” topic, whilst “defects” and “data” topics increased. This implies that the more training received, the more basic training-related grievances are resolved; however, in their place, companies become aware of more advanced challenges (such as data quality and automated defect detection).
In companies with a high level of smart systems, the “injection moulding” topics decreased whilst “data” topics increased. Whilst basic manufacturing barriers have been resolved to some extent, new challenges related to data sophistication are emerging. This is triangulated by the result from the panel regression that smart systems are a strong predictor of training effectiveness (p = .002).
Taking these covariate effects together, we can see that the nature of the difficulties cited changes as a company’s DT maturity increases. Initially, companies cite industry-specific technical barriers such as “injection moulding/moulds”, whilst at the intermediate stage they cite difficulties with the “introduction of smart factories” itself, and at the advanced stage they cite difficulties with “data quality” and “advanced analytics”. Training must also evolve in line with these maturity stages.
Key Message of This Chapter
The challenges faced by companies are structured around seven themes, which are interconnected. As the nature of the challenges cited varies according to a company’s DT maturity, training must provide different content for each maturity stage.
Chapter 7. How Are Technology Needs Interconnected?
Chapter 6 analysed the difficulties companies “express” through text. This chapter analyses the technologies companies “require” using a network approach. Whilst text analysis is a qualitative approach, network analysis is quantitative. If the results of these two analyses converge, this constitutes very strong evidence.
The key insight is this: the technologies that companies require do not exist in isolation, but are interconnected.
An 18-node network, density 0.719
When companies’ technology needs are represented as a network, a structure comprising 18 technology domains (nodes) that are closely interconnected emerges. A network density of 0.719 means that approximately 72% of all possible connections actually exist. To use an analogy, it is as if, in a room with 18 people, 72% of all possible handshakes have actually taken place. It is as though almost everyone has shaken hands with almost everyone else.
This implies that technology demand is highly interdependent. Few companies say, “We only need equipment automation.” Companies requiring equipment automation typically also need process monitoring, data management and automated quality control. Technology demand exists not in isolation but as a package.
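The density figure follows directly from the usual formula for an undirected network: realised edges divided by possible edges. A minimal sketch (the edge count of roughly 110 is inferred from the reported density, not stated in the report):

```python
def network_density(n_nodes: int, n_edges: int) -> float:
    """Density of an undirected network: realised edges / possible edges."""
    possible = n_nodes * (n_nodes - 1) // 2  # every unordered node pair
    return n_edges / possible

# With 18 technology domains there are 18*17/2 = 153 possible pairs,
# so a density of 0.719 implies roughly 110 realised connections.
print(round(network_density(18, 110), 3))  # → 0.719
```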
Comparison of 4 Network Types: “Capturing the Same Scene with 4 Cameras”
We compared the same data using four different network configurations. Why create four? Because each approach emphasises different aspects:
| Network Type | Simple Explanation | Key Findings |
|---|---|---|
| Co-occurrence | ‘How many companies require both A and B?’ | Driven by high-frequency technology pairs |
| Lift | ‘Which technology pairs appear together more frequently than expected?’ | Discovery of hidden strong associations (including low-frequency technologies) |
| Phi | ‘If A is present, is B also present? If A is absent, is B also absent?’ | Distinguishes positive (+)/negative (-) directions, identifies substitution relationships |
| Jaccard | ‘Of all companies requiring either A or B, what proportion requires both?’ | Removes company size bias |
It is important to note that core edges appeared consistently across all four methods. The fact that the same structure emerges regardless of the analytical method used indicates that this technology demand structure is a robust pattern that actually exists.
Association Rules: ‘Companies requiring technology A also require B’
We identified directional technology associations through Apriori association rule analysis. For example, rules such as ‘Companies that responded that they require equipment automation are more than twice as likely as expected to also respond that they require process monitoring (Lift > 2.0)’ are derived.
In association rules, the Lift value represents the ‘actual frequency of co-occurrence relative to the expected frequency’. If Lift = 1.0, the demand for the two technologies is independent (unrelated); if Lift > 2.0, it means they occur together more than twice as often as expected. This is the same principle as the famous ‘nappy-beer law’, where customers buying nappies in a supermarket are more likely than expected to buy beer as well.
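Lift is straightforward to compute from co-occurrence frequencies: the observed joint probability divided by the product of the marginal probabilities. A self-contained sketch on invented toy data (real analyses would typically use a library such as `mlxtend`’s Apriori implementation):

```python
def lift(needs: list[set[str]], a: str, b: str) -> float:
    """Observed co-occurrence of two technology needs, relative to the
    frequency expected if the needs were independent."""
    n = len(needs)
    p_a = sum(a in s for s in needs) / n
    p_b = sum(b in s for s in needs) / n
    p_ab = sum(a in s and b in s for s in needs) / n
    return p_ab / (p_a * p_b)

# Invented toy data: the stated technology needs of 10 companies.
companies = (
    [{"equipment automation", "process monitoring"}] * 4
    + [{"process monitoring"}] * 1
    + [{"quality control"}] * 5
)

# Every company needing automation also needs monitoring, so the pair
# co-occurs twice as often as independence would predict (lift = 2.0).
print(lift(companies, "equipment automation", "process monitoring"))
```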
Such association rules can be directly utilised in curriculum design. For instance, recommending the “Process Monitoring Course” to companies taking the “Plant Automation Course”. This operates on the same principle as the “Customers who bought this also bought…” feature on online shopping sites. However, there is a difference: whilst recommendations on shopping sites aim to boost sales, the recommendations here are intended to enhance the company’s actual capabilities.
Community Structure: Basis for Designing Training Tracks
When community detection is carried out within a network, groups with closely linked technical needs are identified. These groups naturally form the basis for training tracks.
For example, if “Plant Automation – Process Monitoring – Quality Control” forms a single community, grouping these three topics into a single training track aligns with the company’s actual demand structure.
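Community detection of this kind can be sketched with networkx’s greedy modularity algorithm. The toy network below is invented for illustration (it is not the report’s actual 18-node network), but it shows how tightly linked technology groups fall out of the edge structure:

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Invented toy technology-need network: two tight triangles joined by
# a single bridging edge.
G = nx.Graph([
    ("plant automation", "process monitoring"),
    ("process monitoring", "quality control"),
    ("plant automation", "quality control"),
    ("data management", "information systems"),
    ("information systems", "data analysis"),
    ("data management", "data analysis"),
    ("quality control", "data analysis"),  # bridge between the two groups
])

# Greedy modularity maximisation recovers the two tightly knit groups,
# each a candidate training track.
communities = greedy_modularity_communities(G)
for community in communities:
    print(sorted(community))
```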
Centrality Analysis: Mandatory Training vs Optional Training
In the network, high-centrality nodes are core technologies connected to many other technologies. These correspond to ‘mandatory training’ that all companies must undertake.
Conversely, nodes with low centrality are technologies relevant only to specific companies. These should be offered as ‘optional training’.
For example, if ‘Data Collection/Management’ has the highest centrality, it should become a core subject in all training tracks. Conversely, if ‘AI-based Quality Prediction’ has low centrality, it should be offered as an optional subject for P2 (high-level) companies.
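The mandatory/optional split can be read directly off a centrality ranking. A toy sketch using degree centrality (node names and edges are invented, not taken from the report’s network):

```python
import networkx as nx

# Invented toy network: one hub connected to everything, plus one
# peripheral specialist technology.
G = nx.Graph([
    ("data collection/management", "process monitoring"),
    ("data collection/management", "plant automation"),
    ("data collection/management", "quality control"),
    ("data collection/management", "AI quality prediction"),
    ("process monitoring", "plant automation"),
])

centrality = nx.degree_centrality(G)
ranked = sorted(centrality, key=centrality.get, reverse=True)

print(ranked[0])   # best-connected node: candidate for mandatory training
print(ranked[-1])  # least-connected node: candidate for optional training
```

Degree centrality is only one choice; betweenness or eigenvector centrality would weight “bridging” or “well-connected-neighbour” technologies differently.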
In this way, network analysis provides the basis for designing a curriculum portfolio based on data. Whereas curriculum design previously relied on the intuition of education experts or benchmarking, it can now be carried out based on actual corporate demand data.
Convergence of the Four Networks
The most significant finding of this analysis is that core edges appeared consistently when networks were constructed in four different ways. If strong connections observed in the co-occurrence network also appear identically in the Lift, Phi and Jaccard networks, we can be confident that these connections are not an artefact of the analytical method, but a structure that actually exists within the data.
This is akin to photographing the same scene with multiple cameras. If the same object appears in the same position in all four photographs taken with a standard camera, an infrared camera, a thermal imaging camera and an ultraviolet camera, then it undoubtedly exists.
Convergence between Network Analysis and Text Analysis
Comparing the seven topics identified in Chapter 6’s STM with the network communities in this chapter reveals significant convergence. For instance, the difficulties classified as the ‘Data’ topic in the STM and the closely connected community of ‘Data Management–Information Systems–Data Analysis’ in the network represent the same phenomenon captured by different methods.
The fact that quantitative analysis (network) and qualitative analysis (text) reveal the same structure provides strong evidence that this structure is not an artificial construct of the analytical methods, but a real entity existing in reality. This triangulation is a methodological strength of this entire study.
Key Message of This Chapter
Technology demands are not isolated but interconnected through networks. This interconnected structure can be utilised in the design of educational tracks, the distinction between compulsory and optional courses, and the recommendation of related skills. The data is revealing the framework of the educational curriculum.
Chapter 8. So, what should be done?
Over the past seven chapters, we have examined “what is effective and what are the issues”. In this chapter, we synthesise all these analytical findings to present concrete action plans for what actually needs to be done. These plans are based not on ‘it would be nice to have’, but on ‘the data tells us to do this’.
The six-step action plan to enhance the effectiveness of consulting training is as follows.
Step 1: Diagnosing Company Types
Before training, the DT readiness type of participating companies must be diagnosed. Based on the three profiles identified in Chapter 2 (P1: Low Level, P2: High Level, P3: Medium Level), companies are classified using a simple pre-training questionnaire (3 questions on DT awareness + 7 questions on smart systems).
This is akin to a hospital conducting a diagnosis before treatment. Even for the same symptom of ‘coughing’, the prescription must differ depending on whether it is a cold or pneumonia. Currently, it is as if the same prescription is being issued to all companies without a proper diagnosis.
The preliminary diagnosis need not be complex. A total of 10 questions—3 on DT awareness and 7 on smart systems—is sufficient. These can be completed online in under five minutes when applying for training, and the company type is immediately determined using an automated classification algorithm. This small investment can significantly enhance the effectiveness of the training.
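The automated classification step could be as simple as averaging the two question blocks and applying cut-offs. The sketch below is purely illustrative: the threshold values are invented, and the report’s actual assignment would derive from the LPA model in Chapter 2, not hand-set rules.

```python
def classify_profile(dt_awareness: list[float], smart_systems: list[float]) -> str:
    """Toy classifier for the 10-question pre-diagnosis (3 DT-awareness
    items + 7 smart-system items, each on a 1-5 scale). The cut-offs are
    invented for illustration; the real LPA-based assignment would differ."""
    a = sum(dt_awareness) / len(dt_awareness)
    s = sum(smart_systems) / len(smart_systems)
    if a < 2.5 and s < 2.5:
        return "P1 (low level)"
    if a >= 3.5 and s >= 3.5:
        return "P2 (high level)"
    return "P3 (medium level)"

# A company scoring low on both blocks lands in P1
print(classify_profile([1.0, 2.0, 1.5], [2.0, 1.5, 2.0, 1.0, 2.5, 2.0, 1.5]))
```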
Step 2: Differentiated Curricula by Type
| Profile | Training Objective | Training Content | Training Method |
|---|---|---|---|
| P1 (Low Level, 41.5%) | Raising DT Awareness + Building Basic Competencies | DT Concepts, Success Stories, Basic Data Utilisation | Site Visits, Mentoring, Small-scale Workshops |
| P2 (High Level, 12.3%) | Advancement + Deepening | AI quality prediction, digital twins, data analysis | Project-based, combined with consulting, networking with peers |
| P3 (Medium Level, 46.2%) | Transition from Awareness to Action | MES practicals, process data collection, practical implementation of SF | Hands-on training, phased implementation roadmap, post-implementation support |
Teaching ‘AI-based quality prediction’ to P1 companies is like asking a student who doesn’t know the alphabet to write an English essay. Explaining ‘why DT is necessary’ to P2 companies is like teaching multiplication tables to a university student. Training tailored to each level maximises effectiveness.
Step 3: Simultaneous Establishment of an Organisational Support System
Let us recall the key finding from the QCA analysis in Chapter 1: even companies with low DT readiness can achieve high training effectiveness when organisational support (a dedicated department plus training experience) is in place. This is the most actionable finding in this report. Whilst establishing DT infrastructure requires significant time and money, creating an organisational support system can begin immediately, given the will.
Therefore, a programme to build an organisational support system must be run in parallel with training:
- Recommendation to appoint a DT lead (need not be full-time) – Zero cost, only requires commitment
- Pre- and post-training briefing sessions for senior management — Management interest is key to on-the-job application
- Establishment of an alumni network among companies that have completed the training — Information sharing and motivation among peers
- Mandatory preparation of a plan for applying training content to actual work — Establishing a structure where “what is learnt must be applied”
Step 4: Expand the Proportion of Practical Training (Theory:Practice = 3:7)
As confirmed in Chapter 5, the MES practical course (+1.93) is approximately twice as effective as the DT introduction course (+1.03). We propose reviewing the current theory-to-practical ratio of the training programme and increasing the proportion of practical training to 70% or more.
Where possible, practical sessions should ideally utilise actual data or processes from participating companies. Practising with data from ‘our factory’ rather than a ‘virtual factory’ reduces the gap between training and real-world application. Participants should be able to apply what they have learnt in the classroom directly on the shop floor on Monday morning. Only then will the training amount to more than a one-off ‘experience’.
Step 5: Encouraging Repeat Participation + Engaging Passive Companies
As confirmed in Chapter 3, repeat training is effective (even when selection bias is taken into account). The problem is that only companies that are already proactive participate repeatedly.
Measures to increase participation from passive companies (Type P1):
- Automatic notification of follow-up courses within three months of first participation (before the experience fades)
- Incentives for joint participation with companies in the same region and industry
- Linking training participation to government support schemes (training participation = extra points)
- Prioritising placement in tailored introductory courses to create a successful experience upon first participation
The key is a shift in perception: passive companies fail to participate not because they lack the will, but because the barriers to entry are high. Lowering these barriers will increase participation rates. As confirmed in the STM analysis in Chapter 6, companies with low DT awareness (P1) most frequently cite the difficulty of “not knowing where to start”. What matters is making that first step easier.
Step 6: Refining the Effectiveness Measurement System
The current satisfaction-focused evaluation has low discriminatory power due to the ceiling effect (average 4.66/5.0). We propose introducing the following multi-layered effectiveness measurement system:
| Level | Measurement Content | Measurement Timing | Measurement Tool |
|---|---|---|---|
| Level 1 | Reaction (Satisfaction) | Immediately after training | Existing survey (scales need to be differentiated) |
| Level 2 | Learning (Knowledge Change) | Before/after training | Comparison before and after Q3 (maintain current approach) |
| Level 3 | Behaviour (Application in the Workplace) | 3 months after training | Follow-up survey on organisational environment/application in the workplace |
| Level 4 | Results (Business Performance) | 6–12 months post-training | Objective indicators such as productivity, defect rates, revenue, etc. |
In particular, Levels 3 and 4 measure the actual effectiveness of the training, which is currently a weak area.
The Interconnected Structure of the Six Steps
These six steps are both sequential and cyclical. We diagnose the company type (Step 1), provide a tailored curriculum (Step 2), build organisational support in parallel (Step 3), deliver practice-oriented training (Step 4), encourage repeat participation (Step 5) and measure the impact (Step 6), which is then fed back into the diagnosis at Step 1. This creates a data-driven continuous improvement cycle.
This proposal is not merely a general call to ‘do better’. It is based on specific empirical evidence identified through 13 analytical methods. A summary of the analytical results underpinning each stage is as follows:
| Stage | Analytical Basis |
|---|---|
| 1. Corporate Type Diagnosis | (LPA 3 profiles) |
| 2. Differentiated Curriculum | (Profile-specific characteristics) + (Comparison of course types) |
| 3. Organisational Support | (QCA Path A: dept + edu_exp) |
| 4. Expansion of Practical Training | (MES +1.93 vs Introduction to DT +1.03) + (IPA gap) |
| 5. Encouraging repeat participation | (Longitudinal effects) + (Awareness of selection bias) |
| 6. Refining effectiveness measurement | (Ceiling effect) + (PSM) + (Discriminatory power of organisational environment) |
Conclusion: Hope revealed by the data
219 companies, 282 responses, 166 variables, 13 analytical methods. Behind all these figures lies the reality of SMEs struggling against the massive wave of digital transformation.
The findings of this report can be summarised in a single sentence:
Digital transformation training for SMEs is effective. However, it does not work in the same way for every company.
Contrary to common belief, companies with low digital transformation readiness are not necessarily unaffected by training. With organisational support, they can achieve significant results. Repeated training is also effective, though the impact is likely smaller than simple figures suggest when selection bias is taken into account. Practical training is nearly twice as effective as theoretical training, and continuous level measurement is far more accurate than binary (‘did/did not’) measurement.
And most importantly, with the right combination of support, practical training and repeated participation, even companies with low DT readiness can transform. Data from 219 companies confirms this.
Let’s return to the story of Company A. How did that mould manufacturer, with a DT awareness score of just 1.4, manage to rank in the top 30% for training effectiveness? The CEO participated in the training personally, appointed a DT lead, and what was learnt was applied on the shop floor the very next day. It was not technology, but the organisation’s commitment that made the difference. This is a vivid example of how QCA Path A works in a real-world company.
What about Company B, on the other hand? Although its DT readiness was high, the training content was at a level they were already familiar with, and there was nothing new to apply after the training. What this company needed was not basic training but an advanced course. Company B clearly illustrates why the differentiation of training by type, as discussed in Chapter 2, is so important.
The dilemma surrounding digital transformation training for SMEs still persists. However, the data also provides clues to resolving that dilemma.
Finally, we must clearly acknowledge the limitations of this analysis. The 282 responses from 219 companies represent only a tiny fraction of South Korea’s entire SME manufacturing sector. As these were companies that voluntarily participated in the consultancy, the sample may be biased towards “companies interested in training”. Furthermore, as this was a self-reported survey, there is a lack of correlation with objective performance metrics (such as turnover and productivity). Despite these limitations, it can be said that this study, utilising four years of data and 13 analytical methods, provides the best empirical evidence currently available.
We hope this report will assist in finding that solution. We look forward to this data, which reflects the voices of 219 companies, leading to better education policies for the digital transformation of SMEs.
Finally, we would like to add one point. What has been repeatedly emphasised in this report is “do not look at things simplistically”. The simplistic assumption that “training is effective only when digital transformation readiness is high” has been refuted by the data. The simplistic conclusion that “repeated training is effective” has been qualified by selection bias. The simplistic expectation that “if SF has been introduced, it will be effective” has been rendered meaningless by the limitations of dichotomous measurement.
Reality is not simple. Yet, even within this complex reality, patterns exist. This is precisely why we employed 13 different analytical methods. Patterns that remain invisible through a single lens become clearly visible when multiple lenses are superimposed. And these patterns lead to actionable recommendations.
Appendix: Summary of Analytical Methods
This section briefly summarises the 13 analytical methods used in this report. The focus is on explaining which questions each method is designed to answer. Statistical details have been deliberately omitted; interested readers may refer to the detailed analysis reports for each track.
These 13 methods fall broadly into four categories:
- Exploratory Analysis: Methods for examining the overall structure and relationships within the data
- Typology/Path Analysis: Methods for classifying companies and identifying conditions for success
- Longitudinal/Causal Analysis: Methods for estimating changes over time and causal relationships
- Text/Network Analysis: Methods for analysing qualitative data and relational structures
| Track | Analytical Method | Simple Explanation | Question to be Answered |
|---|---|---|---|
| T0 | Descriptive Statistics, Correlation, Group Comparison | Examining the general characteristics of the data and relationships between variables | ‘Is there a relationship between DT readiness and the effectiveness of training?’ |
| T1 | Regression analysis, factor analysis | Relationships between causes and effects, validation of measurement tools | ‘Does DT awareness predict training effectiveness?’ |
| T2 | Latent Profile Analysis (LPA) | Automatic classification of similar companies | ‘How many types of companies are there?’ |
| T3 | Qualitative Comparative Analysis (QCA) | Identifying the ‘recipe’ for success | ‘What combination of conditions leads to high training effectiveness?’ |
| T4 | Longitudinal analysis | Tracking changes over time | ‘Does repeated training actually bring about change?’ |
| T5 | Structural Topic Modelling (STM) | Extracting common themes from text | ‘What is the structure of the difficulties companies report?’ |
| T6 | Network Analysis + Association Rules | Identifying the structural links between technology demands | ‘Which technology demands occur together?’ |
| T7 | IPA Gap Analysis | Measuring the difference between needs and reality | ‘In which areas is training most urgently needed?’ |
| T8-1 | Panel Regression | Estimating causal relationships over time | ‘Which factors predict the effectiveness of training?’ |
| T8-2-1 | PSM (DT Training Experience) | Re-measuring effects after controlling for selection bias | ‘Is the effect of training experience genuine?’ |
| T8-2-2 | PSM (SF Introduction) | Re-measuring effects after controlling for selection bias | ‘Does the introduction of SF enhance educational outcomes?’ |
| T8-2-3 | PSM (Multiple Participation) | Re-measuring effects after controlling for selection bias | ‘Do companies that participate repeatedly really improve?’ |
| Triangulation | Cross-checking results of multiple analyses | Examining the same phenomenon through multiple lenses | ‘Do multiple analyses point to the same conclusion?’ |
Key Figures at a Glance
We have compiled the key figures that appear repeatedly throughout this report in one place. Referring to the chapter and context in which each figure appears will be helpful when re-reading the report.
| Metric | Meaning | Chapter |
|---|---|---|
| r = 0.06–0.18 | Correlation between DT readiness and training effectiveness (very weak) | Chapter 1 |
| 8 pathways | Number of combinations constituting a sufficient condition for high training effectiveness | Chapter 1 |
| Coverage 0.431 / 0.716 | Explanatory scope of QCA solutions (Q3 difference / organisational environment) | Chapter 1 |
| 88% | Proportion of firms with DT awareness of 2.2 points or lower | Chapter 2 |
| Entropy 0.846 | LPA classification accuracy | Chapter 2 |
| +0.77 (d=0.60) | Improvement in training level following repeated training | Chapter 3 |
| SMD 0.738 | Magnitude of selection bias among firms participating multiple times | Chapter 3 |
| R² = .275, β = -0.82 | Magnitude of the mean-reversion effect | Chapter 3 |
| p = .007 → .083 | Effect of DT training experience (before/after PSM) | Chapter 4 |
| p = .002 | Smart system level → training effect (panel regression) | Chapter 4 |
| Gap 1.84 points | Difference between Training Need and Training Level | Chapter 5 |
| +1.93 vs +1.03 | Effect comparison: MES practical training vs DT introduction | Chapter 5 |
| 7 Topics | Number of issues derived from STM | Chapter 6 |
| Density 0.719 | Density of connections in the technology demand network | Chapter 7 |
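The mean-reversion row (β = -0.82) deserves a caution that is easy to demonstrate: when scores contain measurement noise, regressing the change score on the baseline produces a strongly negative slope even when nothing truly changes. A minimal simulation (entirely synthetic data, not the report's; the group sizes and noise levels are arbitrary):

```python
import random

random.seed(0)

# Simulate firms whose true training level does not change at all:
# each wave's observed score is the true score plus independent noise.
true = [random.gauss(3.0, 0.5) for _ in range(200)]
wave1 = [t + random.gauss(0, 0.7) for t in true]
wave2 = [t + random.gauss(0, 0.7) for t in true]
change = [b - a for a, b in zip(wave1, wave2)]

# OLS slope of change on baseline (wave1).
mx = sum(wave1) / len(wave1)
my = sum(change) / len(change)
cov = sum((x - mx) * (y - my) for x, y in zip(wave1, change))
var = sum((x - mx) ** 2 for x in wave1)
slope = cov / var
print(round(slope, 2))  # markedly negative despite zero true change
```

This is why the report treats the raw +0.77 improvement from repeated training as partly an artefact, and cross-checks it against PSM (T8-2-3).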
Triangulation: Validation Relationships Among the Analyses
These 13 analyses are not independent; they interlock in ways that allow each to validate the others. The key triangulation results are summarised as follows:
| Finding | Supporting Analyses | Convergence Strength |
|---|---|---|
| The relationship between DT readiness and training effectiveness is non-linear | T0 (weak correlation) + T3 (8 pathways) + T2 (differences by profile) | Strong |
| Organisational support enhances training effectiveness | T3 (Pathway A) + T5 (training topics) + T8-1 (panel regression) | Moderate |
| Practical training is more effective | T0 (comparison of course types) + T7 (IPA gap) + T5 (training topics) | Strong |
| The effect of repeated participation exists but is overestimated | T4 (+0.77) + T8-2-3 (SMD=0.738) + baseline regression (R²=.275) | Strong |
| Structured patterns exist in skills demand | T5 (7 topics) + T6 (convergence of 4 types of networks) + T7 (IPA gaps by domain) | Strong |
| Level of practice is more important than perception | T8-1 (p=.002) + T8-2-2 (SF dichotomous analysis) + T3 (Path B) | Strong |
There is also divergent evidence (non-convergent results), which points to areas requiring further research:
- Whether SF is adopted (dichotomous) is non-significant, but the level of smart systems (continuous) is strongly significant – the choice of measurement method shapes the result
- The effect of DT training experience differs before (p=.007) and after (p=.083) matching – Further research is needed on the magnitude of selection bias
- Satisfaction lacks discriminatory power due to a ceiling effect (M=4.66) – Improvements to the measurement tool are required
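The selection-bias point (SMD = 0.738 before matching) can be illustrated with a minimal balance check. The sketch below uses synthetic data and greedy 1:1 nearest-neighbour matching on a single hypothetical covariate standing in for the propensity score; actual PSM estimates that score with a logistic model over several covariates and then checks that post-matching SMDs fall below a threshold such as 0.1.

```python
import math
import random

random.seed(1)

def smd(treated, control):
    """Standardised mean difference: the balance diagnostic used in PSM."""
    mt = sum(treated) / len(treated)
    mc = sum(control) / len(control)
    vt = sum((x - mt) ** 2 for x in treated) / (len(treated) - 1)
    vc = sum((x - mc) ** 2 for x in control) / (len(control) - 1)
    return (mt - mc) / math.sqrt((vt + vc) / 2)

# Hypothetical covariate (e.g. firm size) with strong self-selection:
# firms that participate repeatedly tend to score higher on it.
treated = [random.gauss(4.0, 1.0) for _ in range(60)]
control = [random.gauss(3.2, 1.0) for _ in range(150)]
before = smd(treated, control)   # substantial imbalance before matching

# Greedy 1:1 nearest-neighbour matching on the covariate
# (a stand-in for matching on an estimated propensity score).
pool = sorted(control)
matched = []
for t in treated:
    j = min(range(len(pool)), key=lambda i: abs(pool[i] - t))
    matched.append(pool.pop(j))
after = smd(treated, matched)    # imbalance shrinks after matching

print(round(before, 2), round(after, 2))
```

The same before/after comparison is what turns the naive p = .007 effect of DT training experience into the more cautious p = .083 reported in Chapter 4.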