Software composition analysis: Difference between revisions

* The engine identifies the OSS components and their versions, usually storing this information in a database to create a catalog of the OSS in use in the scanned application.
* This catalog is then compared against databases that track known security vulnerabilities, licensing requirements, and historical versions for each component.<ref>
{{Cite conference
|last1=Chen|first1=Yang
|last2=Santosa|first2=Andrew E
|date=2020
|title=A Machine Learning Approach for Vulnerability Curation
|book-title=Proceedings of the 17th International Conference on Mining Software Repositories
|pages=32–42
|doi=10.1145/3379597.3387461
}}</ref> Versions of components are extracted from popular open source repositories such as [[GitHub]], [[Apache Maven|Maven]], [[Python Package Index|PyPI]], [[NuGet]], and many others.
* Modern SCA systems have incorporated advanced analysis techniques to improve accuracy and reduce false positives. Notable contributions include '''vulnerable method analysis''', which determines whether vulnerable methods identified in dependencies are actually reachable from the application code. This approach, pioneered by [[Asankhaya Sharma]] and colleagues, uses call graph analysis to trace execution paths from application entry points to vulnerability-specific sinks in third-party libraries.<ref>
{{Cite arXiv
|last1=Foo|first1=Darius
|last2=Yeo|first2=Jason
|title=The Dynamics of Software Composition Analysis
|date=2019
|eprint=1909.00973
|class=cs.SE
}}</ref>
* '''Hybrid static-dynamic analysis''' techniques combine statically-constructed call graphs with dynamic instrumentation to improve the performance of false positive elimination. This modular approach addresses limitations of purely static analysis, which can introduce both false positives and false negatives on real-world projects.<ref>
{{Cite arXiv
|last1=Foo|first1=Darius
|last2=Yeo|first2=Jason
|title=The Dynamics of Software Composition Analysis
|date=2019
|eprint=1909.00973
|class=cs.SE
}}</ref>
* '''Machine learning-based vulnerability curation''' automates the process of building and maintaining vulnerability databases by predicting the vulnerability-relatedness of data items from various sources such as bug tracking systems, commits, and mailing lists. These systems use self-training techniques to iteratively improve model quality and include deployment stability metrics to evaluate new models before production deployment.<ref>
{{Cite conference
|last1=Chen|first1=Yang
|last2=Santosa|first2=Andrew E
|date=2020
|title=A Machine Learning Approach for Vulnerability Curation
|book-title=Proceedings of the 17th International Conference on Mining Software Repositories
|pages=32–42
|doi=10.1145/3379597.3387461
}}</ref>
* '''Natural language processing techniques''' for automated vulnerability identification analyze commit messages and bug reports to identify security-related issues that may not have been publicly disclosed. This approach uses machine learning classifiers trained on textual features extracted from development artifacts to discover previously unknown vulnerabilities in open-source libraries.<ref>
{{Cite conference
|last1=Zhou|first1=Yaqin
|last2=Sharma|first2=Asankhaya
|date=2017
|title=Automated identification of security issues from commit messages and bug reports
|book-title=Proceedings of the 2017 11th Joint Meeting on Foundations of Software Engineering
|pages=914–919
|doi=10.1145/3106237.3106293
}}</ref>
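The core detection step described above — building a catalog of components and comparing it against a vulnerability database — can be illustrated with a minimal sketch. All component names, versions, and advisory identifiers below are hypothetical; real tools additionally match version ranges and distinguish package ecosystems (Maven, PyPI, NuGet, and so on).

```python
# Catalog built by scanning the application's dependencies
# (hypothetical component names and versions).
catalog = {
    ("example-json-lib", "2.9.0"),
    ("example-http-client", "4.5.13"),
}

# Vulnerability database: (component, affected version) -> advisory IDs
# (hypothetical advisory identifiers).
vuln_db = {
    ("example-json-lib", "2.9.0"): ["EXAMPLE-CVE-0001"],
    ("example-xml-parser", "1.4.1"): ["EXAMPLE-CVE-0002"],
}

def match_vulnerabilities(catalog, vuln_db):
    """Report advisories affecting components found in the scanned app."""
    findings = []
    for component, version in sorted(catalog):
        for advisory in vuln_db.get((component, version), []):
            findings.append((component, version, advisory))
    return findings

findings = match_vulnerabilities(catalog, vuln_db)
```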
===Vulnerable method analysis===
Vulnerable method analysis addresses the problem of determining whether a vulnerability in a third-party library poses an actual risk to an application. Rather than simply detecting the presence of vulnerable libraries, this technique analyzes whether the specific vulnerable methods within those libraries are reachable from the application's execution paths. The approach involves constructing call graphs that map the relationships between application code and library methods, then determining if there exists a path from application entry points to vulnerability-specific sinks in the libraries.<ref>
{{Cite arXiv
|last1=Foo|first1=Darius
|last2=Yeo|first2=Jason
|title=The Dynamics of Software Composition Analysis
|date=2019
|eprint=1909.00973
|class=cs.SE
}}</ref>
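The reachability check described above can be sketched with a small example: the call graph is represented as an adjacency map, and a breadth-first search looks for a path from an application entry point to a known vulnerable method. All method names here are hypothetical; real tools construct the graph by static analysis of the application and its libraries.

```python
from collections import deque

# Hypothetical call graph: caller -> callees.
call_graph = {
    "app.main": ["app.parse_input", "lib.logging.info"],
    "app.parse_input": ["lib.json.decode"],
    "lib.json.decode": ["lib.json.decode_unsafe"],  # vulnerable sink
    "lib.logging.info": [],
}

def is_reachable(graph, entry_point, vulnerable_method):
    """Breadth-first search from the entry point to the vulnerable sink."""
    seen = {entry_point}
    queue = deque([entry_point])
    while queue:
        method = queue.popleft()
        if method == vulnerable_method:
            return True
        for callee in graph.get(method, []):
            if callee not in seen:
                seen.add(callee)
                queue.append(callee)
    return False
```

If no path exists, the vulnerable library can be reported at lower priority, which is how this technique reduces false positives.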
 
===Machine learning for vulnerability databases===
Traditional vulnerability databases rely on manual curation by security researchers, which can be time-intensive and may miss relevant vulnerabilities. Machine learning approaches automate this process by training models to predict whether data items from various sources (such as bug reports, commits, and mailing lists) are vulnerability-related. These systems implement complete pipelines from data collection through model training and prediction, with iterative improvement mechanisms that generate better models as new data becomes available.<ref>
{{Cite conference
|last1=Chen|first1=Yang
|last2=Santosa|first2=Andrew E
|date=2020
|title=A Machine Learning Approach for Vulnerability Curation
|book-title=Proceedings of the 17th International Conference on Mining Software Repositories
|pages=32–42
|doi=10.1145/3379597.3387461
}}</ref>
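The classification step at the heart of this pipeline can be sketched with a toy naive Bayes model over commit messages. The training examples below are invented for illustration; production systems train on large labeled corpora of commits, bug reports, and mailing-list posts and retrain iteratively as new data arrives.

```python
import math
from collections import Counter

# Toy training data: (commit message, is_vulnerability_related).
training = [
    ("fix buffer overflow in parser", True),
    ("patch xss vulnerability in template engine", True),
    ("sanitize user input to prevent injection", True),
    ("update readme with build instructions", False),
    ("refactor logging module for clarity", False),
    ("bump version number for release", False),
]

def train(data):
    """Count word and class frequencies per label."""
    word_counts = {True: Counter(), False: Counter()}
    class_counts = Counter()
    for text, label in data:
        class_counts[label] += 1
        word_counts[label].update(text.lower().split())
    return word_counts, class_counts

def predict(model, text):
    """Naive Bayes with add-one smoothing over the two classes."""
    word_counts, class_counts = model
    vocab = set(word_counts[True]) | set(word_counts[False])
    scores = {}
    for label in (True, False):
        total = sum(word_counts[label].values())
        score = math.log(class_counts[label] / sum(class_counts.values()))
        for word in text.lower().split():
            score += math.log((word_counts[label][word] + 1) / (total + len(vocab)))
        scores[label] = score
    return scores[True] > scores[False]

model = train(training)
```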
===Static analysis for library compatibility===
As SCA tools increasingly recommend library updates to address vulnerabilities, ensuring compatibility becomes critical. Advanced static analysis techniques can automatically detect [[API]] incompatibilities that would be introduced by library upgrades, enabling automated vulnerability remediation without breaking existing functionality. These lightweight analyses are designed to integrate into [[continuous integration]] and [[continuous delivery]] pipelines.<ref>
{{Cite conference
|last1=Foo|first1=Darius
|last2=Chua|first2=Hendy
|date=2018
|title=Efficient static checking of library updates
|book-title=Proceedings of the 2018 26th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering
|pages=791–796
|doi=10.1145/3236024.3275535
}}</ref>
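The compatibility check described above reduces, at its simplest, to comparing the exported API surface of the old and new library versions and flagging removals or signature changes that affect methods the application actually calls. The API maps below are hand-written for illustration; real tools extract them by analyzing the library's code.

```python
# Hypothetical API surfaces: method name -> parameter list.
old_api = {
    "Client.connect": "(host, port)",
    "Client.send": "(data)",
    "Client.close": "()",
}
new_api = {
    "Client.connect": "(host, port, timeout)",  # signature changed
    "Client.send": "(data)",
    # "Client.close" removed in the new version
}

# Methods the application is known to call.
app_calls = {"Client.connect", "Client.send", "Client.close"}

def check_update(old_api, new_api, app_calls):
    """Return API breaks that affect methods the application uses."""
    breaks = []
    for name in sorted(app_calls):
        if name not in new_api:
            breaks.append((name, "removed"))
        elif old_api.get(name) != new_api[name]:
            breaks.append((name, "signature changed"))
    return breaks

breaks = check_update(old_api, new_api, app_calls)
```

An empty result means the upgrade can be recommended automatically; any reported break requires developer attention before the vulnerable version is replaced.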
 
Modern SCA implementations have significantly improved accuracy through advanced analysis techniques. Vulnerable method analysis reduces false positives by determining actual reachability of vulnerable code paths, while machine learning approaches for vulnerability curation help maintain more comprehensive and up-to-date vulnerability databases. These advances address many traditional limitations of metadata-only approaches.<ref>
{{Cite arXiv
|last1=Foo|first1=Darius
|last2=Yeo|first2=Jason
|title=The Dynamics of Software Composition Analysis
|date=2019
|eprint=1909.00973
|class=cs.SE
}}</ref>