* An engine scans the software source code and the associated artifacts used to compile a software application.
* The engine identifies the OSS components and their versions, and usually stores this information in a database, creating a catalog of the OSS in use in the scanned application.
* This catalog is then compared to databases referencing known security vulnerabilities for each component, the licensing requirements for using the component, and the historical versions of the component.<ref name="chen2020">
{{Cite conference
|last1=Chen|first1=Yang
|last2=Santosa|first2=Andrew E
|last3=Yi|first3=Ang Ming
|last4=Sharma|first4=Abhishek
|last5=Sharma|first5=Asankhaya
|last6=Lo|first6=David
|date=2020
|title=A Machine Learning Approach for Vulnerability Curation
|conference=Proceedings of the 17th International Conference on Mining Software Repositories
|pages=32–42
|doi=10.1145/3379597.3387461
}}</ref> For security vulnerability detection, this comparison is typically made against known security vulnerabilities (CVEs) tracked in the [[National Vulnerability Database]] (NVD); some products use additional proprietary vulnerability databases. A minimal sketch of this matching step appears after this list. For [[Legal_governance,_risk_management,_and_compliance#Legal_compliance|IP / Legal Compliance]], SCA products extract and evaluate the type of licensing used for the OSS component.<ref>
{{Cite book
|last1=Duan|first1=Ruian
|chapter-url=https://dl.acm.org/doi/pdf/10.1145/3133956.3134048
}}</ref> Versions of components are extracted from popular open source repositories such as [[GitHub]], [[Apache Maven|Maven]], [[Python Package Index|PyPi]], [[NuGet]], and many others.
* Modern SCA systems have incorporated advanced analysis techniques to improve accuracy and reduce false positives. Notable contributions include '''vulnerable method analysis''', which determines whether vulnerable methods identified in dependencies are actually reachable from the application code. This approach, described by [[Asankhaya Sharma]] and colleagues, uses call graph analysis to trace execution paths from application entry points to vulnerability-specific sinks in third-party libraries.<ref name="foo2019">
{{Cite arxiv
|last1=Foo|first1=Darius
|last2=Yeo|first2=Jason
|last3=Xiao|first3=Hao
|last4=Sharma|first4=Asankhaya
|title=The Dynamics of Software Composition Analysis
|date=2019
|eprint=1909.00973
|class=cs.SE
}}</ref>
* '''Hybrid static-dynamic analysis''' techniques combine statically constructed call graphs with dynamic instrumentation to improve the performance of false-positive elimination. This modular approach addresses limitations of purely static analysis, which can introduce both false positives and false negatives on real-world projects.<ref name="foo2019" />
* '''Machine learning-based vulnerability curation''' automates the process of building and maintaining vulnerability databases by predicting the vulnerability-relatedness of data items from various sources such as bug tracking systems, commits, and mailing lists. These systems use self-training techniques to iteratively improve model quality and include deployment stability metrics to evaluate new models before production deployment.<ref name="chen2020" />
* '''Natural language processing techniques''' for automated vulnerability identification analyze commit messages and bug reports to identify security-related issues that may not have been publicly disclosed. This approach uses machine learning classifiers trained on textual features extracted from development artifacts to discover previously unknown vulnerabilities in open-source libraries.<ref>
{{Cite conference
|last1=Zhou|first1=Yaqin
|last2=Sharma|first2=Asankhaya
|date=2017
|title=Automated identification of security issues from commit messages and bug reports
|conference=Proceedings of the 2017 11th Joint Meeting on Foundations of Software Engineering
|pages=914–919
|doi=10.1145/3106237.3106293
}}</ref>
* The results are then made available to end users in various digital formats. The content and format depend on the SCA product and may include guidance for evaluating and interpreting the risk, as well as recommendations, particularly regarding the legal requirements of open source components such as [[Copyleft#Strong_and_weak_copyleft|strong or weak copyleft]] licensing. The output may also contain a [[Software supply chain|Software Bill of Materials]] (SBOM) detailing all the open source components and associated attributes used in a software application.<ref>
{{Cite journal
|doi=10.18278/jcip.3.1.8
|url=https://www.jcip1.org/uploads/1/3/6/5/136597491/jcip_3.1_online.pdf#page=117
}}</ref>
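The following is a minimal, illustrative sketch of the identification and matching steps above, assuming a requirements.txt-style manifest and a small in-memory advisory list; the package names, versions, and CVE identifier are hypothetical, and real SCA products resolve transitive dependencies and query feeds such as the NVD.
<syntaxhighlight lang="python">
# Minimal sketch of the SCA matching step: build a catalog of declared
# components and compare it against a (hypothetical) vulnerability database.
from dataclasses import dataclass

@dataclass
class Advisory:
    package: str
    fixed_in: tuple   # first version that is no longer affected
    cve_id: str

# Hypothetical advisory data; a real product would sync this from the NVD
# and any proprietary feeds.
ADVISORIES = [
    Advisory(package="examplelib", fixed_in=(2, 4, 1), cve_id="CVE-0000-0001"),
]

def parse_version(text: str) -> tuple:
    """Turn '2.3.0' into (2, 3, 0) so versions compare numerically."""
    return tuple(int(part) for part in text.split("."))

def parse_manifest(lines):
    """Read 'name==version' pins from a requirements.txt-style manifest."""
    catalog = {}
    for line in lines:
        line = line.strip()
        if line and not line.startswith("#") and "==" in line:
            name, _, version = line.partition("==")
            catalog[name.strip()] = parse_version(version.strip())
    return catalog

def scan(manifest_lines):
    """Return (package, cve_id) pairs for components below the fixed version."""
    catalog = parse_manifest(manifest_lines)
    findings = []
    for advisory in ADVISORIES:
        version = catalog.get(advisory.package)
        if version is not None and version < advisory.fixed_in:
            findings.append((advisory.package, advisory.cve_id))
    return findings

print(scan(["examplelib==2.3.0", "otherlib==1.0.0"]))
# -> [('examplelib', 'CVE-0000-0001')]
</syntaxhighlight>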
==Advanced techniques==
Since the early 2010s, researchers have developed several advanced techniques to improve the accuracy and efficiency of SCA tools:
===Vulnerable method analysis===
Vulnerable method analysis addresses the problem of determining whether a vulnerability in a third-party library poses an actual risk to an application. Rather than simply detecting the presence of vulnerable libraries, this technique analyzes whether the specific vulnerable methods within those libraries are reachable from the application's execution paths. The approach involves constructing call graphs that map the relationships between application code and library methods, then determining if there exists a path from application entry points to vulnerability-specific sinks in the libraries.<ref name="foo2019" />
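A minimal sketch of the reachability check follows, assuming the call graph has already been extracted; the method names, entry points, and vulnerable sink below are hypothetical placeholders, and real tools derive the graph from source or bytecode with a static analysis framework.
<syntaxhighlight lang="python">
from collections import deque

# Hypothetical call graph: caller -> callees.
CALL_GRAPH = {
    "app.main": ["app.load_config", "lib.parser.parse"],
    "app.load_config": ["lib.util.read_file"],
    "lib.parser.parse": ["lib.parser.parse_entity"],
    "lib.parser.parse_entity": [],          # CVE-affected method (sink)
    "lib.util.read_file": [],
}

ENTRY_POINTS = ["app.main"]                     # application entry points
VULNERABLE_SINKS = {"lib.parser.parse_entity"}  # methods named in the advisory

def reachable_sinks(graph, entries, sinks):
    """Breadth-first search from the entry points; report any vulnerable
    method that lies on some path, i.e. is actually exercisable."""
    seen, queue, hits = set(entries), deque(entries), set()
    while queue:
        method = queue.popleft()
        if method in sinks:
            hits.add(method)
        for callee in graph.get(method, []):
            if callee not in seen:
                seen.add(callee)
                queue.append(callee)
    return hits

print(reachable_sinks(CALL_GRAPH, ENTRY_POINTS, VULNERABLE_SINKS))
# -> {'lib.parser.parse_entity'}: the vulnerable method is reachable, so the
# finding is kept; an unreachable sink would be reported with lower priority.
</syntaxhighlight>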
===Machine learning for vulnerability databases===
Traditional vulnerability databases rely on manual curation by security researchers, which can be time-intensive and may miss relevant vulnerabilities. Machine learning approaches automate this process by training models to predict whether data items from various sources (such as bug reports, commits, and mailing lists) are vulnerability-related. These systems implement complete pipelines from data collection through model training and prediction, with iterative improvement mechanisms that generate better models as new data becomes available.<ref name="chen2020" />
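A minimal sketch of such a classifier follows, using [[scikit-learn]] as an illustrative library choice (an assumption, not the tooling of the cited work) and a handful of hypothetical labeled commit messages; the published pipelines additionally apply self-training and model-stability checks before deployment.
<syntaxhighlight lang="python">
# Sketch of a vulnerability-relatedness classifier over commit messages.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled data: 1 = security-related, 0 = not.
messages = [
    "fix buffer overflow in parser",
    "sanitize user input to prevent XSS",
    "update changelog for 2.4 release",
    "refactor build scripts",
    "patch SQL injection in login handler",
    "bump copyright year",
]
labels = [1, 1, 0, 0, 1, 0]

# Text features (word and bigram TF-IDF) feeding a linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(messages, labels)

# Score unlabeled commits; high-probability items go to a human curator, and
# confirmed items are fed back to retrain the model (self-training).
candidates = ["fix heap overflow when decoding frames", "improve README wording"]
for text, prob in zip(candidates, model.predict_proba(candidates)[:, 1]):
    print(f"{prob:.2f}  {text}")
</syntaxhighlight>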
===Static analysis for library compatibility===
As SCA tools increasingly recommend library updates to address vulnerabilities, ensuring compatibility becomes critical. Advanced static analysis techniques can automatically detect [[API]] incompatibilities that would be introduced by library upgrades, enabling automated vulnerability remediation without breaking existing functionality. These lightweight analyses are designed to integrate into [[continuous integration]] and [[continuous delivery]] pipelines.<ref>
{{Cite conference
|last1=Foo|first1=Darius
|last2=Chua|first2=Hendy
|last3=Yeo|first3=Jason
|last4=Ang|first4=Ming Yi
|last5=Sharma|first5=Asankhaya
|date=2018
|title=Efficient static checking of library updates
|conference=Proceedings of the 2018 26th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering
|pages=791–796
|doi=10.1145/3236024.3275535
}}</ref>
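A minimal sketch of such a compatibility check follows; the API surfaces and client call sites below are hypothetical placeholders, whereas the cited approach extracts them statically from the client's call sites and the two library versions.
<syntaxhighlight lang="python">
# Lightweight library-update compatibility check: compare the API surface the
# client actually uses against the surface of the proposed new version.

# Public API of the currently used version: function name -> parameter names.
OLD_API = {
    "connect": ["host", "port", "timeout"],
    "send":    ["data"],
    "close":   [],
}

# Public API of the candidate upgrade ('timeout' was removed, 'send' renamed).
NEW_API = {
    "connect":  ["host", "port"],
    "send_all": ["data"],
    "close":    [],
}

# Calls the application actually makes: (function, parameters used).
CLIENT_CALLS = [
    ("connect", ["host", "port", "timeout"]),
    ("send", ["data"]),
]

def incompatibilities(new_api, client_calls):
    """Report used functions that disappeared or lost a used parameter."""
    problems = []
    for func, used_params in client_calls:
        if func not in new_api:
            problems.append(f"{func}: removed in the new version")
            continue
        missing = [p for p in used_params if p not in new_api[func]]
        if missing:
            problems.append(f"{func}: parameter(s) {missing} no longer accepted")
    return problems

for issue in incompatibilities(NEW_API, CLIENT_CALLS):
    print(issue)
# -> connect: parameter(s) ['timeout'] no longer accepted
#    send: removed in the new version
</syntaxhighlight>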
Modern SCA implementations have significantly improved accuracy through advanced analysis techniques. Vulnerable method analysis reduces false positives by determining actual reachability of vulnerable code paths, while machine learning approaches for vulnerability curation help maintain more comprehensive and up-to-date vulnerability databases. These advances address many traditional limitations of metadata-only approaches.<ref name="foo2019" />
==See also==
* [[Open-source license]]
* [[Software intelligence]]
* [[Asankhaya Sharma]]
* [[Static program analysis]]
* [[Call graph]]
==References==
{{reflist}}
[[Category:Information technology governance]]
[[Category:Software]]