SaltMiner System Requirements


Each component of the system provides specific functionality; together, these components make up the SaltMiner system.




Component Descriptions

Synchronization applications (Sync)

The Sync applications are a set of programs that synchronize data between the various testing solutions and the internal Elastic indices used by the ETL applications. These applications are specific to each testing solution, as all solutions provide data in different formats and attach different meanings to their data. The purpose of the Sync code is simply to get the data into the SaltMiner indices so that the ETL applications can transform it into the standard SaltMiner format.

ETL applications

A note on the Sync and ETL applications

These applications are written in a combination of Python and .NET Core and are run as scheduled jobs to keep data in sync. For security reviews the following may be worth noting:
• All Sync applications run as scheduled jobs and do not run as services.
• As they are not services, they make only outbound HTTP(S) calls to the various testing solutions and to Elastic.
• All credentials are stored in a settings file and are encrypted the first time the application is run.

Elasticsearch

Kibana

Kibana is used as the standard reporting system for SaltMiner. While other reporting interfaces can be used, the default reports and user security work best with Kibana. Kibana architecture and security are fully documented at []

Data Structure, Sharding and Disaster Recovery

Elastic Indices

SaltMiner has two core sets of indices: those used for Sync operations with the external scanning solutions (e.g. Fortify SSC, WhiteSource), and those created and populated by the ETL applications.

Sync indices: All indices used for sync operations start with the product as a prefix. For example, the Sync indices used with Fortify SSC information start with “ssc”, and the WhiteSource indices start with “ws”.

SaltMiner reporting indices

The indices that are created and updated during the ETL process begin with app_.
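As a hedged sketch of this naming convention, a Sync index and a reporting index might be created in the Kibana Dev Tools console as follows. Only the “ssc” prefix and the app_vuls_ssc name come from this document; the ssc_issues suffix and the mapping fields are illustrative placeholders, not SaltMiner's actual schema:

```
# Sync index for raw Fortify SSC data (the "ssc" prefix is documented;
# the "_issues" suffix and mapping are illustrative)
PUT ssc_issues
{
  "mappings": {
    "properties": {
      "issueId": { "type": "keyword" }
    }
  }
}

# Reporting index populated by the ETL applications (name is documented;
# mapping is illustrative)
PUT app_vuls_ssc
{
  "mappings": {
    "properties": {
      "severity": { "type": "keyword" }
    }
  }
}
```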
This pattern is followed so that any individual index can be deleted, recreated, and repopulated without affecting the rest of the system. Index structures need to change from time to time as the products they derive their data from change, and this structure allows for minimal impact when these changes occur. The following is a list of the current indices used for this purpose:
• app_vuls_ssc: The issues generated by the ETL applications related to Fortify SSC issues.
• app_vuls_ws: The issues related to WhiteSource issues.
• app_vuls_: As new products are added, additional indices will be created to store their issues.

SaltMiner reporting aliases

In addition to the base indices, SaltMiner also uses aliases to make querying of key data easier and less error prone. For example, the following aliases are created by default:
• app_vuls_active_ssc: Alias that only shows issues that are currently active, i.e. not removed, filtered, or suppressed.
• app_vuls_active_ws: Alias that shows active WhiteSource issues.

SaltMiner Index Patterns

The core data source that visualizations use to show data in Kibana is the index pattern. Index patterns provide the ability to combine multiple indices and aliases into one “virtual” view of the data. SaltMiner uses several index patterns:
• app_vuls_active*: Includes issues from all aliases that follow the app_vuls_active_ format.

Sharding

Elasticsearch is built to scale to support extremely large data sets and many users. It is also built to be highly reliable and disaster resistant. A key part of realizing this is proper use of sharding. For all critical indices where scalability and resilience are important, SaltMiner relies on very intentional sharding. The following example shows how SaltMiner utilizes sharding in a four (4) node Elastic cluster. As more nodes are added, the model can be expanded to maximize performance and reliability.
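A four-primary, three-replica layout for a four-node cluster can be expressed as index settings at creation time. A minimal sketch, assuming the app_vuls_ssc index (the settings values reflect the four-node model; any other index would be configured the same way):

```
PUT app_vuls_ssc
{
  "settings": {
    "number_of_shards": 4,
    "number_of_replicas": 3
  }
}
```

With four primaries and three replicas per primary, every node in a four-node cluster holds a copy of each shard, which is what allows any single node to service a request on its own.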
In the following model, an index is split into four primary shards, with each primary shard (P1 through P4) placed on a separate node (machine). For each primary shard, three (3) replicas are created and placed on the three machines where the primary shard does not reside. By following this model, each node can fully service any request, or requests can be spread among the cluster for optimal performance. Additionally, up to three nodes (machines) can completely fail and the cluster will continue to perform, although at a lower level. As new nodes are added to the cluster, the replicas will be repopulated onto the new machines and the system will again perform as originally designed. For more information see: Scalability and resilience: clusters, nodes, and shards []

Network

The following shows the network communications used by SaltMiner, Elasticsearch, and Kibana.

Authentication, authorization, and document level security

Authentication

SaltMiner relies on Elasticsearch features to provide appropriate authentication and role management of users. This approach provides more flexibility in how authentication can be achieved and is a more vetted and reliable solution than a custom implementation. More information on Elasticsearch authentication can be found here: User Authentication []

Document level security

Another key feature of Elasticsearch that SaltMiner utilizes is document level security. This feature allows access to data to be restricted at the document level no matter how users query the database. This applies both to users of the Kibana dashboards provided by SaltMiner and to direct API access through the Elasticsearch API. By using this integrated document level security, SaltMiner can provide data access via a rich dashboard in Kibana and via direct API access for use by external systems while still maintaining appropriate levels of data security. For more information see this article on Document Level Security []
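As an illustration of how document level security is configured in Elasticsearch, the following sketch defines a role whose read access is restricted by a query. The role name, the index pattern, and the team field used in the query are assumptions for illustration, not part of SaltMiner's documented configuration:

```
POST _security/role/saltminer_reader
{
  "indices": [
    {
      "names": [ "app_vuls_*" ],
      "privileges": [ "read" ],
      "query": { "term": { "team": "app-team-a" } }
    }
  ]
}
```

A user assigned such a role would see only documents matching the query, regardless of whether they query through a SaltMiner Kibana dashboard or directly through the Elasticsearch API.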
