        West China Medical Publishers
        Author search "TIAN Pian": 2 results
        • An interpretable machine learning method for heart beat classification

          Objective: To explore the application of the Tsetlin Machine (TM) to heart beat classification. Methods: A TM was used to classify normal beats, premature ventricular contractions (PVC), and supraventricular premature beats (SPB) in the China Physiological Signal Challenge 2020 dataset, which consists of single-lead electrocardiogram recordings from 10 patients with arrhythmia. One patient with atrial fibrillation was excluded, so data from the remaining 9 patients were included in this study, and the classification results were then analyzed. Results: The average recognition accuracy of the TM was 84.3%, and the basis of classification could be shown by a bit-pattern interpretation diagram. Conclusion: The TM can explain its classification results when classifying heart beats; a reasonable interpretation of the results increases the reliability of the model and makes it easier for people to review and understand it. (An illustrative code sketch follows the result list below.)

          Release date: 2023-03-01 04:15
        • A heart sound segmentation method based on multi-feature fusion network

          Objective: To propose a heart sound segmentation method based on a multi-feature fusion network. Methods: Data were obtained from the CinC/PhysioNet 2016 Challenge dataset (3 153 recordings from 764 patients, about 91.93% of whom were male, with an average age of 30.36 years). First, features were extracted in the time domain and the time-frequency domain, and redundant features were removed by dimensionality reduction. Then, the best-performing features were selected from each of the two feature spaces. Next, multi-feature fusion was carried out through multi-scale dilated convolution, cooperative fusion, and a channel attention mechanism. Finally, the fused features were fed into a bidirectional gated recurrent unit (BiGRU) network to obtain the heart sound segmentation results. Results: The proposed method achieved a precision, recall, and F1 score of 96.70%, 96.99%, and 96.84%, respectively. Conclusion: The multi-feature fusion network proposed in this study has better heart sound segmentation performance and can provide high-accuracy segmentation support for the design of automatic heart disease analysis based on heart sounds. (An illustrative code sketch follows the result list below.)

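The first result describes classifying ECG beats with a Tsetlin Machine. Below is a minimal sketch of that kind of pipeline, assuming the open-source pyTsetlinMachine package; the synthetic data, median-threshold binarization, train/test split, and hyperparameters are illustrative assumptions and do not reproduce the authors' preprocessing or settings.

```python
# Hedged sketch of Tsetlin Machine (TM) beat classification in the style of
# the first abstract. Assumes `pip install pyTsetlinMachine`; the synthetic
# data, binarization rule, and hyperparameters are illustrative assumptions.
import numpy as np
from pyTsetlinMachine.tm import MultiClassTsetlinMachine

def binarize_beats(beats):
    """Convert real-valued beat windows to the 0/1 features a TM expects.
    Thresholding at the per-beat median is an assumption, not the paper's rule."""
    thresholds = np.median(beats, axis=1, keepdims=True)
    return (beats > thresholds).astype(np.uint32)

# Synthetic stand-in data so the sketch runs end to end. In practice these
# would be fixed-length single-lead ECG windows centred on detected beats,
# with labels 0 = normal, 1 = PVC, 2 = SPB.
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 160))
y = rng.integers(0, 3, size=600).astype(np.uint32)
X_bin = binarize_beats(X)

# Simple hold-out split; the paper's per-patient evaluation is not reproduced.
idx = rng.permutation(len(y))
split = int(0.8 * len(y))
train, test = idx[:split], idx[split:]

# Clause count, threshold T, and specificity s are generic starting values.
tm = MultiClassTsetlinMachine(2000, 50, 10.0)
tm.fit(X_bin[train], y[train], epochs=50)

accuracy = (tm.predict(X_bin[test]) == y[test]).mean()
print(f"hold-out accuracy: {accuracy:.3f}")
```

In practice the split would be done per patient, so that test beats come from recordings the model has never seen; the learned clauses can then be inspected to show which binarized samples drive each class decision.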
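The second result combines multi-scale dilated convolution, channel attention, and a BiGRU for frame-wise heart sound segmentation. The PyTorch sketch below wires those three ingredients together for four-state labelling (S1, systole, S2, diastole); the two feature branches, layer sizes, and squeeze-and-excitation style attention are assumptions for illustration, not the paper's architecture.

```python
# Hedged PyTorch sketch in the spirit of the second abstract: two feature
# branches fused via multi-scale dilated convolution and channel attention,
# then a BiGRU producing a per-frame label (S1 / systole / S2 / diastole).
# All dimensions and design details are illustrative assumptions.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (an assumption; the
    paper's exact attention mechanism is not reproduced here)."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                  # x: (batch, channels, frames)
        weights = self.fc(x.mean(dim=-1))  # global average pool over time
        return x * weights.unsqueeze(-1)

class MultiScaleDilatedBlock(nn.Module):
    """Parallel 1-D convolutions with increasing dilation, concatenated,
    so each frame sees several temporal context widths at once."""
    def __init__(self, in_ch, out_ch, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv1d(in_ch, out_ch, kernel_size=3, padding=d, dilation=d)
            for d in dilations
        ])

    def forward(self, x):
        return torch.cat([branch(x) for branch in self.branches], dim=1)

class FusionSegmenter(nn.Module):
    def __init__(self, time_feats=8, tf_feats=16, hidden=64, n_states=4):
        super().__init__()
        self.time_branch = MultiScaleDilatedBlock(time_feats, 16)
        self.tf_branch = MultiScaleDilatedBlock(tf_feats, 16)
        fused_ch = 16 * 3 * 2                       # 3 dilations x 2 branches
        self.attention = ChannelAttention(fused_ch)
        self.gru = nn.GRU(fused_ch, hidden, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden, n_states)

    def forward(self, time_x, tf_x):
        # time_x: (batch, time_feats, frames); tf_x: (batch, tf_feats, frames)
        fused = torch.cat([self.time_branch(time_x), self.tf_branch(tf_x)], dim=1)
        fused = self.attention(fused)
        seq, _ = self.gru(fused.transpose(1, 2))    # (batch, frames, 2*hidden)
        return self.classifier(seq)                 # per-frame state logits

# Smoke test with random stand-in features (not real heart sound data).
model = FusionSegmenter()
time_x = torch.randn(2, 8, 400)   # e.g. 400 frames of 8 time-domain features
tf_x = torch.randn(2, 16, 400)    # 16 time-frequency features per frame
print(model(time_x, tf_x).shape)  # torch.Size([2, 400, 4])
```

The dilated branches use padding equal to their dilation, so the frame count is unchanged through fusion and the BiGRU can emit one state logit per input frame.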